llm-application-dev

Pass

Audited by Gen Agent Trust Hub on Mar 31, 2026

Risk Level: SAFEPROMPT_INJECTION
Full Analysis
  • [PROMPT_INJECTION]: The skill demonstrates RAG (Retrieval-Augmented Generation) and prompt engineering patterns that create a surface for indirect prompt injection.
  • Ingestion points: The ragQuery function and prompt templates in SKILL.md take untrusted input from variables like question (user input) and context (retrieved from a vector database).
  • Boundary markers: The code snippets lack robust delimiters (such as XML tags or unique string markers) or instructions to the model to ignore embedded instructions within the interpolated variables.
  • Capability inventory: The skill includes integration patterns for the OpenAI and Anthropic SDKs, which facilitate active interaction with LLMs.
  • Sanitization: No sanitization, escaping, or validation logic is present in the provided examples to mitigate the risk of a user or a retrieved document overriding the application's intended logic.
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 31, 2026, 04:39 PM