llm-application-dev

Pass

Audited by Gen Agent Trust Hub on Mar 13, 2026

Risk Level: SAFEPROMPT_INJECTION
Full Analysis
  • [PROMPT_INJECTION]: The skill demonstrates prompt construction patterns that directly interpolate untrusted data into system and user prompts without the use of boundary markers or sanitization logic.
  • Ingestion Points: Data is ingested via the context, question, and customerFeedback variables in SKILL.md.
  • Boundary Markers: Example prompts (e.g., Answer based on this context:\n${context}) lack delimiters or instructions to ignore embedded commands, increasing the risk of the model following instructions contained within the data.
  • Capability Inventory: The patterns interact with external LLM providers (OpenAI, Anthropic) which could be manipulated to leak context or bypass intended application logic.
  • Sanitization: No input validation or escaping mechanisms are demonstrated in the provided code snippets.
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 13, 2026, 10:28 AM