llm-application-dev

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFEPROMPT_INJECTION
Full Analysis
  • PROMPT_INJECTION (LOW): The provided code snippets demonstrate insecure prompt construction patterns, creating a surface for Indirect Prompt Injection.
  • Ingestion points: Variables such as 'question', 'customerFeedback', and 'context' (derived from vector database searches) are interpolated directly into prompt strings in SKILL.md.
  • Boundary markers: The snippets use labels like 'CONTEXT:' and 'Question:' but lack robust delimiters or system instructions to ignore potential commands embedded in the data.
  • Capability inventory: The code facilitates network requests to external LLM providers (OpenAI and Anthropic).
  • Sanitization: There is no evidence of sanitization, escaping, or validation of the untrusted inputs before they are incorporated into the prompts.
  • CREDENTIALS_UNSAFE (SAFE): The skill correctly uses environment variables for authentication rather than hardcoding sensitive credentials.
Audit Metadata
Risk Level
SAFE
Analyzed
Feb 17, 2026, 05:03 PM