llm-application-dev

Pass

Audited by Gen Agent Trust Hub on Feb 25, 2026

Risk Level: SAFEPROMPT_INJECTION
Full Analysis
  • [PROMPT_INJECTION]: The skill demonstrates patterns that are vulnerable to indirect prompt injection.
  • Ingestion points: Untrusted data is interpolated into prompts via 'customerFeedback', 'question', and 'context' variables in SKILL.md.
  • Boundary markers: The prompt templates lack delimiters or instructions to isolate data from instructions.
  • Capability inventory: The code integrates with OpenAI and Anthropic APIs.
  • Sanitization: No input validation or escaping is present in the provided snippets.
Audit Metadata
Risk Level
SAFE
Analyzed
Feb 25, 2026, 08:20 PM