NYC

llm-application-dev

Fail

Audited by Gen Agent Trust Hub on Feb 16, 2026

Risk Level: HIGHPROMPT_INJECTION
Full Analysis
  • Indirect Prompt Injection (HIGH): The skill provides templates that ingest untrusted data into LLM prompts.
  • Ingestion points: External content enters via the question variable in ragQuery and customerFeedback in the few-shot classification example within SKILL.md.
  • Boundary markers: Absent. Prompts are constructed using direct template literals (e.g., ${context}) without delimiters, XML tags, or system instructions to ignore embedded instructions.
  • Capability inventory: The code includes functional snippets using openai and @anthropic-ai/sdk to perform network operations and process these unsafe prompts.
  • Sanitization: No sanitization, escaping, or schema validation logic is implemented for the data before it is interpolated into the prompts.
Recommendations
  • AI detected serious security threats
Audit Metadata
Risk Level
HIGH
Analyzed
Feb 16, 2026, 08:54 AM