llm-application-dev
Fail
Audited by Gen Agent Trust Hub on Feb 16, 2026
Risk Level: HIGHPROMPT_INJECTION
Full Analysis
- Indirect Prompt Injection (HIGH): The skill provides templates that ingest untrusted data into LLM prompts.
- Ingestion points: External content enters via the
questionvariable inragQueryandcustomerFeedbackin the few-shot classification example withinSKILL.md. - Boundary markers: Absent. Prompts are constructed using direct template literals (e.g.,
${context}) without delimiters, XML tags, or system instructions to ignore embedded instructions. - Capability inventory: The code includes functional snippets using
openaiand@anthropic-ai/sdkto perform network operations and process these unsafe prompts. - Sanitization: No sanitization, escaping, or schema validation logic is implemented for the data before it is interpolated into the prompts.
Recommendations
- AI detected serious security threats
Audit Metadata