llm-patterns
Status: Pass
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: SAFE
Category: PROMPT_INJECTION
Full Analysis
- Indirect Prompt Injection (LOW): The skill demonstrates a pattern where untrusted user input is interpolated directly into an LLM prompt without sanitization or boundary delimiters, creating a surface for indirect prompt injection attacks.
- Ingestion points: The `ticket` parameter in the `classifyTicketPrompt` function within `SKILL.md`.
- Boundary markers: Absent; the input is appended directly to the end of the prompt instructions.
- Capability inventory: The skill demonstrates network capabilities via the `@anthropic-ai/sdk` to make external LLM calls.
- Sanitization: Input sanitization is absent; however, the skill correctly demonstrates output validation using the Zod library to ensure the integrity of LLM responses.
Audit Metadata