llm-patterns

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFE

PROMPT_INJECTION
Full Analysis
  • Indirect Prompt Injection (LOW): The skill demonstrates a pattern in which untrusted user input is interpolated directly into an LLM prompt without sanitization or boundary delimiters, creating a surface for indirect prompt injection attacks.
  • Ingestion points: the ticket parameter of the classifyTicketPrompt function in SKILL.md.
  • Boundary markers: absent; the input is appended directly to the end of the prompt instructions.
  • Capability inventory: the skill has network capability, making external LLM calls via @anthropic-ai/sdk.
  • Sanitization: input sanitization is absent; however, the skill does validate LLM responses with the Zod library to ensure the integrity of output data.
Audit Metadata
Risk Level
SAFE
Analyzed
Feb 17, 2026, 05:11 PM