llm-patterns

Pass

Audited by Gen Agent Trust Hub on Apr 9, 2026

Risk Level: SAFEPROMPT_INJECTION
Full Analysis
  • [PROMPT_INJECTION]: The classifyTicketPrompt function in SKILL.md demonstrates an unsafe pattern for prompt construction where untrusted user input is directly interpolated into the prompt string.\n
  • Ingestion points: The ticket parameter in classifyTicketPrompt (SKILL.md).\n
  • Boundary markers: Absent. The prompt template does not use delimiters (like XML tags or triple quotes) to isolate the user input.\n
  • Capability inventory: The skill includes examples using an Anthropic client to execute these prompts via client.messages.create (SKILL.md).\n
  • Sanitization: Absent. There is no evidence of input validation, escaping, or filtering before the input is used in the prompt template.
Audit Metadata
Risk Level
SAFE
Analyzed
Apr 9, 2026, 07:27 PM