llm-application-dev

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFE
Full Analysis
  • SAFE (SAFE): The skill consists of documentation and TypeScript code examples for integrating with AI APIs and implementing RAG patterns. No malicious behavior was detected.
  • Indirect Prompt Injection (LOW): The skill demonstrates prompt interpolation of untrusted inputs (e.g., {context}, {question}, ${customerFeedback}) in SKILL.md. This highlights a known attack surface for applications built with these templates, but the skill itself is passive documentation.
  • Ingestion points: Variables question, context, and customerFeedback within the code snippets in SKILL.md.
  • Boundary markers: Uses header labels (e.g., RULES:, CONTEXT:) but lacks robust escaping or delimited blocks.
  • Capability inventory: The templates involve network requests to external LLM providers (OpenAI, Anthropic) and database interactions (Supabase).
  • Sanitization: No sanitization logic is present in the boilerplate snippets, though the "Best Practices" section explicitly recommends implementing guardrails.
Audit Metadata
Risk Level
SAFE
Analyzed
Feb 17, 2026, 06:37 PM