llm-application-dev
Pass
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: SAFE
Full Analysis
- SAFE (SAFE): The skill consists of documentation and TypeScript code examples for integrating with AI APIs and implementing RAG patterns. No malicious behavior was detected.
- Indirect Prompt Injection (LOW): The skill demonstrates prompt interpolation of untrusted inputs (e.g.,
{context},{question},${customerFeedback}) inSKILL.md. This highlights a known attack surface for applications built with these templates, but the skill itself is passive documentation. - Ingestion points: Variables
question,context, andcustomerFeedbackwithin the code snippets inSKILL.md. - Boundary markers: Uses header labels (e.g.,
RULES:,CONTEXT:) but lacks robust escaping or delimited blocks. - Capability inventory: The templates involve network requests to external LLM providers (OpenAI, Anthropic) and database interactions (Supabase).
- Sanitization: No sanitization logic is present in the boilerplate snippets, though the "Best Practices" section explicitly recommends implementing guardrails.
Audit Metadata