ai-engineer
Fail
Audited by Gen Agent Trust Hub on Feb 16, 2026
Risk Level: HIGHPROMPT_INJECTION
Full Analysis
- PROMPT_INJECTION (HIGH): The skill is vulnerable to Indirect Prompt Injection through unsafe interpolation of untrusted data in multiple functions.
- Ingestion points: The
textparameter inextract_structuredand thecontext(retrieved from a database) andquestionparameters inrag_queryinSKILL.md. - Boundary markers: Absent. The prompts use simple string interpolation (e.g.,
Text: {text}) without delimiters like XML tags or explicit instructions for the AI to ignore embedded commands within the data. - Capability inventory: The functions return text and structured JSON intended to drive agent reasoning or tool use. If used in a multi-step pipeline, an injection could manipulate the agent's logic or trigger unauthorized actions.
- Sanitization: Absent. No validation, escaping, or filtering is performed on the input strings before they are injected into the system prompts.
Recommendations
- AI detected serious security threats
Audit Metadata