llm-app-development
Pass
Audited by Gen Agent Trust Hub on Mar 5, 2026
Risk Level: SAFEPROMPT_INJECTION
Full Analysis
- [PROMPT_INJECTION]: The provided code patterns for RAG and AI agents are susceptible to indirect prompt injection.
- Ingestion points: Untrusted inputs such as
queryin the RAG pipeline (Pattern 1) andreviewsin the Structured Output example (Pattern 2) are directly interpolated into LLM prompts. - Boundary markers: While triple dashes are used as delimiters in the prompts, there are no explicit instructions for the model to ignore commands potentially embedded within the user-provided context.
- Capability inventory: The 'AI Agents' pattern (Pattern 3) demonstrates a
create_support_tickettool with write capabilities that could be triggered maliciously if the model obeys instructions hidden in the processed data. - Sanitization: The examples do not demonstrate sanitization or validation of input strings before they are interpolated into the prompt templates.
Audit Metadata