llm-app-development

Pass

Audited by Gen Agent Trust Hub on Mar 5, 2026

Risk Level: SAFEPROMPT_INJECTION
Full Analysis
  • [PROMPT_INJECTION]: The provided code patterns for RAG and AI agents are susceptible to indirect prompt injection.
  • Ingestion points: Untrusted inputs such as query in the RAG pipeline (Pattern 1) and reviews in the Structured Output example (Pattern 2) are directly interpolated into LLM prompts.
  • Boundary markers: While triple dashes are used as delimiters in the prompts, there are no explicit instructions for the model to ignore commands potentially embedded within the user-provided context.
  • Capability inventory: The 'AI Agents' pattern (Pattern 3) demonstrates a create_support_ticket tool with write capabilities that could be triggered maliciously if the model obeys instructions hidden in the processed data.
  • Sanitization: The examples do not demonstrate sanitization or validation of input strings before they are interpolated into the prompt templates.
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 5, 2026, 02:40 PM