llm-app-patterns
Pass
Audited by Gen Agent Trust Hub on Feb 28, 2026
Risk Level: SAFEPROMPT_INJECTIONCOMMAND_EXECUTIONEXTERNAL_DOWNLOADS
Full Analysis
- [PROMPT_INJECTION]: The skill's architectural patterns for RAG and Agents introduce a surface for indirect prompt injection.\n
- Ingestion points: In
SKILL.md, thegenerate_with_ragfunction interpolates retrieved documents into a prompt, and theReActAgentfeeds tool observations back into the LLM prompt.\n - Boundary markers: The prompt templates use basic headers like
Context:orObservation:but do not provide instructions to the LLM to ignore instructions contained within that external data.\n - Capability inventory: The agent patterns describe tool execution capabilities which often include network access or system operations in real applications.\n
- Sanitization: The provided code snippets do not include logic for sanitizing or validating the content of the retrieved context or tool results before prompt interpolation.\n- [COMMAND_EXECUTION]: The skill outlines logic for autonomous agents that execute tools based on LLM output.\n
- Evidence: The
ReActAgentclass inSKILL.mdcontains an execution loop where actions parsed from the LLM response are passed to_execute_tool.\n- [EXTERNAL_DOWNLOADS]: References external resources and documentation from established and trusted sources.\n - Evidence: Links to the official Anthropic Cookbook repository and references the Dify platform and libraries like LangChain.
Audit Metadata