llm-app-patterns

Pass

Audited by Gen Agent Trust Hub on Feb 28, 2026

Risk Level: SAFEPROMPT_INJECTIONCOMMAND_EXECUTIONEXTERNAL_DOWNLOADS
Full Analysis
  • [PROMPT_INJECTION]: The skill's architectural patterns for RAG and Agents introduce a surface for indirect prompt injection.\n
  • Ingestion points: In SKILL.md, the generate_with_rag function interpolates retrieved documents into a prompt, and the ReActAgent feeds tool observations back into the LLM prompt.\n
  • Boundary markers: The prompt templates use basic headers like Context: or Observation: but do not provide instructions to the LLM to ignore instructions contained within that external data.\n
  • Capability inventory: The agent patterns describe tool execution capabilities which often include network access or system operations in real applications.\n
  • Sanitization: The provided code snippets do not include logic for sanitizing or validating the content of the retrieved context or tool results before prompt interpolation.\n- [COMMAND_EXECUTION]: The skill outlines logic for autonomous agents that execute tools based on LLM output.\n
  • Evidence: The ReActAgent class in SKILL.md contains an execution loop where actions parsed from the LLM response are passed to _execute_tool.\n- [EXTERNAL_DOWNLOADS]: References external resources and documentation from established and trusted sources.\n
  • Evidence: Links to the official Anthropic Cookbook repository and references the Dify platform and libraries like LangChain.
Audit Metadata
Risk Level
SAFE
Analyzed
Feb 28, 2026, 02:35 AM