langchain-architecture

Pass

Audited by Gen Agent Trust Hub on Mar 8, 2026

Risk Level: SAFEEXTERNAL_DOWNLOADSCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
  • [EXTERNAL_DOWNLOADS]: The skill utilizes integrations with well-known technology providers including Anthropic, Pinecone, Voyage AI, and LangSmith. It also incorporates persistence layers using PostgreSQL and Redis.
  • [COMMAND_EXECUTION]: Implements a tool for mathematical calculations using the Python ast module to safely evaluate expressions, which effectively mitigates the risk of arbitrary code execution often associated with native evaluation functions.
  • [PROMPT_INJECTION]: This skill presents an indirect prompt injection surface as it ingests untrusted data through state variables like 'question' and 'text' within its workflow graphs. Mandatory Evidence: (1) Ingestion points: 'RAGState' and 'WorkflowState' fields in 'SKILL.md'. (2) Boundary markers: Prompt templates use triple-quotes and labels to separate context from user input. (3) Capability inventory: Database searching, email sending, and mathematical calculation tools. (4) Sanitization: The mathematical tool uses AST parsing for input validation; other string inputs are handled via structured prompt templates.
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 8, 2026, 03:30 PM