LangChain Fundamentals

Warn

Audited by Gen Agent Trust Hub on Mar 3, 2026

Risk Level: MEDIUMCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
  • [COMMAND_EXECUTION]: The example calculate tool provided in both the Python and TypeScript code blocks utilizes the eval() function to process the expression input. This is a dangerous coding pattern in AI agents because if an attacker provides a malicious string instead of a mathematical expression, the system will execute it as code.
  • [PROMPT_INJECTION]: The skill documents the creation of agents that ingest untrusted data from user messages (via agent.invoke) without demonstrating best practices for input sanitization or the use of protective boundary markers. This creates an attack surface for indirect prompt injection where malicious instructions embedded in user data could influence the agent's behavior.
Audit Metadata
Risk Level
MEDIUM
Analyzed
Mar 3, 2026, 12:03 AM