langgraph-agent

Warn

Audited by Gen Agent Trust Hub on Feb 25, 2026

Risk Level: MEDIUMCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
  • [COMMAND_EXECUTION]: The calculator tool in examples/02_react_agent.py utilizes the Python eval() function to process user-supplied mathematical expressions. Because the input to this function is derived from LLM responses that process untrusted user messages, an attacker can craft a prompt to execute arbitrary code on the host system.
  • [PROMPT_INJECTION]: The skill contains an attack surface for indirect prompt injection by combining untrusted data ingestion with dangerous tool capabilities. (1) Ingestion points: The messages list in examples/02_react_agent.py receives raw user input. (2) Boundary markers: There are no delimiters or instructions provided to separate user-controlled data from system instructions or tool execution triggers. (3) Capability inventory: The agent has access to the eval()-based calculator tool in the same example file. (4) Sanitization: The skill does not perform any validation, escaping, or filtering of the expression string before passing it to the eval() function.
Audit Metadata
Risk Level
MEDIUM
Analyzed
Feb 25, 2026, 02:03 AM