langgraph
Warn
Audited by Gen Agent Trust Hub on Feb 28, 2026
Risk Level: MEDIUM
Flags: COMMAND_EXECUTION, REMOTE_CODE_EXECUTION, EXTERNAL_DOWNLOADS, PROMPT_INJECTION
Full Analysis
- [COMMAND_EXECUTION]: The 'Basic Agent Graph' code example defines a 'calculator' tool that calls the Python 'eval()' function on the 'expression' argument. This is a high-risk pattern: 'eval()' executes any string passed to it as Python code, so an attacker who can influence the input can run arbitrary code, including system commands.
- [REMOTE_CODE_EXECUTION]: By combining an LLM-driven agent with a tool that uses 'eval()', the skill creates a path for remote code execution. If an attacker provides a prompt that causes the LLM to generate a malicious string for the calculator tool, that string will be executed by the Python interpreter on the host system.
- [PROMPT_INJECTION]: The skill is susceptible to indirect prompt injection because it processes untrusted data (messages) and routes them to a tool with execution capabilities without proper sanitization.
  - Ingestion points: The 'agent' node processes 'state["messages"]', which typically includes external user input.
  - Boundary markers: None are present in the example code to distinguish between safe data and instructions.
  - Capability inventory: The 'calculator' tool uses 'eval()'.
  - Sanitization: No input validation or sanitization is performed on the 'expression' variable before execution.
- [EXTERNAL_DOWNLOADS]: The skill references standard Python packages including 'langgraph', 'langchain-openai', and 'langchain-core'. These are official packages from LangChain AI, a well-known organization, and are considered safe dependencies.
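The flagged pattern, and one possible mitigation, can be sketched as follows. This is our illustration, not code from the audited skill: the function names ('calculator_unsafe', 'safe_eval') and the whitelisted-AST approach are assumptions, shown stdlib-only so the tool logic is isolated from the LangGraph wiring.

```python
import ast
import operator

def calculator_unsafe(expression: str) -> str:
    """The pattern flagged in the audit: eval() runs arbitrary Python,
    so an LLM-generated string like '__import__("os").system(...)'
    would execute on the host. DO NOT ship this."""
    return str(eval(expression))

# Mitigation sketch: parse the expression and walk the AST, permitting
# only numeric literals and a fixed set of arithmetic operators.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval(expression: str) -> float:
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        # Calls, attribute access, names, subscripts, etc. are rejected,
        # which blocks the injection paths described above.
        raise ValueError(f"disallowed expression node: {type(node).__name__}")
    return _eval(ast.parse(expression, mode="eval"))
```

With this in place, 'safe_eval("2 + 3 * 4")' returns 14, while a payload such as '__import__("os").system(...)' is rejected with a ValueError instead of being executed. Note that even a whitelist like this should cap input length and exponent size in production to avoid resource-exhaustion inputs.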
Audit Metadata