langgraph-error-handling
Warn
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: MEDIUMCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
- [Dynamic Execution] (MEDIUM): The script
scripts/classify_error.pyutilizesimportlib.import_moduleandgetattrto load exception classes dynamically based on command-line arguments. An attacker could potentially cause the script to import any module available in the environment by providing a crafted string (e.g.,os:system), although the script's subsequentissubclasscheck provides some mitigation against direct execution of non-exception types. - [Indirect Prompt Injection] (LOW): The graph implementation patterns ingest raw error strings from tool/search outputs directly into the graph state and subsequent LLM prompts. This creates an attack surface where a malicious external service or data source could return an error message containing instructions that influence the agent's behavior.
- Ingestion points:
state.get("error")in theagentnode ofassets/examples/retry-example/python/graph.pyandlatestUserMessageinassets/examples/human-loop-example/js/index.js. - Boundary markers: None identified; untrusted error strings are interpolated directly into messages.
- Capability inventory: Graph nodes include capabilities for network search (simulated) and sensitive action execution (e.g.,
delete_records). - Sanitization: No sanitization or validation of the error messages or user-provided content is performed before processing.
Audit Metadata