langgraph-error-handling

Warn

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: MEDIUMCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
  • [Dynamic Execution] (MEDIUM): The script scripts/classify_error.py utilizes importlib.import_module and getattr to load exception classes dynamically based on command-line arguments. An attacker could potentially cause the script to import any module available in the environment by providing a crafted string (e.g., os:system), although the script's subsequent issubclass check provides some mitigation against direct execution of non-exception types.
  • [Indirect Prompt Injection] (LOW): The graph implementation patterns ingest raw error strings from tool/search outputs directly into the graph state and subsequent LLM prompts. This creates an attack surface where a malicious external service or data source could return an error message containing instructions that influence the agent's behavior.
  • Ingestion points: state.get("error") in the agent node of assets/examples/retry-example/python/graph.py and latestUserMessage in assets/examples/human-loop-example/js/index.js.
  • Boundary markers: None identified; untrusted error strings are interpolated directly into messages.
  • Capability inventory: Graph nodes include capabilities for network search (simulated) and sensitive action execution (e.g., delete_records).
  • Sanitization: No sanitization or validation of the error messages or user-provided content is performed before processing.
Audit Metadata
Risk Level
MEDIUM
Analyzed
Feb 17, 2026, 06:42 PM