langgraph-functional

Fail

Audited by Gen Agent Trust Hub on Feb 16, 2026

Risk Level: HIGH
Full Analysis
  • SAFE: The skill consists of informational markdown files documenting the functional API of the LangGraph framework. They serve as a developer resource and contain no executable malicious scripts or instructions.
  • The Python code examples demonstrate standard, recommended usage of the library, focusing on handling non-determinism and side effects in a secure and predictable manner.
  • The documentation promotes the interrupt() function so that sensitive operations (such as sending emails or database writes) require explicit human approval, a key security control.
  • Indirect Prompt Injection (INFO): The documentation identifies and addresses the attack surface associated with processing external data.
      • Ingestion points: Examples show data being ingested from user inputs and external URLs (e.g., fetch_external_data(url)).
      • Boundary markers: The code snippets do not show explicit boundary delimiters, but the architectural pattern of tasks and entrypoints provides a structured execution environment.
      • Capability inventory: The hypothetical examples involve capabilities such as database updates and email notifications.
      • Sanitization: The skill directly mitigates injection risk by teaching developers to use interrupt() for human verification before executing any action derived from external content.
  • EXTERNAL_DOWNLOADS (INFO): An automated scanner flagged the URL 'tool.in' as malicious, but a thorough review of the provided files shows that this URL does not appear in the text. The finding is likely a false positive, or it refers to external metadata not included in the skill body.
Recommendations
  • Automated scanner flagged 1 malicious URL ('tool.in') - DO NOT USE
Audit Metadata
Risk Level: HIGH
Analyzed: Feb 16, 2026, 01:10 AM