dive-into-langgraph

Pass

Audited by Gen Agent Trust Hub on Mar 7, 2026

Risk Level: SAFEEXTERNAL_DOWNLOADSCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
  • [SAFE]: The skill serves as a comprehensive educational guide for building AI agents with the LangGraph framework and does not contain any malicious logic.
  • [SAFE]: Implements a robust SafeEvaluator in scripts/tools/tool_math.py that utilizes Python's ast module and a whitelist-based visitor pattern to safely calculate mathematical expressions, preventing arbitrary code execution.
  • [EXTERNAL_DOWNLOADS]: Fetches documentation and tutorial content from the author's official blog (luochang212.github.io) and well-known services like Tavily and DuckDuckGo for search functionality.
  • [COMMAND_EXECUTION]: Demonstrates how to run local Model Context Protocol (MCP) servers using MultiServerMCPClient and supervisord, which involves spawning local subprocesses for tool integration as part of the tutorial workflow.
  • [PROMPT_INJECTION]: Provides explicit examples of creating defensive middleware to protect against prompt injection and sensitive data exposure, including keyword filtering and PII detection logic.
  • [SAFE]: Recognizes an indirect prompt injection surface in the RAG and search tutorials (references/10.rag.md, references/11.web_search.md). Data is ingested via WebBaseLoader and search APIs, with boundary markers established using metadata labels. The capability inventory includes tool calling through MCP. Sanitization is addressed as a theoretical requirement within the educational context of the skill.
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 7, 2026, 06:34 PM