dive-into-langgraph
Verdict: Pass
Audited by Gen Agent Trust Hub on Mar 7, 2026
Risk Level: SAFE · EXTERNAL_DOWNLOADS · COMMAND_EXECUTION · PROMPT_INJECTION
Full Analysis
- [SAFE]: The skill serves as a comprehensive educational guide for building AI agents with the LangGraph framework and does not contain any malicious logic.
- [SAFE]: Implements a robust `SafeEvaluator` in `scripts/tools/tool_math.py` that utilizes Python's `ast` module and a whitelist-based visitor pattern to safely calculate mathematical expressions, preventing arbitrary code execution.
- [EXTERNAL_DOWNLOADS]: Fetches documentation and tutorial content from the author's official blog (`luochang212.github.io`) and from well-known services like Tavily and DuckDuckGo for search functionality.
- [COMMAND_EXECUTION]: Demonstrates how to run local Model Context Protocol (MCP) servers using `MultiServerMCPClient` and `supervisord`, which involves spawning local subprocesses for tool integration as part of the tutorial workflow.
- [PROMPT_INJECTION]: Provides explicit examples of creating defensive middleware to protect against prompt injection and sensitive data exposure, including keyword filtering and PII detection logic.
- [SAFE]: Recognizes an indirect prompt injection surface in the RAG and search tutorials (`references/10.rag.md`, `references/11.web_search.md`). Data is ingested via `WebBaseLoader` and search APIs, with boundary markers established using metadata labels. The capability inventory includes tool calling through MCP. Sanitization is addressed as a theoretical requirement within the educational context of the skill.
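
The whitelist-based visitor pattern noted in the `SafeEvaluator` finding can be sketched as follows. This is a minimal illustrative version, not the skill's actual code in `scripts/tools/tool_math.py`; the real class may support a different set of operators.

```python
import ast
import operator

# Operators explicitly permitted; everything else is rejected.
_ALLOWED_BINOPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
}
_ALLOWED_UNARYOPS = {ast.UAdd: operator.pos, ast.USub: operator.neg}


class SafeEvaluator(ast.NodeVisitor):
    """Evaluate arithmetic expressions without eval()'s code-execution risk."""

    def evaluate(self, expression: str) -> float:
        tree = ast.parse(expression, mode="eval")
        return self.visit(tree.body)

    def visit_Constant(self, node):
        if not isinstance(node.value, (int, float)):
            raise ValueError("only numeric constants are allowed")
        return node.value

    def visit_BinOp(self, node):
        op = _ALLOWED_BINOPS.get(type(node.op))
        if op is None:
            raise ValueError(f"operator not whitelisted: {type(node.op).__name__}")
        return op(self.visit(node.left), self.visit(node.right))

    def visit_UnaryOp(self, node):
        op = _ALLOWED_UNARYOPS.get(type(node.op))
        if op is None:
            raise ValueError(f"operator not whitelisted: {type(node.op).__name__}")
        return op(self.visit(node.operand))

    def generic_visit(self, node):
        # Any node outside the whitelist (names, calls, attribute access)
        # is rejected outright; this is what blocks arbitrary code execution.
        raise ValueError(f"disallowed syntax: {type(node).__name__}")
```

Because dangerous constructs such as `__import__('os')` parse into `Call` and `Name` nodes that fall through to `generic_visit`, they raise an error instead of executing.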
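
The defensive middleware noted in the PROMPT_INJECTION finding, combining keyword filtering with PII detection, could look roughly like this. The function name, keyword list, and regex patterns below are assumptions for illustration, not the skill's actual middleware.

```python
import re

# Illustrative injection phrases to block (the real skill's list may differ).
INJECTION_KEYWORDS = (
    "ignore previous instructions",
    "disregard the system prompt",
)

# Simple PII patterns; production-grade detection would be more thorough.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}


def screen_tool_output(text: str) -> str:
    """Reject likely injection attempts and redact PII from tool output."""
    lowered = text.lower()
    for keyword in INJECTION_KEYWORDS:
        if keyword in lowered:
            raise ValueError(f"possible prompt injection detected: {keyword!r}")
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text
```

Run as a middleware step between a tool's raw output and the model's context, this screens untrusted content (e.g. web search results) before the agent reads it.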
Audit Metadata