langchain-langgraph-coding-assistant
Audit result: Fail
Audited by Gen Agent Trust Hub on Feb 28, 2026
Risk Level: HIGH
Tags: CREDENTIALS_UNSAFE, COMMAND_EXECUTION
Full Analysis
- [COMMAND_EXECUTION]: The file `resources/工具定义/tool_define.py` includes a `calculator` tool that applies Python's `eval()` to the `expression` argument. Since this tool is designed to process user-supplied strings, it creates a significant vulnerability for arbitrary code execution.
- [CREDENTIALS_UNSAFE]: A hardcoded Google API key (`AIzaSyDJi6ax5l9vc4Z_-rUGTqwNjkYVQNbqdws`) is exposed in the test block of `resources/大模型调用/ChatOpenAIModel_LangChian.py`.
- [COMMAND_EXECUTION]: The skill uses Python's `pickle` module for state management and checkpoints, as evidenced by numerous `.pckl` files (e.g., `resources/langchang-smith-studio/.langgraph_api/.langgraph_checkpoint.1.pckl`). Deserializing untrusted pickle data is a known vector for arbitrary code execution.
- [DATA_EXPOSURE & EXFILTRATION]: While most API keys in the `.env` and script files are placeholders, the aforementioned Google key appears to be a real credential rather than a generic placeholder.
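The `eval()` finding above can be remediated without losing the calculator feature. The sketch below is a minimal, hypothetical replacement (the function name `safe_calculate` and the exact operator set are assumptions, not taken from the audited skill): it parses the expression with Python's `ast` module and evaluates only whitelisted arithmetic nodes, so strings like `__import__('os')` are rejected instead of executed.

```python
import ast
import operator

# Whitelist of AST operator node types mapped to their implementations.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_calculate(expression: str) -> float:
    """Evaluate a purely arithmetic expression; reject everything else."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        # Calls, attribute access, names, etc. never reach eval-style execution.
        raise ValueError(f"Disallowed expression element: {type(node).__name__}")
    return _eval(ast.parse(expression, mode="eval"))
```

With this approach, `safe_calculate("2+3*4")` returns `14`, while any expression containing a function call or name lookup raises `ValueError` before anything runs.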
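For the exposed credential, the standard fix is to revoke the key and load its replacement from the environment rather than the source tree. A minimal sketch, assuming a `GOOGLE_API_KEY` environment variable (the variable and function names are illustrative, not from the audited code):

```python
import os

def load_google_api_key() -> str:
    """Read the API key from the environment instead of hardcoding it.

    The variable can be populated from a git-ignored .env file or the
    deployment platform's secret store.
    """
    key = os.environ.get("GOOGLE_API_KEY")
    if not key:
        raise RuntimeError("GOOGLE_API_KEY is not set; refusing to start.")
    return key
```

Failing fast when the variable is missing avoids silently falling back to an embedded default, which is how hardcoded keys tend to reappear.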
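The pickle finding is worth illustrating, because the risk is not obvious from the file listing alone: unpickling runs whatever the payload's `__reduce__` hook specifies. The harmless demonstration below (a contrived `Gadget` class, not code from the audited skill) uses `print` where an attacker would put `os.system` or similar:

```python
import pickle

class Gadget:
    # __reduce__ tells pickle how to rebuild the object; an attacker
    # controls this, so loading the bytes executes their chosen callable.
    def __reduce__(self):
        return (print, ("code executed during unpickling",))

payload = pickle.dumps(Gadget())
pickle.loads(payload)  # prints as a side effect of deserialization
```

This is why checkpoint files like `.langgraph_checkpoint.1.pckl` must never be loaded from untrusted or tamperable locations; a safer design serializes checkpoints to a data-only format such as JSON.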
Recommendations
- Replace the `eval()` call in the `calculator` tool with a restricted arithmetic-expression parser.
- Revoke and rotate the exposed Google API key, and load credentials from the environment or a secret store instead of source files.
- Avoid `pickle` for checkpoints and state, or ensure checkpoint files can never be supplied or modified by untrusted parties.
Audit Metadata