langchain-langgraph-coding-assistant

Fail

Audited by Gen Agent Trust Hub on Feb 28, 2026

Risk Level: HIGH
Tags: CREDENTIALS_UNSAFE, COMMAND_EXECUTION
Full Analysis
  • [COMMAND_EXECUTION]: The file resources/工具定义/tool_define.py includes a calculator tool that uses the Python eval() function on the expression argument. Since this tool is designed to process user-supplied strings, it creates a significant vulnerability for arbitrary code execution.
  • [CREDENTIALS_UNSAFE]: A hardcoded Google API key (AIzaSyDJi6ax5l9vc4Z_-rUGTqwNjkYVQNbqdws) is exposed in the test block of resources/大模型调用/ChatOpenAIModel_LangChian.py.
  • [COMMAND_EXECUTION]: The skill uses Python's pickle module for state management and checkpoints, as evidenced by numerous .pckl files (e.g., resources/langchang-smith-studio/.langgraph_api/.langgraph_checkpoint.1.pckl). Deserializing untrusted pickle data is a known vector for arbitrary code execution.
  • [DATA_EXPOSURE & EXFILTRATION]: While most API keys in the .env and script files are placeholders, the aforementioned Google key appears to be a real credential rather than a generic placeholder.
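To illustrate the remediation for the eval()-based calculator finding, here is a minimal sketch of a restricted arithmetic evaluator built on Python's ast module. The function name safe_calculate and the exact operator whitelist are illustrative assumptions, not taken from the skill's tool_define.py.

```python
import ast
import operator

# Whitelist of AST operator nodes the calculator is allowed to evaluate.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_calculate(expression: str) -> float:
    """Evaluate a basic arithmetic expression without eval()."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        # Anything else (names, calls, attribute access) is rejected outright.
        raise ValueError("disallowed expression element")
    return _eval(ast.parse(expression, mode="eval"))

print(safe_calculate("2 + 3 * 4"))  # 14
# safe_calculate("__import__('os').system('ls')") raises ValueError
# instead of executing code, unlike eval().
```

Because only constant numbers and whitelisted operators are walked, function calls and attribute access can never reach the evaluator, closing the code-execution path that eval() opens.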
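For the hardcoded-credential finding, the standard fix is to read the key from the environment (e.g. populated from a git-ignored .env file) rather than embedding it in source. The helper name get_google_api_key and the variable name GOOGLE_API_KEY below are assumptions for illustration.

```python
import os

def get_google_api_key() -> str:
    """Fetch the Google API key from the environment instead of source code.

    GOOGLE_API_KEY is an assumed variable name; it would typically be set
    in the shell or via a git-ignored .env file loaded at startup.
    """
    key = os.environ.get("GOOGLE_API_KEY")
    if not key:
        raise RuntimeError(
            "GOOGLE_API_KEY is not set; export it or add it to a "
            "git-ignored .env file"
        )
    return key
```

With this pattern the key never appears in version control, and the exposed key itself should still be revoked, since anything committed to history must be treated as compromised.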
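The pickle finding can be demonstrated with a self-contained, harmless snippet (not taken from the skill's code): pickle.loads() will call whatever callable an object's __reduce__ names. Here the callable is the builtin abs(), but an attacker's .pckl payload could name os.system instead.

```python
import pickle

class Payload:
    """Stand-in for an attacker-crafted object inside a .pckl checkpoint."""
    def __reduce__(self):
        # On load, pickle executes abs(-5). The loading side does not need
        # the Payload class at all -- only the bytes.
        return (abs, (-5,))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)
print(result)  # 5 -- proof that loads() invoked the callable
```

This is why checkpoints that may cross a trust boundary should use a data-only format such as JSON, or at minimum only be loaded from files the application itself wrote.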
Recommendations
  • Replace the eval()-based calculator tool with a restricted expression evaluator so user-supplied strings can never execute arbitrary code.
  • Revoke and rotate the exposed Google API key, remove it from source, and load credentials from the environment or a git-ignored .env file.
  • Do not deserialize pickle checkpoint files from untrusted sources; prefer a data-only format such as JSON for state that may cross a trust boundary.
Audit Metadata
Risk Level
HIGH
Analyzed
Feb 28, 2026, 05:07 AM