langchain
Warn
Audited by Gen Agent Trust Hub on Mar 3, 2026
Risk Level: MEDIUMCOMMAND_EXECUTIONPROMPT_INJECTIONEXTERNAL_DOWNLOADS
Full Analysis
- [COMMAND_EXECUTION]: The skill defines a calculator tool using the Python eval() function on string input.
- Evidence: Section 8 (Agents) contains a tool definition
@tool def calculator(expression: str): return str(eval(expression)). - Risk: This allows for arbitrary code execution if an agent is manipulated into passing malicious code into the tool via prompt injection.
- [PROMPT_INJECTION]: The skill demonstrates patterns for Retrieval-Augmented Generation (RAG) and autonomous agents that are susceptible to indirect prompt injection from untrusted external data.
- Ingestion points:
WebBaseLoader(fetching web content) andPyPDFLoader(loading local documents) in Section 6. - Boundary markers: The prompt templates use basic interpolation (e.g.,
Context: {context}) without robust delimiters or explicit instructions to ignore embedded commands. - Capability inventory: The skill showcases agents with tool-use capabilities, including the unsafe calculator tool.
- Sanitization: No evidence of input validation, sanitization, or content filtering is provided in the guide's code snippets.
- [EXTERNAL_DOWNLOADS]: The skill instructs users to install various packages from the LangChain ecosystem.
- Evidence:
pip install langchain,langchain-community,langchain-openai, etc. - Context: These are well-known, official libraries for LLM application development and are considered safe sources.
Audit Metadata