LangChain RAG Pipeline
Warn
Audited by Gen Agent Trust Hub on Mar 3, 2026
Risk Level: MEDIUMCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
- [COMMAND_EXECUTION]: The skill encourages bypassing a security check that prevents arbitrary code execution during data loading.
- Evidence: In the section of SKILL.md, the author provides a example that sets allow_dangerous_deserialization=True when calling FAISS.load_local. This flag enables the use of Python's pickle module, which is known to be insecure and can execute arbitrary code if the index file being loaded is malicious or has been tampered with.
- [PROMPT_INJECTION]: The RAG pipeline architecture described in the skill is vulnerable to indirect prompt injection.
- Ingestion points: The skill utilizes WebBaseLoader, PyPDFLoader, and DirectoryLoader in SKILL.md to ingest content from external URLs, PDFs, and local files.
- Boundary markers: External content is directly interpolated into the LLM system prompt using a f-string template without explicit delimiters or instructions to treat the context as untrusted data.
- Capability inventory: The agent context includes capabilities for network requests (WebBaseLoader) and file system persistence through vector store integrations (Chroma, FAISS).
- Sanitization: There is no evidence of sanitization, filtering, or validation of the retrieved documents before they are presented to the language model.
Audit Metadata