langchain-rag

Pass

Audited by Gen Agent Trust Hub on Mar 10, 2026

Risk Level: SAFEREMOTE_CODE_EXECUTIONEXTERNAL_DOWNLOADSPROMPT_INJECTION
Full Analysis
  • [Unsafe Deserialization Pattern]: The skill demonstrates loading local FAISS indexes using a configuration that allows dangerous deserialization.
  • Evidence: FAISS.load_local("./faiss_index", embeddings, allow_dangerous_deserialization=True) in SKILL.md.
  • Context: This flag is required by the LangChain FAISS implementation to load indexes from disk as it utilizes Python's pickle module. This presents a security consideration if the index file originates from an untrusted source, as it could lead to the execution of arbitrary code during the loading process.
  • [Indirect Prompt Injection Surface]: The skill outlines a RAG pipeline that ingests content from external sources (Web URLs, PDFs, local directories) and interpolates it directly into LLM prompts.
  • Ingestion points: WebBaseLoader, PyPDFLoader, and DirectoryLoader in SKILL.md.
  • Boundary markers: The provided examples do not use explicit delimiters or "ignore instructions" markers when joining retrieved document content into the prompt.
  • Capability inventory: The skill demonstrates an agent using a search_docs tool that fetches content from the vector store and returns it to the system prompt in SKILL.md.
  • Sanitization: No explicit sanitization or filtering of the retrieved document content is shown before it is sent to the LLM. This could allow maliciously crafted external data to influence the agent's behavior.
  • [External Content Retrieval]: The skill utilizes tools to fetch and parse data from external network locations.
  • Evidence: WebBaseLoader("https://docs.langchain.com") and CheerioWebBaseLoader in SKILL.md.
  • Context: This is a standard feature for document indexing. While the examples target well-known domains, the pattern enables the retrieval of content from any user-provided or dynamically computed URL.
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 10, 2026, 01:44 PM