faiss
Warn
Audited by Gen Agent Trust Hub on Apr 4, 2026
Risk Level: MEDIUMREMOTE_CODE_EXECUTIONCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
- [REMOTE_CODE_EXECUTION]: The LangChain integration example in 'SKILL.md' includes the configuration
allow_dangerous_deserialization=Truewhen callingFAISS.load_local(). This parameter bypasses security checks and allows the underlyingpicklelibrary to execute arbitrary Python code that may be embedded in a malicious index file. - [COMMAND_EXECUTION]: By encouraging the use of unsafe deserialization flags, the skill creates a path for system command execution if the agent is instructed to load a vector index from an untrusted or compromised source.
- [PROMPT_INJECTION]: The skill describes an attack surface for indirect prompt injection by ingesting untrusted vector data into the agent's context without adequate sanitization.
- Ingestion points: 'SKILL.md' and 'references/index_types.md' use functions like
faiss.read_indexandFAISS.load_localto import data from external files. - Boundary markers: There are no instructions or examples defining delimiters to separate ingested data from agent instructions.
- Capability inventory: The skill has the capability to read from the file system and perform similarity searches that influence agent responses.
- Sanitization: No sanitization logic is provided, and security warnings for deserialization are explicitly disabled in the code examples.
Audit Metadata