embeddings
Fail
Audited by Gen Agent Trust Hub on Feb 16, 2026
Risk Level: HIGH
REMOTE_CODE_EXECUTION · COMMAND_EXECUTION · PROMPT_INJECTION
Full Analysis
- REMOTE_CODE_EXECUTION (HIGH): The implementation of 'Late Chunking' in `references/advanced-patterns.md` utilizes the `transformers` library with `trust_remote_code=True`. Evidence: `self.model = AutoModel.from_pretrained(model_name, trust_remote_code=True)` in `references/advanced-patterns.md`. Risk: This flag enables the execution of arbitrary Python code contained within the model repository (e.g., from Hugging Face). If a model is compromised or malicious, it can result in full system compromise.
- COMMAND_EXECUTION (LOW): The skill interacts with local services via network requests. Evidence: `OllamaEmbedder` in `references/advanced-patterns.md` uses `httpx` to send requests to `http://localhost:11434`. Context: This is standard for local LLM orchestration but increases the attack surface if the local service is vulnerable.
- PROMPT_INJECTION (MEDIUM): The skill provides an ingestion surface for untrusted external data (Category 8).
  1. Ingestion points: `process_document` in `scripts/embedding-pipeline.py` and `semantic_chunk` in `references/chunking-strategies.md` take raw strings as input.
  2. Boundary markers: None present in the code snippets. The input text is split and processed directly.
  3. Capability inventory: Performs network requests to OpenAI and local Ollama APIs; manages a local cache in `scripts/embedding-pipeline.py`.
  4. Sanitization: No evidence of sanitization or filtering of input text before embedding.
  Risk: While the skill itself does not 'obey' commands, it generates embeddings for potentially malicious instructions which are then used in downstream RAG pipelines, potentially poisoning the agent context.
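For the REMOTE_CODE_EXECUTION finding, one mitigation is to wrap model loading so `trust_remote_code` can never be re-enabled and the revision must be pinned. The sketch below is illustrative, not part of the audited skill: `safe_from_pretrained` is a hypothetical helper, and in practice `loader` would be `transformers.AutoModel.from_pretrained`.

```python
def safe_from_pretrained(loader, model_name, revision=None, **kwargs):
    """Wrap a from_pretrained-style loader so repository-supplied code
    never runs and the model version is pinned. `loader` would typically
    be transformers.AutoModel.from_pretrained (hypothetical usage)."""
    # Refuse any attempt to opt back in to remote code execution.
    if kwargs.pop("trust_remote_code", False):
        raise ValueError("trust_remote_code=True is not allowed")
    # Require a pinned revision so a later push to the model repository
    # cannot silently change what gets downloaded and executed.
    if revision is None:
        raise ValueError("pin the model to an exact commit via revision=")
    return loader(model_name, revision=revision,
                  trust_remote_code=False, **kwargs)
```

Pinning `revision` to a commit hash also addresses the supply-chain angle: even a compromised repository cannot alter what an already-deployed pipeline downloads.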
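For the COMMAND_EXECUTION finding, the attack surface can be narrowed by asserting that the embedder's base URL is loopback-only before any `httpx` request is made, so a misconfiguration cannot redirect traffic to an external host. This guard is a sketch under that assumption, not code from the audited skill.

```python
import ipaddress
from urllib.parse import urlparse

def is_loopback_url(url: str) -> bool:
    """Return True only if the URL's host is localhost or a loopback IP,
    so an Ollama-style embedder cannot be pointed at an external endpoint."""
    host = urlparse(url).hostname or ""
    if host == "localhost":
        return True
    try:
        return ipaddress.ip_address(host).is_loopback
    except ValueError:
        return False  # not an IP literal and not "localhost"
```

A constructor could call this once (e.g. `assert is_loopback_url(base_url)`) and fail fast instead of sending requests to an unexpected host.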
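For the PROMPT_INJECTION finding, the missing boundary markers noted in point 2 could be added when retrieved text re-enters an agent prompt. The marker format below is illustrative only; the audited skill contains no such helper.

```python
def wrap_untrusted(text: str, source: str = "external") -> str:
    """Wrap retrieved document text in boundary markers so downstream
    prompts can treat it as data, not instructions. The marker format
    here is illustrative, not a standard."""
    # Strip marker-like sequences from the document itself so it cannot
    # forge an early "end of untrusted data" boundary.
    cleaned = text.replace("<<<", "").replace(">>>", "")
    return (f"<<<untrusted source={source}>>>\n"
            f"{cleaned}\n"
            f"<<<end untrusted>>>")
```

Markers do not make injected instructions harmless on their own, but they give the downstream agent an explicit signal about which spans are data rather than directives.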
Recommendations
- AI analysis detected serious security threats; review the findings above, in particular the `trust_remote_code=True` usage, before installing this skill.
Audit Metadata