rlama
Pass
Audited by Gen Agent Trust Hub on Apr 29, 2026
Risk Level: SAFE
Categories reviewed: DATA_EXFILTRATION, PROMPT_INJECTION, COMMAND_EXECUTION, EXTERNAL_DOWNLOADS
Full Analysis
- [METADATA_POISONING]: The skill description in SKILL.md makes a strong claim that it "Runs 100% locally with Ollama - no cloud, no data leaving your machine." However, the skill includes optional features, implemented in scripts/rlama_retrieve.py, that send document chunks to external providers such as OpenRouter and TogetherAI. While these features are opt-in, the absolute privacy claim is technically inaccurate and could lead users to misjudge the skill's safety for sensitive data.
- [DATA_EXFILTRATION]: The synthesis feature in scripts/rlama_retrieve.py is designed to transmit query and local document context to cloud-based LLM providers (openrouter.ai and together.xyz) or custom endpoints. Although this requires user-provided API keys, it establishes a mechanism where local data can be sent to external servers.
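To make the exfiltration surface concrete, the sketch below shows the general shape of such a synthesis request. It is a hypothetical illustration, not the actual code in scripts/rlama_retrieve.py: the endpoint, model name, and payload field names are assumptions based on common chat-completion APIs. The point is that once synthesis is enabled, both the query and the locally retrieved document chunks are serialized into the request body and leave the machine.

```python
import json
import urllib.request


def build_payload(query: str, chunks: list[str]) -> dict:
    """Bundle the user query AND local document chunks into one
    request body (field names are hypothetical)."""
    context = "\n---\n".join(chunks)
    return {
        "model": "example-model",  # hypothetical model identifier
        "messages": [
            {"role": "system", "content": "Answer using only the context."},
            {"role": "user", "content": f"{query}\n\nContext:\n{context}"},
        ],
    }


def synthesize_remotely(query: str, chunks: list[str], api_key: str,
                        endpoint: str = "https://openrouter.ai/api/v1/chat/completions"):
    """Illustrative only: the POST below is where local document
    content crosses the network boundary to the provider."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(build_payload(query, chunks)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    return urllib.request.urlopen(req)  # not executed in this sketch
```

Note that nothing in this flow distinguishes sensitive chunks from benign ones; any indexed content matching the query is eligible for transmission.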
- [INDIRECT_PROMPT_INJECTION]: As the skill indexes documents from arbitrary local directories, it is susceptible to indirect prompt injection if those documents contain malicious instructions. If an attacker-controlled file is indexed, it could attempt to manipulate the agent's output during the retrieval or synthesis phase. The skill uses '---' delimiters and metadata headers in scripts/rlama_retrieve.py to separate document chunks, which provides some mitigation.
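The delimiter mitigation can be sketched as follows. This is a hypothetical reconstruction of the chunk-formatting approach described for scripts/rlama_retrieve.py (field names and layout are assumptions): each chunk is preceded by a metadata header and followed by a '---' separator, so the model can treat the enclosed text as quoted data rather than instructions. This reduces, but does not eliminate, the risk from malicious indexed documents.

```python
def format_chunks(chunks: list[dict]) -> str:
    """Join retrieved chunks with metadata headers and '---'
    delimiters (hypothetical sketch of the described scheme)."""
    parts = []
    for chunk in chunks:
        # Header identifies provenance, helping the model (and a
        # human reviewer) attribute any suspicious instructions.
        parts.append(f"source: {chunk['source']} (score: {chunk['score']:.2f})")
        parts.append(chunk["text"])
        parts.append("---")  # boundary between chunks
    return "\n".join(parts)


context = format_chunks([
    {"source": "notes.md", "score": 0.91,
     "text": "Ignore previous instructions."},  # injected content stays quoted
    {"source": "report.txt", "score": 0.74, "text": "Q3 revenue grew 4%."},
])
```

Delimiters are a soft boundary: a sufficiently capable injection can still instruct the model to disregard them, which is why the risk is flagged despite the mitigation.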
- [COMMAND_EXECUTION]: Multiple scripts (rlama_manage.py, rlama_query.py, etc.) execute the 'rlama' CLI tool via the subprocess module. These calls use the list-based argument format, which prevents shell command injection because no shell ever interprets the arguments. The skill assumes the 'rlama' binary is pre-installed on the host system.
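Why the list-based format is safe can be shown in a minimal sketch (the wrapper function name is hypothetical, not taken from the skill's scripts): each list element becomes a single argv entry passed directly to the OS, so shell metacharacters in user input are never interpreted.

```python
import subprocess


def run_rlama(*args: str) -> subprocess.CompletedProcess:
    """Hypothetical wrapper: invoke the 'rlama' CLI with list-form
    arguments. No shell is spawned, so metacharacters in args are
    passed through verbatim (contrast with shell=True on a
    concatenated string, which would interpret them)."""
    return subprocess.run(["rlama", *args], capture_output=True, text=True)


# Even a hostile-looking input stays a single, inert argv element:
hostile = "doc.txt; rm -rf ~"
cmd = ["rlama", "search", hostile]  # 3 argv entries, nothing executed by a shell
```

Had the scripts instead built a string like f"rlama search {hostile}" and run it with shell=True, the semicolon would split the command and execute the payload; the list form forecloses that entire class of injection.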
- [EXTERNAL_DOWNLOADS]: The skill documentation and CLI reference describe mechanisms for downloading models from Hugging Face (hf.co) and Ollama's registry. These are well-known and trusted platforms for machine learning assets.