rlm-curator

Pass

Audited by Gen Agent Trust Hub on Mar 9, 2026

Risk Level: SAFE
Findings: PROMPT_INJECTION, DATA_EXFILTRATION
Full Analysis
  • [SAFE]: The skill implements operational safety instructions known as the "Electric Fence" in SKILL.md. These negative constraints prevent manual cache manipulation to avoid data corruption, representing a benign use of prompt-based behavioral control.
  • [DATA_EXFILTRATION]: The distiller.py script transmits repository file content to an Ollama API endpoint for summarization. While the default endpoint is localhost:11434, the destination is configurable via the OLLAMA_HOST environment variable. This is a functional requirement for local LLM integration and does not represent an exfiltration threat in standard configurations.
  • [PROMPT_INJECTION]: The skill exhibits an indirect prompt injection surface (Category 8) because it ingests untrusted repository data to generate summaries that are then stored in a shared cache for future retrieval.
    • Ingestion points: Repository files are read by distiller.py and inject_summary.py for processing.
    • Boundary markers: The prompt templates in resources/prompts/rlm/ use Markdown headers to separate file content from distillation instructions.
    • Capability inventory: The skill performs file writes to the .agent/learning/ directory and network POST requests to the Ollama service.
    • Sanitization: distiller.py includes logic to truncate large files and clean common LLM output artifacts.
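For context on the DATA_EXFILTRATION finding above, a minimal sketch of how a script like distiller.py typically resolves its Ollama endpoint. This is an illustrative reconstruction, not the audited code: the function names (`ollama_url`, `summarize`) and the default path are assumptions; the `OLLAMA_HOST` variable, the `localhost:11434` default, and the `/api/generate` request shape are standard Ollama conventions.

```python
# Illustrative sketch (not the audited distiller.py): how an OLLAMA_HOST
# override makes the summarization endpoint configurable.
import json
import os
import urllib.request

DEFAULT_HOST = "localhost:11434"  # Ollama's standard local port

def ollama_url(path: str = "/api/generate") -> str:
    """Build the Ollama endpoint URL, honoring the OLLAMA_HOST override."""
    host = os.environ.get("OLLAMA_HOST", DEFAULT_HOST)
    if not host.startswith(("http://", "https://")):
        host = "http://" + host
    return host.rstrip("/") + path

def summarize(model: str, prompt: str) -> str:
    """POST a non-streaming generate request and return the response text."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()
    req = urllib.request.Request(
        ollama_url(),
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because the host is read from the environment at call time, any process able to set `OLLAMA_HOST` redirects the full file content to an arbitrary endpoint, which is exactly why the audit flags the destination as configurable while rating the default configuration safe.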
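The sanitization step noted in the PROMPT_INJECTION finding could look roughly like the following. Again a hedged sketch: the helper names (`truncate`, `clean_summary`), the character budget, and the specific artifact patterns are assumptions for illustration, not taken from the audited source.

```python
# Illustrative sketch (not the audited distiller.py): truncating large
# inputs and stripping common LLM output artifacts before caching.
import re

MAX_CHARS = 8000  # assumed truncation budget, not from the audited code

def truncate(text: str, limit: int = MAX_CHARS) -> str:
    """Cap file content, marking the cut so the model knows it is partial."""
    if len(text) <= limit:
        return text
    return text[:limit] + "\n[truncated]"

def clean_summary(raw: str) -> str:
    """Strip common LLM artifacts: wrapping code fences and chatty preambles."""
    out = raw.strip()
    # Remove a Markdown fence wrapping the whole response.
    out = re.sub(r"^```[a-zA-Z]*\n|\n```$", "", out)
    # Drop a conversational lead-in line such as "Here is a summary:".
    out = re.sub(r"^(Here is|Sure,)[^\n]*\n+", "", out, flags=re.IGNORECASE)
    return out.strip()
```

Cleaning of this kind reduces cache pollution but is not a security boundary: a summary derived from adversarial file content can still carry injected instructions into the shared cache, which is why the surface is flagged despite the sanitization.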
Audit Metadata
  • Risk Level: SAFE
  • Analyzed: Mar 9, 2026, 06:56 PM