# rlm-curator

## Dependencies

This skill requires Python 3.8+ and the standard library only; no external packages are needed.

To install this skill's dependencies:

```shell
pip-compile ./requirements.in
pip install -r ./requirements.txt
```

See ./requirements.txt for the dependency lockfile (currently empty: standard library only).
## Identity: The Knowledge Curator 🧠

You are the Knowledge Curator. Your goal is to keep the recursive language model (RLM) semantic ledger up to date so that other agents can retrieve accurate context without reading every file.
## Tools (Plugin Scripts)

| Script | Role | Ollama? |
|---|---|---|
| `distiller.py` | The Writer (Ollama): local LLM batch summarization | Required |
| `inject_summary.py` | The Writer (Agent/Swarm): direct agent-generated injection, no Ollama | None |
| `inventory.py` | The Auditor: coverage reporting | None |
| `cleanup_cache.py` | The Janitor: stale entry removal | None |
| `rlm_config.py` | Shared Config: manifest & profile management | None |

Searching the cache? Use the `rlm-search` skill and its `query_cache.py` script.
## Architectural Constraints (The "Electric Fence")
The RLM Cache is a highly concurrent JSON file read/written by multiple agents simultaneously.
### ❌ WRONG: Manual Cache Manipulation (Negative Instruction Constraint)

NEVER manually edit `.agent/learning/rlm_summary_cache.json` or `.agent/learning/rlm_tool_cache.json` using raw bash commands, `sed`, `awk`, or native LLM tool-block writes. Doing so bypasses the Python `fcntl.flock` concurrency lock; if multiple agents write without it, the JSON file will be silently corrupted.
### ✅ CORRECT: Curatorial Scripts

ALWAYS use `inject_summary.py` or `distiller.py` to write to the cache. These scripts acquire the `fcntl.flock` lock internally, guaranteeing data integrity.
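The lock-protected write these scripts perform can be sketched in a few lines. This is a minimal illustration only: the real scripts' cache paths, merge logic, and error handling differ, and `locked_cache_update` is a hypothetical name.

```python
import fcntl
import json

def locked_cache_update(cache_path, key, summary):
    """Merge one entry into a shared JSON cache under an exclusive flock."""
    # "a+" creates the file if absent without truncating existing content.
    with open(cache_path, "a+") as fh:
        fcntl.flock(fh, fcntl.LOCK_EX)   # block until we own the file
        try:
            fh.seek(0)
            raw = fh.read()
            cache = json.loads(raw) if raw.strip() else {}
            cache[key] = summary
            fh.seek(0)
            fh.truncate()                # rewrite the whole file atomically-enough
            json.dump(cache, fh, indent=2)
        finally:
            fcntl.flock(fh, fcntl.LOCK_UN)  # release even on error
```

Because `fcntl.flock` is advisory, the guarantee only holds if every writer goes through the same locked path, which is exactly why raw `sed`/`awk` edits are forbidden.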
## Delegated Constraint Verification (L5 Pattern)

When executing distiller.py:

- If the script throws an error mentioning `Connection refused` (usually pointing to port `11434`), the Ollama AI server is down. Do not retry indefinitely or modify the Python. You MUST IMMEDIATELY refer to `./fallback-tree.md`.
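That fallback trigger can also be checked proactively instead of waiting for the exception. A small sketch, assuming only that Ollama listens on its default port 11434 (the function name is hypothetical):

```python
import socket

def ollama_is_up(host="127.0.0.1", port=11434, timeout=1.0):
    """Return True if something accepts TCP connections on the Ollama port."""
    try:
        # A successful TCP connect is enough to rule out "Connection refused".
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # ConnectionRefusedError, timeouts, unreachable host
        return False
```

Running a check like this before `distiller.py` lets an agent branch straight to `./fallback-tree.md` instead of parsing a traceback.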
## 📂 Execution Protocol

### 1. Assessment (Always First)

```shell
python3 .agents/skills/rlm-curator/scripts/inventory.py --type legacy
```

Check: Is coverage < 100%? Are there missing files?
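The coverage question reduces to set arithmetic over tracked files versus cached summary keys. A hypothetical sketch of that calculation (illustrative only, not `inventory.py`'s actual interface):

```python
def coverage_report(tracked_files, cached_keys):
    """Return (coverage percentage, files missing a summary)."""
    tracked = set(tracked_files)
    missing = sorted(tracked - set(cached_keys))
    # Empty projects count as fully covered rather than dividing by zero.
    pct = 100.0 if not tracked else 100.0 * (len(tracked) - len(missing)) / len(tracked)
    return pct, missing
```

The `missing` list is what feeds the distillation step: it is the gap list the swarm or Ollama batch should summarize.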
### 2. Retrieval (Read: Fast)

Use the `rlm-search` skill for all cache queries:

```shell
python3 .agents/skills/rlm-search/scripts/query_cache.py --profile plugins "search_term"
python3 .agents/skills/rlm-search/scripts/query_cache.py --profile tools --list
```
### 3. Distillation (Write)

**Option A: Zero-Cost Swarm (preferred for bulk, > 10 files)**

Use the Copilot swarm (free, gpt-5-mini) or Gemini swarm (free). Delegate to the agent-loops:agent-swarm skill, providing:

- Engine: `copilot` (free default) or `gemini` (higher throughput)
- Job: a job file describing the summarization task
- Files: the gap list from `inventory.py --missing`
- Workers: `2` for copilot (rate-limit safe), `5` for gemini
**Option B: Ollama Batch (requires Ollama running locally)**

```shell
python3 .agents/skills/rlm-curator/scripts/distiller.py
```

**Option C: Manual Agent Injection (< 5 files)**

```shell
python3 .agents/skills/rlm-curator/scripts/inject_summary.py \
  --profile project \
  --file path/to/file.md \
  --summary "Your dense summary here..."
```
### 4. Cleanup (Curate)

```shell
python3 .agents/skills/rlm-curator/scripts/cleanup_cache.py --type legacy --apply
```
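Conceptually, the janitor pass drops cache entries whose source files no longer exist. A simplified dry-run/apply sketch (the real `cleanup_cache.py` also goes through the locked write path; the function name and apply flag here are illustrative):

```python
import json
import os

def prune_stale_entries(cache_path, apply=False):
    """List (and, with apply=True, delete) entries whose source file is gone."""
    with open(cache_path) as fh:
        cache = json.load(fh)
    stale = [path for path in cache if not os.path.exists(path)]
    if apply:
        for path in stale:
            del cache[path]
        with open(cache_path, "w") as fh:
            json.dump(cache, fh, indent=2)
    return stale
```

Defaulting to a dry run mirrors the CLI convention above, where nothing is removed until `--apply` is passed.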
## Quality Guidelines

Every injected summary should answer "Why does this file exist?"

- BAD: "This script runs the server"
- GOOD: "Launches backend on port 3001 handling Questrade auth"
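A rough mechanical lint for this rule is possible: flag summaries that are too short or open with a generic "This script/file..." pattern. This heuristic is purely illustrative and is not part of the skill's scripts.

```python
import re

GENERIC_OPENER = re.compile(r"^this (script|file|module)\b", re.IGNORECASE)

def summary_looks_weak(summary, min_chars=30):
    """Heuristically flag summaries unlikely to answer 'why does this file exist?'."""
    text = summary.strip()
    return len(text) < min_chars or bool(GENERIC_OPENER.match(text))
```

A check like this could gate `inject_summary.py` calls, bouncing BAD-style summaries back to the agent before they reach the cache.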