# rlm-distill-agent

## Dependencies
This skill requires Python 3.8+ and standard library only. No external packages needed.
To install this skill's dependencies:

```shell
pip-compile ./requirements.in
pip install -r ./requirements.txt
```
See ./requirements.txt for the dependency lockfile (currently empty — standard library only).
## Role
You ARE the distillation engine. Read each uncached file deeply, write an exceptionally good one-sentence summary, and inject it into the ledger via `inject_summary.py`.
## When to Use
- Files are missing from the ledger (as reported by `inventory.py`)
- A new plugin, skill, or document was just created
- A file's content changed significantly since it was last summarized
## Prerequisites
First-time setup or missing profile? Run the `rlm-init` skill first:

```shell
# See: ../SKILL.md
# Creates rlm_profiles.json, manifest, and empty cache
```
## Execution Protocol
### 1. Identify missing files

```shell
python3 ./scripts/inventory.py --profile project
python3 ./scripts/inventory.py --profile tools
```
### 2. For each missing file, read deeply and write a great summary

Read the entire file with `view_file`. Do not skim.

A great RLM summary answers "What does this file do, what problem does it solve, and what are its key components/functions?" in one dense sentence.
### 3. Inject the summary

```shell
python3 ./scripts/inject_summary.py \
  --profile project \
  --file ../SKILL.md \
  --summary "Provides atomic file CRUD operations for markdown notes using POSIX rename and fcntl.flock."
```
The script handles atomic writes safely. Never write to the Markdown files manually.
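The atomic-write guarantee described above is conventionally built from an exclusive `fcntl.flock` plus a write-to-temp-then-rename sequence. A minimal sketch of that pattern, assuming a line-per-entry ledger (this is illustrative, not the actual `inject_summary.py` implementation):

```python
import fcntl
import os
import tempfile

def atomic_append_summary(ledger_path: str, line: str) -> None:
    """Append one summary line to a ledger file atomically.

    Takes an exclusive flock, writes the updated content to a temp file
    in the same directory, then renames it over the original -- POSIX
    rename is atomic, so readers never observe a half-written ledger.
    """
    lock_path = ledger_path + ".lock"
    with open(lock_path, "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)  # serialize concurrent injectors
        existing = ""
        if os.path.exists(ledger_path):
            with open(ledger_path) as f:
                existing = f.read()
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(ledger_path) or ".")
        with os.fdopen(fd, "w") as f:
            f.write(existing + line.rstrip("\n") + "\n")
        os.replace(tmp, ledger_path)  # atomic swap on POSIX
```

Writing to a temp file in the same directory matters: `os.replace` is only atomic when source and destination are on the same filesystem.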
### 4. Batching: if 50+ files are missing

Do not attempt manual distillation for large batches. Choose an engine based on the user's CLI context and cost profile, then delegate to the agent swarm.
**CRITICAL: Determine the user's CLI context first.**

Before defaulting to `--engine copilot`, determine which agent CLI the user is running (Claude Code, GitHub Copilot CLI, or Google Gemini CLI). You can often tell from the terminal process, or simply ask the user which AI CLI they have access to.
| User's CLI Tool | Recommended Engine Flag | Cost Profile | Workers |
|---|---|---|---|
| GitHub Copilot CLI | `--engine copilot` (gpt-5-mini nano tier) | $0 free | `--workers 2` (rate-limit safe) |
| Google Gemini CLI | `--engine gemini` (gemini-3-flash-preview) | $0 free | `--workers 5` (high throughput) |
| Claude Code | `--engine claude` (Haiku / Sonnet) | Low-Medium | `--workers 3` |
Default Protocol: Ask the user: "I noticed we have over 50 files to distill. Do you have access to Copilot CLI or Gemini CLI for zero-cost batch processing, or should I use Claude Code?"
Then run the swarm job based on their answer. For example, if they use Gemini:

```shell
python3 ./scripts/swarm_run.py --engine gemini --workers 5 --files-from rlm_distill_tasks_project.md
```
Provide a job file describing the summarization task, along with the gap file produced by `inventory.py --missing`.
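A job file can be as small as a task header plus one missing path per line. A hypothetical builder sketch (the exact format `swarm_run.py` expects is defined in SKILL.md; this layout is an assumption):

```python
from typing import List

def build_job_file(missing_paths: List[str], out_path: str) -> None:
    """Write a swarm job file: a task description, then one file path per line."""
    header = (
        "# Task: write one dense RLM summary sentence for each file below,\n"
        "# then inject it with scripts/inject_summary.py.\n"
    )
    with open(out_path, "w") as f:
        f.write(header)
        f.writelines(path + "\n" for path in missing_paths)

# Hypothetical usage:
# build_job_file(["src/a.py", "docs/b.md"], "rlm_distill_tasks_project.md")
```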
See SKILL.md for full swarm configuration options.
## Quality Standard for Summaries
| Good | Bad |
|---|---|
| "Atomic file CRUD using POSIX rename + flock, preserving YAML frontmatter via ruamel.yaml." | "This file handles file operations." |
| "3-phase search skill: RLM ledger -> ChromaDB -> grep, escalating from O(1) to exact match." | "Searches for things in the codebase." |
## Rules
- Never write to `*_cache/*.md` files manually -- always use `inject_summary.py`.
- Read the whole file -- skimming produces summaries that miss key details.
- Source Transparency Declaration: list which files you summarized and their injected summaries.