# Distill
## Purpose
Extract reusable knowledge from closed tasks and archive their context directories. Operates on closed beads, leveraging session linkage to pull full work history.
## When to Use
- User says "distill", "archive", "review closed tasks"
- Periodic knowledge extraction from completed work
- Before a project retrospective
## Process

### Step 1: Find Candidates
```shell
bd query "status=closed AND NOT label=distilled" -a
```
Show the list to the user. If specific bead IDs were provided, use those instead.
### Step 2: Select Scope
Ask the user which beads to distill, or confirm "all" if the list is small (<=5).
### Step 3: Per Bead — Gather Context

For each selected bead:
- `bd show <id> --json` — get description, close_reason, sessions, metadata
- Check for a context dir at `projects/<project>/contexts/<bead-id>/` — read notes.md and any artifacts
- Identify the project from the bead's labels
### Step 4: Per Bead — Extract Knowledge
Review gathered context and identify:
- Reusable patterns — workflows, techniques, approaches that worked
- Decisions made — architectural choices, tradeoffs, rationale
- Preset candidates — conventions or practices that should be codified in CLAUDE.md Presets
- Time-sink patterns — what took the most time, could it be automated?
- Workflow gaps — places where configured workflows were not followed (see Step 6)
- Interruption signals — see Step 4a
- User suggestions — see Step 4b
#### Step 4a: Interruption Analysis
Manual interruptions (user hit Ctrl+C, said "stop", "wait", "no", redirected mid-task) are high-signal events. They mean the agent was doing something the user didn't want. For each interruption found:
- What was the agent doing? — the action that got interrupted
- Why did the user interrupt? — wrong direction, too slow, unnecessary work, wrong approach, scope creep
- What happened after? — did the user redirect to a different approach? Give up on the task?
- Root cause — classify as one of:
- Missing knowledge — agent didn't know something it should have (→ add to shared context or presets)
- Wrong default — agent chose a reasonable-sounding but incorrect approach (→ add preset or skill guidance)
- Scope creep — agent expanded beyond what was asked (→ note as over-engineering pattern)
- Tool misuse — agent used the wrong tool or approach (→ update skill or add to presets)
- Slow/verbose — agent was too thorough where speed was needed (→ note preference)
- User changed mind — not an agent issue, just context
#### Step 4b: User Suggestion Analysis
When the user provides mid-session suggestions ("try X instead", "use Y", "you should Z", "what about..."), these are course corrections that reveal knowledge gaps. For each suggestion found:
- What did the user suggest? — the specific guidance
- Was this a correction or an enhancement? — fixing wrong direction vs. adding info the agent couldn't know
- Is this generalizable? — would this help in future similar tasks?
- Where should it live? — classify as:
- Preset — a convention to always follow (→ CLAUDE.md Presets)
- Shared knowledge — domain info for the project (→ `shared/<topic>.md`)
- Skill update — procedural improvement (→ update relevant skill)
- One-off — specific to this task, not generalizable
#### Step 4c: Session Efficiency Analysis
Examine how work was done, not just what was learned. For each bead's session:
- Tool usage patterns:
- Over-reliance on Bash instead of dedicated tools (e.g., `cat`/`grep`/`find` in Bash where Read/Grep/Glob would do)
- Missed skill invocations — manual steps that a skill already automates
- Repeated tool failures — same tool called multiple times with wrong args or approach
- Friction indicators:
- Long back-and-forth clarifications before work started
- Repeated retries of the same action (e.g., retry loop on a failing command)
- Context switches mid-task (started one approach, abandoned, tried another)
- Quantitative signals (approximate counts):
- Interruptions per bead (from Step 4a)
- Tool failure rate (failed tool calls / total tool calls)
- Clarification rounds before productive work began
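The rough counts above can be scripted. A minimal sketch, assuming a hypothetical line-oriented transcript format (`tool_call ... status=...`, `interrupt ...`); the real session log format will differ, so adapt the patterns:

```shell
# Build a tiny sample transcript in the hypothetical format.
log=$(mktemp)
cat > "$log" <<'EOF'
tool_call name=Bash status=ok
tool_call name=Bash status=error
tool_call name=Read status=ok
interrupt reason=user
tool_call name=Bash status=error
EOF

# Count total tool calls, failures, and manual interruptions.
total=$(grep -c '^tool_call' "$log")
failed=$(grep -c 'status=error' "$log")
interrupts=$(grep -c '^interrupt' "$log")
echo "tool calls: $total, failures: $failed, interruptions: $interrupts"

# Failure rate = failed tool calls / total tool calls.
awk -v f="$failed" -v t="$total" 'BEGIN { printf "failure rate: %.2f\n", f/t }'
rm -f "$log"
```

These are approximations; the goal is relative comparison across beads, not precise metrics.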
### Step 5: Per Bead — Write to Shared

Distill findings to `projects/<project>/shared/<topic>.md`:
- If the file exists, append a new section referencing the bead
- If new, create with a clear heading and bead reference
- Format: concise bullets, not prose
- Include bead ID as provenance: `(from <bead-id>)`
- If nothing worth distilling, skip this step and note why
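As an illustration, an appended section in a shared file might look like this (the topic, bead ID, and bullet content are all hypothetical):

```markdown
## Pagination (from bd-142)
- Upstream API caps page size at 100; request the max and follow the cursor
- Retries on 429 should honor the server's Retry-After header
```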
### Step 6: Workflow Improvements
Review the work session for improvements to the workspace system itself. Two categories:
#### 6a: Compliance Gaps — configured workflows not followed
Compare what happened in the session against what's configured in:
- CLAUDE.md (root + project) — presets, conventions, workflow steps
- Shared context (`projects/<project>/shared/`) — known patterns, decisions
- Skills — established procedures that were skipped or done ad-hoc
For each gap found, determine:
- Was the workflow unclear, outdated, or wrong? → Propose an update to the source
- Was it simply missed? → Note it as a reminder (no system change needed)
#### 6b: New Knowledge Opportunities — patterns to codify
Look for repetitive work that could be accelerated:
- Recurring patterns — same sequence of steps done multiple times → candidate for a new skill or shared knowledge
- Manual lookups — information repeatedly searched for → add to shared context or CLAUDE.md
- Boilerplate — repeated code/config/commands → candidate for a template or script
- Implicit conventions — decisions made consistently but not written down → candidate for presets
- Tool misuse patterns — same wrong tool choice across beads (e.g., Bash grep instead of Grep tool, cat instead of Read) → candidate for preset or feedback memory
- Missed skill invocations — manual multi-step sequences that an existing skill already automates → reminder or skill discoverability improvement
#### 6c: Interruption & Suggestion Patterns
Roll up findings from Step 4a and 4b across all beads being distilled. Look for:
- Repeated interruption root causes — same type of mistake across beads → systemic issue
- Cluster of similar suggestions — user keeps teaching the same thing → must be codified
- Project-specific vs. global — does the fix belong in project CLAUDE.md or root CLAUDE.md?
- Friction hotspots — recurring slow/painful sequences across beads (e.g., same setup steps repeatedly failing)
- Tool failure clusters — same tool failing the same way repeatedly → systemic config or knowledge issue
## Output
Present findings to the user as a table:
| # | Category     | Finding                                | Proposed Action                                   |
|---|--------------|----------------------------------------|---------------------------------------------------|
| 1 | gap          | Didn't use worktree for code changes   | Reminder (workflow exists)                        |
| 2 | gap          | Preset X is outdated                   | Update CLAUDE.md                                  |
| 3 | pattern      | Repeated Meegle field updates          | New skill or script                               |
| 4 | pattern      | Always look up same roster info        | Add to shared context                             |
| 5 | interruption | Agent over-researched before acting    | Add preset: "bias toward action for simple tasks" |
| 6 | suggestion   | User taught API pagination pattern 3x  | Add to shared/api-patterns.md                     |
Ask the user which actions to apply. Execute confirmed actions (edit CLAUDE.md, create shared docs, file new beads for larger improvements).
### Step 7: Preset Candidates
If any preset candidates were identified in Step 4 or Step 6:
- Show them to the user
- Ask if they should be added to CLAUDE.md Presets section
- Apply if confirmed
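For reference, a confirmed preset might be recorded in the CLAUDE.md Presets section like this (the wording and provenance are hypothetical):

```markdown
## Presets
- Bias toward action on simple tasks: skim relevant files rather than deep-researching before a one-file change (from bd-207)
```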
### Step 8: Mark Distilled

```shell
bd label add <id> distilled
```
### Step 9: Archive Context

If a context directory exists at `projects/<project>/contexts/<bead-id>/`:

```shell
git mv projects/<project>/contexts/<bead-id> projects/<project>/contexts/archived/<bead-id>
```
If no context directory exists, skip this step.
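Note that `git mv` fails if the `archived/` parent doesn't exist yet. A guarded sketch that also handles the no-context-dir case (the project and bead values below are hypothetical placeholders):

```shell
# Hypothetical placeholder values; substitute the real project and bead ID.
project=myproj
bead=bd-123
src="projects/$project/contexts/$bead"
dst="projects/$project/contexts/archived/$bead"

if [ -d "$src" ]; then
  # Ensure the archive parent exists before git mv.
  mkdir -p "$(dirname "$dst")"
  git mv "$src" "$dst"
else
  echo "no context dir for $bead, skipping archive"
fi
```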
### Step 10: Commit

```shell
git add projects/<project>/shared/
git add projects/<project>/contexts/
bd sync
git commit -m "chore: distill and archive <bead-id(s)>"
```
### Step 11: Summary

```
Distilled:
- <bead-id>: <title> → shared/<topic>.md
- <bead-id>: <title> → (nothing to distill, archived only)

Interruptions analyzed: <count> found, root causes: <list top causes>
User suggestions extracted: <count> found, <count> generalizable
Workflow improvements: <count> gaps found, <count> patterns identified, <count> actions applied
Session efficiency: <count> friction points, <count> tool misuse patterns, <count> missed skill invocations
Preset candidates: <count> proposed, <count> applied
Archived: <count> context dirs moved
```
## Tips
- Batch multiple beads in one run for efficiency
- Not every bead produces distillable knowledge — quick fixes and chores often don't
- Focus on "what would help the next person (or future session) doing similar work?"
- The `distilled` label prevents re-processing — don't skip it even if nothing was distilled
- Use `bd query "status=closed AND label=distilled" -a` to review past distillations