plan
Plan Skill
Quick Ref: Decompose the goal into trackable issues grouped into dependency waves. Output: `.agents/plans/*.md` + bd issues.
YOU MUST EXECUTE THIS WORKFLOW. Do not just describe it.
CLI dependencies: bd (issue creation). If bd is unavailable, write the plan to .agents/plans/ as markdown with issue descriptions, and use TaskList for tracking instead. The plan document is always created regardless of bd availability.
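A minimal availability check, mirroring the ao check used later in this skill (the echoed messages are illustrative, not prescribed):

```bash
# Detect whether bd is available; fall back to plan doc + TaskList tracking if not
if command -v bd &>/dev/null; then
  echo "bd available: create epic and child issues"
else
  echo "bd unavailable: write plan doc only and track with TaskList"
fi
```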
Flags
| Flag | Default | Description |
|---|---|---|
| --auto | off | Skip human approval gate. Used by /rpi --auto for fully autonomous lifecycle. |
Execution Steps
Given /plan <goal> [--auto]:
Step 1: Setup
mkdir -p .agents/plans
Step 2: Check for Prior Research
Look for existing research on this topic:
ls -la .agents/research/ 2>/dev/null | head -10
Use Grep to search .agents/ for related content. If research exists, read it with the Read tool to understand the context before planning.
Search knowledge flywheel for prior planning patterns:
if command -v ao &>/dev/null; then
ao search "<topic> plan decomposition patterns" 2>/dev/null | head -10
fi
If ao returns relevant learnings or patterns, incorporate them into the plan. Skip silently if ao is unavailable or returns no results.
Step 3: Explore the Codebase (if needed)
USE THE TASK TOOL to dispatch an Explore agent:
Tool: Task
Parameters:
subagent_type: "Explore"
description: "Understand codebase for: <goal>"
prompt: |
Explore the codebase to understand what's needed for: <goal>
1. Find relevant files and modules
2. Understand current architecture
3. Identify what needs to change
Return: key files, current state, suggested approach
Pre-Planning Baseline Audit (Mandatory)
Before decomposing into issues, run a quantitative baseline audit to ground the plan in verified numbers. This is mandatory for ALL plans — not just cleanup/refactor. Any plan that makes quantitative claims (counts, sizes, coverage) must verify them mechanically.
Run grep/wc/ls commands to count the current state of what you're changing:
- Files to change: count with `ls`/`find`/`wc -l`
- Sections to add/remove: count with `grep -l`/`grep -L`
- Code to modify: count LOC, packages, import references
- Coverage gaps: count missing items with `grep -L` or `find`
Record the verification commands alongside their results. These become pre-mortem evidence and acceptance criteria.
| Bad | Good |
|---|---|
| "14 missing refs/" | "14 missing refs/ (verified: ls -d skills/*/references/ | wc -l = 20 of 34)" |
| "clean up dead code" | "Delete 3,003 LOC across 3 packages (verified: find src/old -name '*.go' | xargs wc -l)" |
| "update stale docs" | "Rewrite 4 specs (verified: ls docs/specs/*.md | wc -l = 4)" |
| "add missing sections" | "Add Examples to 27 skills (verified: grep -L '## Examples' skills/*/SKILL.md | wc -l = 27)" |
Ground truth with numbers prevents scope creep and makes completion verifiable. In ol-571, the audit found 5,752 LOC to remove — without it, the plan would have been vague. In ag-dnu, wrong counts (11 vs 14, 0 vs 7) caused a pre-mortem FAIL that a simple grep audit would have prevented.
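For example, a minimal audit pass might look like the following (paths and patterns are illustrative, taken from the Good column above, not prescribed by this skill):

```bash
# LOC slated for deletion (total line of the wc output)
find src/old -name '*.go' -print0 | xargs -0 wc -l | tail -1

# Specs that will be rewritten
ls docs/specs/*.md | wc -l

# Skills missing an Examples section (coverage gap)
grep -L '## Examples' skills/*/SKILL.md | wc -l
```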
Step 4: Decompose into Issues
Analyze the goal and break it into discrete, implementable issues. For each issue define:
- Title: Clear action verb (e.g., "Add authentication middleware")
- Description: What needs to be done
- Dependencies: Which issues must complete first (if any)
- Acceptance criteria: How to verify it's done
Design Briefs for Rewrites
For any issue that says "rewrite", "redesign", or "create from scratch": Include a design brief (3+ sentences) covering:
- Purpose — what does this component do in the new architecture?
- Key artifacts — what files/interfaces define success?
- Workflows — what sequences must work?
Without a design brief, workers invent design decisions. In ol-571, a spec rewrite issue without a design brief produced output that diverged from the intended architecture.
Issue Granularity
- 1-2 independent files → 1 issue
- 3+ independent files with no code deps → split into sub-issues (one per file)
- Example: "Rewrite 4 specs" → 4 sub-issues (4.1, 4.2, 4.3, 4.4)
- Enables N parallel workers instead of 1 serial worker
- Shared files between issues → serialize or assign to same worker
Conformance Checks
For each issue's acceptance criteria, derive at least one mechanically verifiable conformance check using validation-contract.md types. These checks bridge the gap between spec intent and implementation verification.
| Acceptance Criteria | Conformance Check |
|---|---|
| "File X exists" | files_exist: ["X"] |
| "Function Y is implemented" | content_check: {file: "src/foo.go", pattern: "func Y"} |
| "Tests pass" | tests: "go test ./..." |
| "Endpoint returns 200" | command: "curl -s -o /dev/null -w '%{http_code}' localhost:8080/api | grep 200" |
| "Config has setting Z" | content_check: {file: "config.yaml", pattern: "setting_z:"} |
Rules:
- Every issue MUST have at least one conformance check
- Checks MUST use validation-contract.md types: `files_exist`, `content_check`, `command`, `tests`, `lint`
- Prefer `content_check` and `files_exist` (fast, deterministic) over `command` (slower, environment-dependent)
- If acceptance criteria cannot be mechanically verified, flag it as underspecified
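As an illustration of what "mechanically verifiable" means, each check type reduces to a simple shell probe (file names and patterns below are the illustrative values from the table above; the real runner is defined by validation-contract.md):

```bash
# files_exist: file must be present
test -f src/auth.go && echo PASS || echo FAIL

# content_check: pattern must appear in the file
grep -q 'func Authenticate' src/auth.go && echo PASS || echo FAIL

# tests: test command must exit 0
go test ./src/auth/... && echo PASS || echo FAIL
```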
Step 5: Compute Waves
Group issues by dependencies for parallel execution:
- Wave 1: Issues with no dependencies (can run in parallel)
- Wave 2: Issues depending only on Wave 1
- Wave 3: Issues depending on Wave 2
- Continue until all issues assigned
Validate Dependency Necessity
For EACH declared dependency, verify:
- Does the blocked issue modify a file that the blocker also modifies? → Keep
- Does the blocked issue read output produced by the blocker? → Keep
- Is the dependency only logical ordering (e.g., "specs before roles")? → Remove
False dependencies reduce parallelism. Pre-mortem judges will also flag these. In ol-571, unnecessary serialization between independent spec rewrites was caught by pre-mortem.
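One way to sanity-check a declared dependency is to compare the file lists of the blocker and the blocked issue; any overlap justifies keeping it. A sketch, with placeholder file lists you would fill in from the issue descriptions:

```bash
# Files each issue plans to touch (placeholders)
blocker_files="src/auth/middleware.go src/auth/session.go"
blocked_files="src/auth/session.go docs/auth.md"

# Print files both issues modify; empty output suggests the dependency is only logical ordering
comm -12 <(tr ' ' '\n' <<<"$blocker_files" | sort) \
         <(tr ' ' '\n' <<<"$blocked_files" | sort)
```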
Step 6: Write Plan Document
Write to: .agents/plans/YYYY-MM-DD-<goal-slug>.md
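A small sketch for deriving that path from the goal (the slug rule here is an assumption about the naming convention):

```bash
goal="add user authentication"
slug=$(echo "$goal" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-' | sed 's/^-*//; s/-*$//')
plan_path=".agents/plans/$(date +%Y-%m-%d)-${slug}.md"
echo "$plan_path"   # e.g. .agents/plans/2026-02-13-add-user-authentication.md
```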
# Plan: <Goal>
**Date:** YYYY-MM-DD
**Source:** <research doc if any>
## Overview
<1-2 sentence summary of what we're building>
## Boundaries
**Always:** <non-negotiable requirements — security, backward compat, testing, etc.>
**Ask First:** <decisions needing human input before proceeding — in auto mode, logged only>
**Never:** <explicit out-of-scope items preventing scope creep>
## Baseline Audit
| Metric | Command | Result |
|--------|---------|--------|
| <what was measured> | `<grep/wc/ls command used>` | <result> |
## Conformance Checks
| Issue | Check Type | Check |
|-------|-----------|-------|
| Issue 1 | content_check | `{file: "src/auth.go", pattern: "func Authenticate"}` |
| Issue 1 | tests | `go test ./src/auth/...` |
| Issue 2 | files_exist | `["docs/api-v2.md"]` |
## Issues
### Issue 1: <Title>
**Dependencies:** None
**Acceptance:** <how to verify>
**Description:** <what to do>
### Issue 2: <Title>
**Dependencies:** Issue 1
**Acceptance:** <how to verify>
**Description:** <what to do>
## Execution Order
**Wave 1** (parallel): Issue 1, Issue 3
**Wave 2** (after Wave 1): Issue 2, Issue 4
**Wave 3** (after Wave 2): Issue 5
## Next Steps
- Run `/crank` for autonomous execution
- Or `/implement <issue>` for single issue
Step 7: Create Tasks for In-Session Tracking
Use TaskCreate tool for each issue:
Tool: TaskCreate
Parameters:
subject: "<issue title>"
description: |
<Full description including:>
- What to do
- Acceptance criteria
- Dependencies: [list task IDs that must complete first]
activeForm: "<-ing verb form of the task>"
After creating all tasks, set up dependencies:
Tool: TaskUpdate
Parameters:
taskId: "<task-id>"
addBlockedBy: ["<dependency-task-id>"]
IMPORTANT: Create persistent issues for ratchet tracking:
If bd CLI available, create beads issues to enable progress tracking across sessions:
# Create epic first
bd create --title "<goal>" --type epic --label "planned"
# Create child issues (note the IDs returned)
bd create --title "<wave-1-task>" --body "<description>" --parent <epic-id> --label "planned"
# Returns: na-0001
bd create --title "<wave-2-task-depends-on-wave-1>" --body "<description>" --parent <epic-id> --label "planned"
# Returns: na-0002
# Add blocking dependencies to form waves
bd dep add na-0001 na-0002
# Now na-0002 is blocked by na-0001 → Wave 2
Include conformance checks in issue bodies:
When creating beads issues, embed the conformance checks from the plan as a fenced validation block in the issue description. This flows to worker validation metadata via /crank:
bd create --title "<task>" --body "Description...
\`\`\`validation
{\"files_exist\": [\"src/auth.go\"], \"content_check\": {\"file\": \"src/auth.go\", \"pattern\": \"func Authenticate\"}}
\`\`\`
" --parent <epic-id>
Include cross-cutting constraints in epic description:
"Always" boundaries from the plan should be added to the epic's description as a ## Cross-Cutting Constraints section. /crank reads these from the epic (not per-issue) and injects them into every worker task's validation metadata.
Waves are formed by blocking dependencies:
- Issues with NO blockers → Wave 1 (appear in `bd ready` immediately)
- Issues blocked by Wave 1 → Wave 2 (appear when Wave 1 closes)
- Issues blocked by Wave 2 → Wave 3 (appear when Wave 2 closes)
`bd ready` returns the current wave: all unblocked issues that can run in parallel.
Without bd issues, the ratchet validator cannot track gate progress. This is required for /crank autonomous execution and /post-mortem validation.
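To see wave formation in practice, the current wave can be listed at any time (output format depends on your bd version):

```bash
# Everything unblocked right now (Wave 1 before any issue closes)
bd ready

# After Wave 1 issues close, the same command surfaces Wave 2, and so on
```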
Step 8: Request Human Approval (Gate 2)
Skip this step if --auto flag is set. In auto mode, proceed directly to Step 9.
USE AskUserQuestion tool:
Tool: AskUserQuestion
Parameters:
questions:
- question: "Plan complete with N tasks in M waves. Approve to proceed?"
header: "Gate 2"
options:
- label: "Approve"
description: "Proceed to /pre-mortem or /crank"
- label: "Revise"
description: "Modify the plan before proceeding"
- label: "Back to Research"
description: "Need more research before planning"
multiSelect: false
Wait for approval before reporting completion.
Step 9: Record Ratchet Progress
ao ratchet record plan 2>/dev/null || true
Step 10: Report to User
Tell the user:
- Plan document location
- Number of issues identified
- Wave structure for parallel execution
- Tasks created (in-session task IDs)
- Next step: `/pre-mortem` for failure simulation, then `/crank` for execution
Key Rules
- Read research first if it exists
- Explore codebase to understand current state
- Identify dependencies between issues
- Compute waves for parallel execution
- Always write the plan to `.agents/plans/`
Examples
Plan from Research
User says: /plan "add user authentication"
What happens:
- Agent reads recent research from `.agents/research/2026-02-13-authentication-system.md`
- Explores codebase to identify integration points
- Decomposes into 5 issues: middleware, session store, token validation, tests, docs
- Creates epic `ag-5k2` with 5 child issues in 2 waves
- Output written to `.agents/plans/2026-02-13-add-user-authentication.md`
Result: Epic with dependency graph, conformance checks, and wave structure for parallel execution.
Plan with Auto Mode
User says: /plan --auto "refactor payment module"
What happens:
- Agent skips human approval gates
- Searches knowledge base for refactoring patterns
- Creates epic and child issues automatically
- Records ratchet progress
Result: Fully autonomous plan creation with 3 waves, 8 issues, ready for /crank.
Plan Cleanup Epic with Audit
User says: /plan "remove dead code"
What happens:
- Agent runs quantitative audit: 3,003 LOC across 3 packages
- Creates issues grounded in audit numbers (not vague "cleanup")
- Each issue specifies exact files and line count reduction
- Output includes deletion verification checks
Result: Scoped cleanup plan with measurable completion criteria (e.g., "Delete 1,500 LOC from pkg/legacy").
Troubleshooting
| Problem | Cause | Solution |
|---|---|---|
| bd create fails | Beads not initialized in repo | Run bd init --prefix <prefix> first |
| Dependencies not created | Issues created without explicit bd dep add calls | Verify plan output includes dependency commands. Re-run to regenerate |
| Plan too large | Research scope was too broad, resulting in >20 issues | Narrow the goal or split into multiple epics |
| Wave structure incorrect | False dependencies declared (logical ordering, not file conflicts) | Review dependency necessity: does blocked issue modify blocker's files? |
| Conformance checks missing | Acceptance criteria not mechanically verifiable | Add files_exist, content_check, tests, or command checks per validation-contract.md |
| Epic has no children | Plan created but bd commands failed silently | Check bd list --type epic output; re-run plan with bd CLI available |