plan-feature
Plan a new task
⚠️ CRITICAL INSTRUCTIONS
DO NOT EXECUTE THE PLAN AFTER CREATING IT.
After the plan is created:
- Show the user where the plan was saved
- Tell them to run `/execute .agents/plans/[feature-name].md`
- STOP
Execution agent rules (include verbatim in every generated plan):
- Make ALL code changes required by the plan
- Delete debug logs added during execution (keep pre-existing ones)
- Leave ALL changes UNSTAGED: do NOT run `git add` or `git commit`
🚨 WRITE PLAN TO FILE ONLY, NEVER TO CLI 🚨
- Write to `.agents/plans/[feature-name].md` using the Write/Edit tool
- Do NOT output plan content in your response
- CLI output: summary only (2–3 sentences), questions, confirmation, final report
Feature: $ARGUMENTS
Step 0: Log Planning Start
Before writing anything, check if PROGRESS.md already has a relevant section for this feature, especially if this skill was triggered by referencing something in PROGRESS.md (e.g. the user pointed at a section, or the feature description matches an existing entry).
If a relevant section already exists: update its top fields only (status, add Plan File line). Do NOT create a duplicate entry.
**Status**: ✅ Planned
**Plan File**: .agents/plans/[feature-name].md
If no relevant section exists: add a new entry using the template below.
## Feature: [Feature Name]
### Planning Phase
**Status**: In Progress
**Started**: [date]
**Plan File**: .agents/plans/[feature-name].md
Use Write if PROGRESS.md doesn't exist, Edit if it does.
Planning Process
Phase 1: Feature Understanding
- Extract core problem, user value, feature type (New Capability/Enhancement/Refactor/Bug Fix), complexity
- Map affected systems
- Draft user story: As a / I want / So that
Phase 2: Codebase Intelligence
Check for similar features first. If found, use AskUserQuestion before proceeding.
Check existing patterns for: integrations, endpoints, auth, schema, tools, UI components.
Gather in parallel:
- Project structure, frameworks, service boundaries, config files
- Naming conventions, error handling, logging patterns, CLAUDE.md
- External libraries, docs/, ai_docs/, .agents/reference
- Test framework, organization, coverage requirements
- Integration points: routers, models, auth patterns
Use AskUserQuestion if ANY of these exist:
- Unclear, ambiguous, or contradictory requirements
- Multiple valid approaches without clear preference
- Conflicts between request and existing patterns
- Uncertainty about data models, schemas, or interfaces
Default to existing patterns. Document any divergences. Design for parallel execution.
Phase 3: External Research
APIs (if applicable): Run /explore-api [name] for each.
Verify: features available, version compatible, rate limits sufficient, ToS permits use, auth accessible, no blockers.
Dependencies: Test in isolated venv first. Check conflict tree with pipdeptree/npm list. Document compatible versions and known conflicts.
Technology: Latest versions, official docs (with section anchors), common gotchas, breaking changes.
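The isolated-venv dependency check above can be sketched as follows. This is a minimal illustration, not a prescribed implementation: the probe directory name is arbitrary, and real use would pass `with_pip=True` and then run `pip install <candidate>` plus `pipdeptree` inside the environment before recording compatible versions.

```python
# Sketch of "test in an isolated venv first", using only the stdlib.
import os
import sys
import tempfile
import venv

def make_probe_env(root: str) -> str:
    """Create a throwaway virtual environment; return its interpreter path."""
    env_dir = os.path.join(root, "dep-probe")
    # with_pip=False keeps this sketch fast and offline; use with_pip=True
    # when actually installing candidate packages to check for conflicts.
    venv.create(env_dir, with_pip=False)
    bindir = "Scripts" if sys.platform == "win32" else "bin"
    exe = "python.exe" if sys.platform == "win32" else "python"
    return os.path.join(env_dir, bindir, exe)

with tempfile.TemporaryDirectory() as tmp:
    probe_python = make_probe_env(tmp)
    probe_exists = os.path.exists(probe_python)  # isolated interpreter created
print(probe_exists)
```

Because the environment lives in a temporary directory, a failed or conflicting install never touches the project environment; the probe is deleted when the `with` block exits.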
Phase 4: Strategic Design
Think through:
- How does this fit existing architecture?
- Critical dependencies and order of operations?
- What could go wrong? (edge cases, race conditions, errors)
- Performance, security, maintainability implications?
Phase 5: Write the Plan
Read ~/.claude/skills/plan-feature/PLAN_TEMPLATE.md and use it as the structure for the output plan file, filling every section with feature-specific content.
Output: .agents/plans/{kebab-case-name}.md
Length: 500–700 lines; verify with wc -l and adjust
Phase 6: Coverage Review Pass
Run automatically after plan is written. No user input needed unless a gap can't be resolved.
1. Map new code paths: for every file created/modified, list functions, branches, async flows, error paths, API surface, data mutations
2. Map project impact: existing files that import/call changed code; existing tests that may need updating after the change
3. Gap analysis: for each path, verify a test in the plan covers it and mark ✅ Covered or ⚠️ Gap
4. Fill gaps: add an automated test by default; if genuinely impossible to automate, add a manual test with a one-sentence justification ("requires physical hardware", "CAPTCHA blocks automation"). "Hard to automate" and "requires a browser" are NOT valid reasons: use Playwright.
5. Re-verify: repeat until all paths are ✅ or documented manual-only, then update the Test Automation Summary in the plan.
6. Script deliverables check: if the plan introduces or modifies a runnable script (demo runner, CLI, orchestrator), verify the plan includes ALL of the following criteria (distinct from scenario-logic criteria):
   - "Running `<script>` completes the setup phase without raising an exception." (runnability: unit tests do not substitute for this)
   - "All user-visible output uses ASCII-safe characters, or the script explicitly reconfigures stdout encoding at startup." (cross-platform compatibility)
   - If the script spawns `claude` as a subprocess: "The subprocess environment strips `CLAUDECODE` (and any other launcher sentinels) before invoking `claude`." (env isolation)
   These criteria test that the script runs at all, before any criteria about what it reports. A script can be entirely broken (encoding errors, misconfigured subprocesses) while every unit test passes; runnability and reporting are separate surfaces requiring separate validation.
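The last two criteria can be sketched together. This is an illustrative sketch, not the plan's required implementation: `CLAUDECODE` is the sentinel named above, the extra set machinery around it is hypothetical, and a `python -c` child stands in for a real `claude` invocation so the example is self-contained.

```python
# Sketch: cross-platform stdout + launcher-sentinel stripping for a
# script that spawns `claude` as a subprocess.
import os
import subprocess
import sys

# 1. Cross-platform output: reconfigure stdout so emoji/box-drawing
#    characters do not crash on a cp1252 Windows console.
if hasattr(sys.stdout, "reconfigure"):
    sys.stdout.reconfigure(encoding="utf-8", errors="replace")

# 2. Env isolation: copy the environment minus launcher sentinels, so the
#    child process does not believe it is running inside Claude Code.
SENTINELS = {"CLAUDECODE"}  # add other launcher sentinels here if needed
child_env = {k: v for k, v in os.environ.items() if k not in SENTINELS}

# Stand-in for the real `claude` call: the child reports whether it can
# still see the sentinel.
result = subprocess.run(
    [sys.executable, "-c", "import os; print('CLAUDECODE' in os.environ)"],
    env=child_env,
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # the child no longer sees CLAUDECODE
```

Note that `env=child_env` replaces the child's environment wholesale, which is why the dict is built from a copy of `os.environ` rather than starting empty: `PATH` and friends must survive for the subprocess to launch at all.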
Post-Planning Verification
test -f .agents/plans/[feature-name].md && echo "✅ Plan exists" || echo "❌ Missing"
wc -l .agents/plans/[feature-name].md
- Plan written to file (not CLI)
- File exists and is 500–700 lines
- Coverage Review Pass complete (Steps 1–5)
- All ambiguities resolved via AskUserQuestion
- Tasks in parallel execution waves with WAVE, DEPENDS_ON, AGENT_ROLE
- Interface contracts defined; 30%+ tasks parallelizable (or explain why not)
- Every test marked ✅/⚠️ with tool, file path, and run command
- Manual tests have automation-impossibility justification
- Test Automation Summary updated after gap-filling
Final Report
Output to CLI after saving the plan. Do NOT include plan content in this message.
✅ Plan Created
───────────────────────────────────────────────────────
📋 Feature: [name]
📁 Plan: .agents/plans/[feature-name].md
📏 Lines: [n] (target: 500–700)
⚡ Complexity: [Low/Medium/High]
───────────────────────────────────────────────────────
📝 [2–3 sentence summary of feature and approach]
⚡ Parallel Execution:
- Waves: [#] | Concurrent tasks: [#] | Max speedup: [x]x | Sequential: [#]
🧪 Coverage Summary:
- New code paths: [#]/[#] covered ([XX]%)
- Existing code re-validated: [#] areas, [#] tests added/updated
- Automated: [#] ([XX]%): [tools used]
- Manual: [#] ([XX]%): [one-line reason each, or "None"]
- Gaps remaining: [# or "None"]: [reason for each, if any]
⚠️ Risks: [2–4 with mitigations]
📚 Patterns used: [similar features referenced, or "None: new pattern"]
📊 Tasks: [#] total ([x] parallel, [y] sequential)
🎯 Confidence: [x]/10 for one-pass success
👥 Optimal team: [#] agents
───────────────────────────────────────────────────────
🚀 /execute .agents/plans/[feature-name].md
───────────────────────────────────────────────────────
Phase 7: Define Acceptance Criteria
After the Final Report has been output above (giving the user a chance to see the plan), invoke the acceptance-criteria-define skill if it is available in this system:
skill: "acceptance-criteria-define"
context: "<absolute path to the plan file just created>"
Pass the plan file path as the context. The skill will read the plan, derive proposed acceptance criteria, confirm them with the user, and write the agreed criteria into the plan file.
If the acceptance-criteria-define skill is not available: skip this phase and proceed directly to STOP below.
Do NOT execute the plan after this phase completes.
STOP – DO NOT EXECUTE