# plan-feature (SKILL.md)

Plan a new task.

โš ๏ธ CRITICAL INSTRUCTIONS

DO NOT EXECUTE THE PLAN AFTER CREATING IT.

After the plan is created:

  1. Show the user where the plan was saved
  2. Tell them to run /execute .agents/plans/[feature-name].md
  3. STOP

Execution agent rules (include verbatim in every generated plan):

  • Make ALL code changes required by the plan
  • Delete debug logs added during execution (keep pre-existing ones)
  • Leave ALL changes UNSTAGED — do NOT run git add or git commit

## 🚨 WRITE PLAN TO FILE ONLY — NEVER TO CLI 🚨

  • Write to .agents/plans/[feature-name].md using the Write/Edit tool
  • Do NOT output plan content in your response
  • CLI output: summary only (2–3 sentences), questions, confirmation, final report

Feature: $ARGUMENTS

## Step 0: Log Planning Start

Before writing anything, check whether PROGRESS.md already has a relevant section for this feature — especially if this skill was triggered by referencing something in PROGRESS.md (e.g. the user pointed at a section, or the feature description matches an existing entry).

If a relevant section already exists: update its top fields only (status, add Plan File line). Do NOT create a duplicate entry.

**Status**: ✅ Planned
**Plan File**: .agents/plans/[feature-name].md

If no relevant section exists: add a new entry using the template below.

## Feature: [Feature Name]
### Planning Phase
**Status**: In Progress
**Started**: [date]
**Plan File**: .agents/plans/[feature-name].md

Use Write if PROGRESS.md doesn't exist, Edit if it does.
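The create-vs-update decision can be sketched as a plain existence check (the PROGRESS.md path is the one this skill already uses; in practice the agent makes the same decision with its Write/Edit tools):

```shell
# Sketch: decide create-vs-update for PROGRESS.md before logging the entry.
if [ -f PROGRESS.md ]; then
  echo "update: edit the existing section's top fields in place"
else
  echo "create: write PROGRESS.md with a new template entry"
fi
```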


## Planning Process

### Phase 1: Feature Understanding

  • Extract core problem, user value, feature type (New Capability/Enhancement/Refactor/Bug Fix), complexity
  • Map affected systems
  • Draft user story: As a / I want / So that

### Phase 2: Codebase Intelligence

Check for similar features first. If found, use AskUserQuestion before proceeding.

Check existing patterns for: integrations, endpoints, auth, schema, tools, UI components.

Gather in parallel:

  1. Project structure, frameworks, service boundaries, config files
  2. Naming conventions, error handling, logging patterns, CLAUDE.md
  3. External libraries, docs/, ai_docs/, .agents/reference
  4. Test framework, organization, coverage requirements
  5. Integration points: routers, models, auth patterns

Use AskUserQuestion if ANY of these exist:

  • Unclear, ambiguous, or contradictory requirements
  • Multiple valid approaches without clear preference
  • Conflicts between request and existing patterns
  • Uncertainty about data models, schemas, or interfaces

Default to existing patterns. Document any divergences. Design for parallel execution.

### Phase 3: External Research

APIs (if applicable): Run /explore-api [name] for each. Verify: features available, version compatible, rate limits sufficient, ToS permits use, auth accessible, no blockers.

Dependencies: Test in isolated venv first. Check conflict tree with pipdeptree/npm list. Document compatible versions and known conflicts.
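One way to sketch the isolated probe, assuming a Python project (`requests` stands in for the dependency under consideration; an npm project would use a scratch directory and `npm ls` instead):

```shell
# Sketch: probe a candidate dependency in a throwaway venv, not the project env.
python3 -m venv /tmp/dep-probe
/tmp/dep-probe/bin/pip install --quiet requests pipdeptree
/tmp/dep-probe/bin/pipdeptree -p requests    # inspect its conflict tree
rm -rf /tmp/dep-probe                        # nothing leaks into the project
```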

Technology: Latest versions, official docs (with section anchors), common gotchas, breaking changes.

### Phase 4: Strategic Design

Think through:

  • How does this fit existing architecture?
  • Critical dependencies and order of operations?
  • What could go wrong? (edge cases, race conditions, errors)
  • Performance, security, maintainability implications?

### Phase 5: Write the Plan

Read ~/.claude/skills/plan-feature/PLAN_TEMPLATE.md and use it as the structure for the output plan file, filling every section with feature-specific content.

Output: .agents/plans/{kebab-case-name}.md
Length: 500–700 lines — verify with wc -l and adjust
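A minimal sketch of both steps, assuming the feature name arrives as a free-form string (the `kebab` helper is illustrative, not part of the skill):

```shell
# Sketch: derive the kebab-case plan path, then verify the length target.
kebab() {
  printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | sed -E 's/[^a-z0-9]+/-/g; s/^-+|-+$//g'
}
plan=".agents/plans/$(kebab "Add OAuth Login").md"
echo "$plan"    # → .agents/plans/add-oauth-login.md
lines=$(wc -l < "$plan")
if [ "$lines" -ge 500 ] && [ "$lines" -le 700 ]; then
  echo "length OK"
else
  echo "adjust: $lines lines"
fi
```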

### Phase 6: Coverage Review Pass

Run automatically after plan is written. No user input needed unless a gap can't be resolved.

  1. Map new code paths — for every file created/modified: functions, branches, async flows, error paths, API surface, data mutations

  2. Map project impact — existing files that import/call changed code; existing tests that may need updating after the change

  3. Gap analysis — for each path, verify a test in the plan covers it: ✅ Covered or ⚠️ Gap

  4. Fill gaps — add an automated test by default; if genuinely impossible to automate, add a manual test with a one-sentence justification ("requires physical hardware", "CAPTCHA blocks automation"). "Hard to automate" and "requires a browser" are NOT valid reasons — use Playwright.

  5. Re-verify — repeat until all paths are ✅ or documented manual-only. Update the Test Automation Summary in the plan.

  6. Script deliverables check — if the plan introduces or modifies a runnable script (demo runner, CLI, orchestrator), verify the plan includes ALL of the following criteria (distinct from scenario-logic criteria):

    • "Running <script> completes the setup phase without raising an exception." (runnability — unit tests do not substitute for this)
    • "All user-visible output uses ASCII-safe characters, or the script explicitly reconfigures stdout encoding at startup." (cross-platform compatibility)
    • If the script spawns claude as a subprocess: "The subprocess environment strips CLAUDECODE (and any other launcher sentinels) before invoking claude." (env isolation)

    These criteria test that the script runs at all, before any criteria about what it reports. A script can be entirely broken (encoding errors, misconfigured subprocesses) while every unit test passes — they are separate surfaces requiring separate validation.
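The env-isolation and encoding criteria can be demonstrated from the launcher side. This is a sketch: `CLAUDECODE` is the sentinel named above, and the `PYTHONIOENCODING` line applies only when the spawned script is Python.

```shell
# Sketch: strip the launcher sentinel before spawning claude as a subprocess.
# "env -u" removes the variable from the child's environment only.
export CLAUDECODE=1
env -u CLAUDECODE sh -c 'echo "CLAUDECODE in child: ${CLAUDECODE:-unset}"'
# → CLAUDECODE in child: unset

# Sketch: force UTF-8 stdout for a spawned Python script (encoding criterion).
PYTHONIOENCODING=utf-8 python3 -c 'print("encoding ok")'
# → encoding ok
```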


## Post-Planning Verification

```shell
test -f .agents/plans/[feature-name].md && echo "✅ Plan exists" || echo "❌ Missing"
wc -l .agents/plans/[feature-name].md
```

  • Plan written to file (not CLI)
  • File exists and is 500–700 lines
  • Coverage Review Pass complete (Steps 1–6)
  • All ambiguities resolved via AskUserQuestion
  • Tasks in parallel execution waves with WAVE, DEPENDS_ON, AGENT_ROLE
  • Interface contracts defined; 30%+ of tasks parallelizable (or explain why not)
  • Every test marked ✅/⚠️ with tool, file path, and run command
  • Manual tests have an automation-impossibility justification
  • Test Automation Summary updated after gap-filling
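A quick tally can back the checklist above; this sketch assumes the plan uses the literal ✅/⚠️ marker strings from the Coverage Review Pass:

```shell
# Sketch: count covered paths vs. remaining gaps in the written plan.
plan=".agents/plans/[feature-name].md"
covered=$(grep -c '✅ Covered' "$plan")
gaps=$(grep -c '⚠️ Gap' "$plan")
echo "covered=$covered gaps=$gaps"    # gaps should be 0 before finishing
```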

## Final Report

Output to CLI after saving the plan. Do NOT include plan content in this message.

✅ Plan Created

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📋 Feature: [name]
📄 Plan: .agents/plans/[feature-name].md
📏 Lines: [n] (target: 500–700)
⚡ Complexity: [Low/Medium/High]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

📝 [2–3 sentence summary of feature and approach]

⚡ Parallel Execution:
- Waves: [#] | Concurrent tasks: [#] | Max speedup: [x]x | Sequential: [#]

🧪 Coverage Summary:
- New code paths: [#]/[#] covered ([XX]%)
- Existing code re-validated: [#] areas, [#] tests added/updated
- Automated: [#] ([XX]%) — [tools used]
- Manual: [#] ([XX]%) — [one-line reason each, or "None"]
- Gaps remaining: [# or "None"] — [reason for each, if any]

⚠️ Risks: [2–4 with mitigations]
🔍 Patterns used: [similar features referenced, or "None — new pattern"]
📊 Tasks: [#] total ([x] parallel, [y] sequential)
🎯 Confidence: [x]/10 for one-pass success
👥 Optimal team: [#] agents

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

🚀 /execute .agents/plans/[feature-name].md

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

## Phase 7: Define Acceptance Criteria

After the Final Report has been output above (giving the user a chance to see the plan), invoke the acceptance-criteria-define skill if it is available in this system:

```
skill: "acceptance-criteria-define"
context: "<absolute path to the plan file just created>"
```

Pass the plan file path as the context. The skill will read the plan, derive proposed acceptance criteria, confirm them with the user, and write the agreed criteria into the plan file.

If the acceptance-criteria-define skill is not available: skip this phase and proceed directly to STOP below.

Do NOT execute the plan after this phase completes.


## STOP — DO NOT EXECUTE
