Writing Plans

Create executable implementation plans that reduce ambiguity for whoever executes them.

Initialization

  1. Resolve Design Path:
    • If $ARGUMENTS provides a path (e.g., docs/plans/YYYY-MM-DD-topic-design/), use it as the design source.
    • If no argument is provided:
      • Search docs/plans/ for the most recent *-design/ folder matching the pattern YYYY-MM-DD-*-design/
      • If found, confirm with user: "Use this design: [path]?"
      • If not found or user declines, ask the user for the design folder path.
  2. Design Check: Verify the folder contains _index.md and bdd-specs.md.
  3. Context: Read bdd-specs.md completely. This is the source of truth for your tasks.
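The path-resolution logic in step 1 can be sketched as follows; the function name and the plans_dir default are illustrative, not part of the skill:

```python
import glob
import os

def resolve_design_path(argument=None, plans_dir="docs/plans"):
    """Return the design folder to use, or None to prompt the user."""
    if argument:
        return argument
    # YYYY-MM-DD prefixes sort lexicographically in date order,
    # so the last match is the most recent design folder.
    pattern = os.path.join(plans_dir, "????-??-??-*-design")
    candidates = sorted(glob.glob(pattern))
    return candidates[-1] if candidates else None
```

Because the folder names are date-prefixed, a plain lexicographic sort is enough to find the most recent design; no date parsing is needed.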

Background Knowledge

Core Concept: Explicit over implicit, granular tasks, verification-driven, context independence.

  • MANDATORY: Tasks must be driven by BDD scenarios (Given/When/Then).
  • MANDATORY: Test-First (Red-Green) workflow. Verification tasks must precede implementation tasks.
  • MANDATORY: When plans include unit tests, require that external dependencies (databases, network, third-party APIs) are isolated with test doubles.
  • PROHIBITED: Do not generate actual code - describe what to implement, not the implementation itself.
  • MANDATORY: One task per file. Each task gets its own .md file.
  • MANDATORY: _index.md contains overview and references to all task files.
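To illustrate what the test-double rule asks of executors (the plan itself describes this requirement but must not contain the code), a unit test isolates an external dependency roughly like so; validate_credentials and the fake database are hypothetical:

```python
from unittest.mock import Mock

# Hypothetical unit under test: it receives its database client as a
# parameter, so a test can substitute a test double for the real DB.
def validate_credentials(db, username, password):
    user = db.find_user(username)
    return user is not None and user["password"] == password

# The unit test replaces the real database with a Mock (test double),
# so no DB, network, or third-party API is touched:
fake_db = Mock()
fake_db.find_user.return_value = {"password": "s3cret"}
assert validate_credentials(fake_db, "alice", "s3cret")
assert not validate_credentials(fake_db, "alice", "wrong")
```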

Phase 1: Plan Structure

Define goal, architecture, constraints.

  1. Read Specs: Read bdd-specs.md from the design folder (generated by brainstorming).
  2. Draft Structure: Use ./references/plan-structure-template.md to outline the plan.

Phase 2: Task Decomposition

Break into small tasks mapped to specific BDD scenarios.

  1. Reference Scenarios: CRITICAL: Every task must explicitly include the full BDD Scenario content in the task file using Gherkin syntax. For example:

    ## BDD Scenario
    
    Scenario: [concise scenario title]
      Given [context or precondition]
      When [action or event occurs]
      Then [expected outcome]
      And [additional conditions or outcomes]
    

    The scenario content should be self-contained in the task file, not just a reference to bdd-specs.md. This allows the executor to see the complete scenario without switching files.

  2. Define Verification: CRITICAL: Verification steps must run the BDD specs (e.g., npm test tests/login.spec.ts).

  3. Enforce Ordering: Task N (Test/Red) -> Task N+1 (Implementation/Green).

  4. Declare Dependencies: MANDATORY: Each task file must include a **depends-on** field listing only true technical prerequisites — tasks whose output is required before this task can start. Rules:

    • A test task (Red) for feature X has no dependency on test tasks for other features
    • An implementation task (Green) depends only on its paired test task (Red), not on other features' implementations
    • Tasks that touch different files and test different scenarios are independent by default
    • PROHIBITED: Do not chain tasks sequentially just to impose execution order — use depends-on only when there is a real technical reason (e.g., "implement auth middleware" must precede "implement protected route test")
  5. Ensure Compatibility: Ensure tasks are compatible with superpowers:behavior-driven-development.

  6. Create Task Files: MANDATORY: Create one .md file per task. Filename pattern: task-<NNN>-<feature>-<type>.md.

    • Example: task-001-setup.md, task-002-feature-test.md, task-002-feature-impl.md
    • <NNN>: Sequential number (001, 002, ...)
    • <feature>: Feature identifier (e.g., auth-handler, user-profile)
    • <type>: Type (test, impl, config, refactor)
    • Test and implementation tasks for the same feature share the same NNN prefix, e.g., 002-feature-test and 002-feature-impl
  7. Describe What, Not How: PROHIBITED: Do not generate actual code. Describe what to implement (e.g., "Create a function that validates user credentials"), not the implementation (e.g., "def validate_credentials(username, password): ...").
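Putting items 1-7 together, a complete task file might look like the sketch below; the feature, file paths, and test command are hypothetical:

```markdown
# Task 002: Whale Discovery Test

**depends-on**: none

## BDD Scenario

Scenario: Discover whales from recent transfers
  Given a list of recent on-chain transfers
  When the discovery job runs
  Then wallets above the balance threshold are flagged as whales

## Files

- tests/whale-discovery.spec.ts (new)

## Steps

1. Write a failing test that exercises the scenario above (Red).

## Verification

- Run: npm test tests/whale-discovery.spec.ts
- Expected: the test fails because the feature is not yet implemented
```

Note the task describes what the test must cover, not the test code itself.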

Phase 3: Validation & Documentation

Verify completeness, confirm with user, and save.

  1. Verify: Check that every task defines a valid commit boundary and that no task is vague.
  2. Confirm: Get user approval on the plan.
  3. Save: Write to docs/plans/YYYY-MM-DD-<topic>-plan/ folder.
    • CRITICAL: _index.md MUST include "Execution Plan" section with references to all task files
    • CRITICAL: _index.md MUST include "BDD Coverage" section confirming all scenarios are covered
    • CRITICAL: _index.md MUST include "Dependency Chain" section with visual dependency graph (will be populated in Phase 4)
    • Example: - [Task 001: Setup project structure](./task-001-setup-project-structure.md)
    • Test and implementation tasks for the same feature share the same NNN prefix, e.g., [Task 002: Whale Discovery Test](./task-002-whale-discovery-test.md) and [Task 002: Whale Discovery Impl](./task-002-whale-discovery-impl.md)
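A matching _index.md skeleton, with hypothetical task names and scenario title:

```markdown
# Plan: Whale Discovery

## Execution Plan

- [Task 001: Setup project structure](./task-001-setup-project-structure.md)
- [Task 002: Whale Discovery Test](./task-002-whale-discovery-test.md)
- [Task 002: Whale Discovery Impl](./task-002-whale-discovery-impl.md)

## BDD Coverage

| Scenario | Covered by |
| --- | --- |
| Discover whales from recent transfers | task-002-whale-discovery-test.md, task-002-whale-discovery-impl.md |

## Dependency Chain

(populated from Sub-agent 2 findings in Phase 4)
```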

Phase 4: Plan Reflection

Before committing, launch sub-agents in parallel to verify plan quality and identify gaps.

Core reflection sub-agents (always required):

Sub-agent 1: BDD Coverage Review

  • Focus: Verify every BDD scenario from design has corresponding tasks
  • Output: Coverage matrix, orphaned scenarios, extra tasks without scenarios

Sub-agent 2: Dependency Graph Review

  • Focus: Verify depends-on fields are correct, check for cycles, identify missing dependencies
  • Output: Dependency graph, cycle detection, incorrect dependencies
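The cycle check this sub-agent performs can be sketched as a depth-first search over the depends-on fields; a minimal sketch, with task ids as illustrative strings:

```python
def find_cycle(depends_on):
    """Detect a dependency cycle via depth-first search.

    depends_on maps each task id to the list of task ids it depends on.
    Returns one cyclic path (first id repeated at the end), or None
    if the graph is acyclic.
    """
    visiting, done = set(), set()

    def dfs(task, path):
        if task in visiting:
            # Back edge: the current path revisits an ancestor.
            return path[path.index(task):] + [task]
        if task in done:
            return None
        visiting.add(task)
        for dep in depends_on.get(task, []):
            cycle = dfs(dep, path + [task])
            if cycle:
                return cycle
        visiting.remove(task)
        done.add(task)
        return None

    for task in depends_on:
        cycle = dfs(task, [])
        if cycle:
            return cycle
    return None
```

A correctly declared Red-Green pair ({"002-impl": ["002-test"], "002-test": []}) is acyclic; a cycle would indicate a depends-on field that must be fixed before execution.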

Sub-agent 3: Task Completeness Review

  • Focus: Verify each task has required structure (BDD scenario, files, steps, verification)
  • Output: Incomplete tasks list, missing sections by task

Additional sub-agents (launch as needed):

  • Red-Green Pairing Review - Verify test tasks have corresponding impl tasks
  • File Conflict Review - Identify tasks that modify the same files

Integrate and Update:

  1. Collect all sub-agent findings
  2. Prioritize issues by impact
  3. Update plan files to fix issues
  4. MANDATORY: Add dependency graph from Sub-agent 2 to _index.md in "Dependency Chain" section
  5. Re-verify updated sections

Output: Updated plan with issues resolved and dependency graph included in _index.md.

See ./references/plan-reflection.md for sub-agent prompts and integration workflow.

Phase 5: Git Commit

Commit the plan folder to git with proper message format.

Critical requirements:

  • Commit the entire folder: git add docs/plans/YYYY-MM-DD-<topic>-plan/
  • Prefix: docs: (lowercase)
  • Subject: Under 50 characters, lowercase
  • Footer: Co-Authored-By with model name
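Combined, a conforming commit message might read as follows; the subject and trailer are illustrative, and the referenced guide has the exact requirements:

```
docs: add whale discovery plan

Co-Authored-By: Claude <noreply@anthropic.com>
```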

See ../../skills/references/git-commit.md for detailed patterns.

Phase 6: Transition to Execution

Prompt the user to use superpowers:executing-plans to execute the plan.

Example prompt: "Plan complete. To execute this plan, use /superpowers:executing-plans."

PROHIBITED: Do NOT offer to start implementation directly.

Exit Criteria

Plan created with clear goal/constraints, decomposed tasks with file lists and verification, BDD steps, commit boundaries, no vague tasks, reflection completed, user approval.

References

  • ./references/plan-structure-template.md - Template for plan structure
  • ./references/task-granularity-and-verification.md - Guide for task breakdown and verification
  • ./references/plan-reflection.md - Sub-agent prompts for plan reflection
  • ../../skills/references/git-commit.md - Git commit patterns and requirements