Plan Crafting

Writes an executable plan document from a clearly defined work scope. Designed so tasks can be spawned as "worker-validator" pairs in parallel.

Core Principle

A plan document must be executable by a worker with zero codebase context, without any additional questions. All ambiguity must be resolved at the planning stage.

Hard Gates

  1. Every step must be executable. Placeholders (TBD, TODO, "implement later") are never allowed.
  2. Task conflicts must be prevented. Tasks modifying the same file must not run in parallel. Tasks with dependencies must wait for predecessor completion.
  3. Self-Review is mandatory. After writing the plan, verify its completeness yourself.
  4. Tasks decompose to minimal feature units. One task produces one clear deliverable.

When To Use

  • After the clarification skill completes and a Context Brief file has been generated
  • When the user explicitly requests plan creation with a clear prompt
  • When multi-step implementation is needed and task ordering with dependencies must be defined

Input

This skill takes a Context Brief file as input. The Context Brief generated by the clarification skill is used to populate the plan header:

| Context Brief Field | Plan Header Mapping |
|---|---|
| Goal | Goal |
| Scope (In/Out) | Work Scope (included/excluded) |
| Technical Context | Architecture + Tech Stack + basis for file structure mapping |
| Constraints | Reflected as constraints during task decomposition |
| Success Criteria | Used as Self-Review criteria |
| Open Questions | Reflected as assumptions in the plan, then confirmed with the user |

If no Context Brief file exists (user directly requests a plan): confirm essential information (goal, work scope, tech stack) with the user before writing the plan.
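To make the field mapping concrete, here is a minimal sketch that renders a plan header from a parsed brief. The dict keys and the example values are illustrative assumptions, not a prescribed schema:

```python
# Sketch: populate the plan header from a parsed Context Brief.
# The dict keys mirror the Context Brief fields in the table above;
# how the brief file itself is parsed is left open.

def render_plan_header(brief: dict) -> str:
    """Render the markdown plan header from Context Brief fields."""
    lines = [
        f"# {brief['feature_name']} Implementation Plan",
        "",
        f"**Goal:** {brief['goal']}",
        "",
        f"**Architecture:** {brief['technical_context']}",
        "",
        f"**Tech Stack:** {brief['tech_stack']}",
        "",
        "**Work Scope:**",
        f"- **In scope:** {brief['scope_in']}",
        f"- **Out of scope:** {brief['scope_out']}",
    ]
    return "\n".join(lines)

header = render_plan_header({
    "feature_name": "CSV Export",          # illustrative example values
    "goal": "Add CSV export to the reports page",
    "technical_context": "Extend the existing report service",
    "tech_stack": "Python, FastAPI",
    "scope_in": "CSV download endpoint",
    "scope_out": "XLSX export",
})
print(header.splitlines()[0])  # → # CSV Export Implementation Plan
```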

When NOT To Use

  • When work scope is still ambiguous (return to the clarification skill)
  • Single-file edits, simple bug fixes, or other single-step tasks
  • When the user explicitly says "skip the plan, just do it"

Plan Document Structure

Save Location

`docs/engineering-discipline/plans/YYYY-MM-DD-<feature-name>.md`

(User preferences for plan location override this default.)

Header

# [Feature Name] Implementation Plan

> **Worker note:** Execute this plan task-by-task using the run-plan skill or subagents. Each step uses checkbox (`- [ ]`) syntax for progress tracking.

**Goal:** [One sentence describing what this plan builds]

**Architecture:** [2-3 sentences about approach]

**Tech Stack:** [Key technologies/libraries]

**Work Scope:**
- **In scope:** [What will be implemented]
- **Out of scope:** [What is explicitly excluded]

---

File Structure Mapping

Before defining tasks, map out which files will be created or modified. Decomposition decisions are locked in at this stage.

  • Each file should have one clear responsibility.
  • Files that change together should live together. Split by responsibility, not by technical layer.
  • Follow existing codebase patterns. If the codebase uses large files, don't unilaterally restructure — but if a file you're modifying has grown unwieldy, including a split in the plan is reasonable.
  • File structure informs task decomposition. Each task should produce a self-contained change that makes sense independently.

Verification Discovery

Before decomposing tasks, discover the project's highest-level verification capability. This determines the Final Verification Task that closes every plan.

Discovery order (use the first match):

  1. Existing e2e tests — search for e2e/, tests/e2e/, cypress/, playwright/, test:e2e in package.json, e2e targets in Makefile/Taskfile
  2. Integration tests — search for tests/integration/, integration_test, test:integration scripts
  3. Verification skill or agent — check .claude/skills/, .claude/agents/, and installed plugins for anything named verify, validate, e2e, or test
  4. Project test suite — any test runner (pytest, jest, go test, cargo test, etc.) with broad coverage
  5. Build + lint — if no tests exist, the highest available verification is a successful build + lint pass
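The discovery order above can be sketched as a probe. The directory and script names are the common conventions listed above; Makefile/Taskfile targets are omitted for brevity, and real projects may need extra checks:

```python
# Sketch: walk the discovery order and return the first matching level.
import json
from pathlib import Path

def discover_verification(root: Path) -> str:
    """Return the highest verification level found under `root`."""
    scripts = {}
    pkg = root / "package.json"
    if pkg.exists():
        scripts = json.loads(pkg.read_text()).get("scripts", {})
    # 1. Existing e2e tests
    if any((root / d).is_dir() for d in ("e2e", "tests/e2e", "cypress", "playwright")) \
            or "test:e2e" in scripts:
        return "e2e"
    # 2. Integration tests
    if (root / "tests/integration").is_dir() or "test:integration" in scripts:
        return "integration"
    # 3. Verification skill or agent
    keywords = ("verify", "validate", "e2e", "test")
    for sub in (".claude/skills", ".claude/agents"):
        d = root / sub
        if d.is_dir() and any(k in p.name.lower() for p in d.iterdir() for k in keywords):
            return "skill/agent"
    # 4. Project test suite (runner detection is illustrative, not exhaustive)
    if "test" in scripts or (root / "tests").is_dir() or (root / "pytest.ini").exists():
        return "test-suite"
    # 5. Fall back to build + lint
    return "build-only"
```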

If no meaningful verification exists (level 5 only): Add a Task 0: Create Verification Infrastructure that sets up the minimal verification needed for this plan:

  • Identify the project's tech stack and appropriate test framework
  • Create an e2e or integration test that exercises the plan's core success criteria
  • This test should fail before implementation and pass after all tasks complete

Record the discovery result in the plan header:

**Verification Strategy:**
- **Level:** [e2e | integration | skill/agent | test-suite | build-only]
- **Command:** [exact command to run the verification]
- **What it validates:** [what passing this verification proves]

Project Capability Discovery

Before decomposing tasks, also discover project-level agents and skills that workers can leverage:

  1. Project agents — check .claude/agents/ for agents relevant to the task domain (e.g., a test-runner agent, a db-migration agent)
  2. Plugin agents — check installed plugins for specialized agents (e.g., build-validator, lint-fixer)
  3. Project skills — check .claude/skills/ for skills that match task operations

If useful agents/skills are found, reference them in task steps where applicable:

- [ ] **Step N: Run migration**

Use the project's `db-migration` agent for this step if available.
Run: `<migration command>`

Workers are not required to use discovered agents — they are hints for efficiency. The worker may execute steps directly if the agent is unavailable or unsuitable.

Task Decomposition

When decomposing tasks, consider the following:

1. Parallelism and Dependencies

Tasks should be designed for maximum parallel execution. However, the following cases require waiting for a predecessor task to complete:

  • Tasks modifying the same file (prevents file conflicts)
  • Tasks where one task's output is referenced by another (interface dependency)
  • Tasks that modify shared state (database schema, config files, etc.)
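The same-file rule can be checked mechanically. A minimal sketch, assuming tasks are represented as dicts with illustrative field names:

```python
# Sketch: detect file conflicts between tasks marked as parallel
# (i.e. tasks with no declared dependencies).

def find_file_conflicts(tasks):
    """Return (task_a, task_b, shared_files) for conflicting parallel tasks."""
    conflicts = []
    parallel = [t for t in tasks if not t.get("deps")]
    for i, a in enumerate(parallel):
        for b in parallel[i + 1:]:
            shared = set(a["files"]) & set(b["files"])
            if shared:
                conflicts.append((a["name"], b["name"], sorted(shared)))
    return conflicts

tasks = [
    {"name": "Task 1", "deps": [], "files": ["src/api.py", "tests/test_api.py"]},
    {"name": "Task 2", "deps": [], "files": ["src/api.py"]},          # conflict
    {"name": "Task 3", "deps": ["Task 1"], "files": ["src/api.py"]},  # ok: waits
]
print(find_file_conflicts(tasks))
# → [('Task 1', 'Task 2', ['src/api.py'])]
```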

Dependencies are stated in the task header:

### Task N: [Task Name]

**Dependencies:** Runs after Task K completes
**Files:**
- Create: `path/to/file`
- Modify: `path/to/existing-file:line-range`
- Test: `path/to/test-file`

Tasks with no dependencies are marked as parallelizable:

### Task N: [Task Name]

**Dependencies:** None (can run in parallel)
**Files:**
- Create: `path/to/file`
- Test: `path/to/test-file`
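Once dependencies are declared this way, the parallel execution order follows mechanically. A sketch using Kahn-style topological layering (task names are illustrative); tasks in the same wave can be spawned together:

```python
# Sketch: group tasks into parallel execution waves from declared dependencies.

def execution_waves(deps: dict[str, set[str]]) -> list[list[str]]:
    """Each wave contains tasks whose dependencies are all satisfied."""
    remaining = {t: set(d) for t, d in deps.items()}
    waves = []
    while remaining:
        ready = sorted(t for t, d in remaining.items() if not d)
        if not ready:
            raise ValueError("dependency cycle detected")
        waves.append(ready)
        for t in ready:
            del remaining[t]
        for d in remaining.values():
            d.difference_update(ready)
    return waves

deps = {
    "Task 1": set(),
    "Task 2": set(),
    "Task 3": {"Task 1"},
    "Final":  {"Task 1", "Task 2", "Task 3"},
}
print(execution_waves(deps))
# → [['Task 1', 'Task 2'], ['Task 3'], ['Final']]
```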

2. Worker-Validator Structure

Each task is designed so an independent worker (subagent) can execute it and a separate validator can verify it:

  • Worker: Executes the task's steps exactly as written. Makes no judgments beyond what the plan specifies.
  • Validator: Reviews the worker's output after completion. Checks test pass/fail, code quality, and spec compliance.

This structure enables spawning multiple tasks simultaneously, each independently verifiable.

3. Task Granularity

Each step is one action (2-5 minutes):

  • "Write the failing test" — one step
  • "Run it to make sure it fails" — one step
  • "Write the minimal code to make the test pass" — one step
  • "Run the tests and make sure they pass" — one step
  • "Commit" — one step

Task Format

### Task N: [Component Name]

**Dependencies:** [Predecessor task or "None (can run in parallel)"]
**Files:**
- Create: `exact/path/to/file`
- Modify: `exact/path/to/existing-file:123-145`
- Test: `tests/exact/path/to/test-file`

- [ ] **Step 1: Write the failing test**

```python
def test_specific_behavior():
    result = function(input)
    assert result == expected
```

- [ ] **Step 2: Run test to verify it fails**

Run: `pytest tests/path/test.py::test_name -v`
Expected: FAIL with "function not defined"

- [ ] **Step 3: Write minimal implementation**

```python
def function(input):
    return expected
```

- [ ] **Step 4: Run test to verify it passes**

Run: `pytest tests/path/test.py::test_name -v`
Expected: PASS

- [ ] **Step 5: Commit**

```bash
git add tests/path/test.py src/path/file.py
git commit -m "feat: add specific feature"
```

Final Verification Task

Every plan must end with a Final Verification Task that runs the discovered highest-level verification. This is always the last task, depends on all other tasks, and cannot be parallelized.

### Task N (Final): End-to-End Verification

**Dependencies:** All preceding tasks
**Files:** None (read-only verification)

- [ ] **Step 1: Run highest-level verification**

Run: `[verification command from Verification Strategy]`
Expected: ALL PASS

- [ ] **Step 2: Verify plan success criteria**

Manually check each success criterion from the plan header:
- [ ] [criterion 1]
- [ ] [criterion 2]
- ...

- [ ] **Step 3: Run full test suite for regressions**

Run: `[full test suite command]`
Expected: No regressions — all pre-existing tests still pass

If the final verification fails, the plan is not complete. The worker-validator loop in run-plan will handle failure response (see run-plan's E2E Failure Response Protocol).

No Placeholders

Every step must contain the actual content a worker needs. These are plan failures — never write them:

  • "TBD", "TODO", "implement later", "fill in details"
  • "Add appropriate error handling" / "add validation" / "handle edge cases"
  • "Write tests for the above" (without actual test code)
  • "Similar to Task N" (repeat the code — the worker may be reading tasks out of order)
  • Steps that describe what to do without showing how (code blocks required for code steps)
  • References to types, functions, or methods not defined in any task
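A scan for these patterns can be automated as part of Self-Review. A minimal sketch; the pattern list is illustrative and not exhaustive:

```python
# Sketch: scan a finished plan for the placeholder patterns listed above.
import re

PLACEHOLDER_PATTERNS = [
    r"\bTBD\b", r"\bTODO\b", r"implement later", r"fill in details",
    r"appropriate error handling", r"handle edge cases",
    r"write tests for the above", r"similar to task \d+",
]

def scan_placeholders(plan_text: str) -> list[tuple[int, str]]:
    """Return (line number, line) for every line matching a red-flag pattern."""
    hits = []
    for lineno, line in enumerate(plan_text.splitlines(), start=1):
        for pat in PLACEHOLDER_PATTERNS:
            if re.search(pat, line, flags=re.IGNORECASE):
                hits.append((lineno, line.strip()))
                break
    return hits

plan = "- [ ] Step 3: TODO add validation\n- [ ] Step 4: Run tests"
print(scan_placeholders(plan))
# → [(1, '- [ ] Step 3: TODO add validation')]
```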

Remember

  • Exact file paths always
  • Complete code in every step — if a step changes code, show the code
  • Exact commands with expected output
  • DRY, YAGNI, TDD, frequent commits

Self-Review

After writing the complete plan, look at the spec with fresh eyes and check the plan against it. This is a checklist you run yourself — not a subagent dispatch.

1. Spec coverage: Skim each section/requirement in the spec. Can you point to a task that implements it? List any gaps.

2. Placeholder scan: Search your plan for red flags — any of the patterns from the "No Placeholders" section above. Fix them.

3. Type consistency: Do the types, method signatures, and property names you used in later tasks match what you defined in earlier tasks? A function called `clearLayers()` in Task 3 but `clearFullLayers()` in Task 7 is a bug.

4. Dependency verification: Verify that parallel tasks don't modify the same file. Verify that no dependency chain is missing.

5. Verification coverage: Does the plan include a Final Verification Task? Does it reference the discovered verification command? If no verification was discovered, is there a Task 0 creating verification infrastructure?

If you find issues, fix them inline. No need to re-review — just fix and move on. If a spec requirement has no corresponding task, add the task.

Execution Handoff

After saving the plan, offer execution choice:

"Plan complete and saved to `docs/engineering-discipline/plans/<filename>.md`. How would you like to proceed?"

1. Subagent execution (recommended) — dispatch a fresh subagent per task, review between tasks, fast iteration

2. Inline execution — execute tasks in this session using the run-plan skill, batch execution with checkpoints

Anti-Patterns

| Anti-Pattern | Why It Fails |
|---|---|
| Marking tasks that modify the same file as parallel | File conflicts, unmergeable changes |
| Listing tasks without dependencies | Execution order tangles, interface mismatches |
| Steps that assume "the worker will figure it out" | Worker's arbitrary interpretation → spec drift |
| Approving a plan with placeholders | Blocked at execution stage, must return to planning |
| Completing a plan without Self-Review | Missing spec coverage, type mismatches, dependency errors go undetected |

Minimal Checklist

Self-check when plan writing is complete:

  • Do all tasks have exact file paths?
  • Do all steps contain executable code/commands?
  • Are there no file conflicts between parallel tasks?
  • Are dependency chains accurately stated?
  • Does the plan cover all spec requirements?
  • Are there no placeholders?
  • Is there a Verification Strategy in the plan header?
  • Is the Final Verification Task the last task in the plan?

Transition

After plan approval:

  • When the plan is ready to execute → run-plan skill
  • If ambiguity is discovered in the work scope → return to the clarification skill to resolve

This skill itself does not invoke the next skill. It ends by presenting the plan document and letting the user choose the next step.
