# planning-patterns: Writing Plans

## Overview

Write comprehensive implementation plans assuming the engineer has zero context for the codebase and questionable taste. Document everything they need to know: which files to touch for each task, the code to write, the docs they might need to check, and how to test each change. Give them the whole plan as bite-sized tasks. DRY. YAGNI. TDD. Frequent commits.

Plans are prompts. The plan will be consumed by an AI agent with zero prior context. Write it as you would write a prompt: specific, complete, unambiguous. If a different agent could misinterpret a step, the step is underspecified.

Assume the engineer is a skilled developer who knows almost nothing about the toolset or problem domain, and is not well versed in good test design.

**Core principle:** Plans must be executable without asking questions.

## The Iron Law

**NO VAGUE STEPS - EVERY STEP IS A SPECIFIC ACTION**

"Add validation" is not a step. "Write test for empty email, run it, implement check, run it, commit" - that's 5 steps.

## Bite-Sized Task Granularity

Each step is one action (2-5 minutes):

- "Write the failing test" - step
- "Run it to make sure it fails" - step
- "Implement the minimal code to make the test pass" - step
- "Run the tests and make sure they pass" - step
- "Commit" - step

Not a step:

- "Add authentication" (too vague)
- "Implement the feature" (multiple actions)
- "Test it" (which tests? how?)

Treat plan phases as a directed acyclic graph — each phase may only depend on predecessors, never on future phases. The plan-review-gate enforces this ordering.

**Decomposition trigger:** If a step touches 3+ files or takes more than one sentence to describe, split it. If a phase has more than 5 tasks, consider splitting the phase. Individual steps target 2-5 minutes; task clusters (a full RED-GREEN-REFACTOR cycle) up to 15 minutes.

## Plan Document Header

Every plan MUST start with this header:

```markdown
# [Feature Name] Implementation Plan

> **For Claude:** REQUIRED: Follow this plan task-by-task using TDD.
> **Design:** See `docs/plans/YYYY-MM-DD-<feature>-design.md` for full specification.

**Goal:** [One sentence describing what this builds]

**Architecture:** [2-3 sentences about approach]

**Tech Stack:** [Key technologies/libraries]

**Prerequisites:** [What must exist before starting]

**Durable Decisions:** [Foundational choices that apply across all phases — route structures, DB schema shape, key data models, auth approach, third-party service boundaries. Every phase references these.]

---
```

If a design document exists, always reference it in the header.

## Task Structure

### Task N: [Component Name]

**Files:**
- Create: `exact/path/to/file.ts`
- Modify: `exact/path/to/existing.ts:123-145`
- Test: `tests/exact/path/to/test.ts`

**Step 1: Write the failing test**

```typescript
test('specific behavior being tested', () => {
  const result = functionName(input);
  expect(result).toBe(expected);
});
```

**Step 2: Run test to verify it fails**

Run: `npm test tests/path/test.ts -- --grep "specific behavior"`
Expected: FAIL with "functionName is not defined"

**Step 3: Write minimal implementation**

```typescript
function functionName(input: InputType): OutputType {
  return expected;
}
```

**Step 4: Run test to verify it passes**

Run: `npm test tests/path/test.ts -- --grep "specific behavior"`
Expected: PASS

**Step 5: Commit**

```bash
git add tests/path/test.ts src/path/file.ts
git commit -m "feat: add specific feature"
```

## Context is King (Cole Medin Principle)

**The plan must contain ALL information for a single-pass implementation.**

A developer with zero codebase context should be able to execute the plan WITHOUT asking any questions.

### Context References Section (MUST READ!)

**Every plan MUST include a Context References section:**

```markdown
## Relevant Codebase Files

### Patterns to Follow
- `src/components/Button.tsx` (lines 15-45) - Component structure pattern
- `src/services/api.ts` (lines 23-67) - API service pattern

### Configuration Files
- `tsconfig.json` - TypeScript settings
- `.env.example` - Environment variables needed

### Related Documentation
- `docs/architecture.md#authentication` - Auth flow overview
- `README.md#running-tests` - Test commands
```

**Why:** Claude forgets context. External docs get stale. File:line references are always accurate.

## Distillation Rule

Plans MUST use distilled content, not summarized. Distilled = lossless compression preserving every entity, relationship, decision, and constraint. Summarized = lossy reduction that drops specifics. If a plan section can be shortened without losing any fact the builder needs, distill it. If shortening requires dropping facts, keep the full version. **Test:** Could a builder agent execute from this plan alone, without re-reading source files? If not, the plan is under-distilled.

## Validation Levels

Match validation depth to plan complexity:

| Level | Name | Commands | When |
|---|---|---|---|
| 1 | Syntax & Style | `npm run lint`, `tsc --noEmit` | Every task |
| 2 | Unit Tests | `npm test` | Low-Medium risk |
| 3 | Integration Tests | `npm run test:integration` | Medium-High risk |
| 4 | Manual Validation | User flow walkthrough | High-Critical risk |

Include specific validation commands in each task step.

## Production-Like Verification Planning

When the request needs real APIs, seeded data, browser flows, background jobs, or stress/load behavior, read `references/live-verification-strategy.md` before finalizing the plan.

Use that reference to add a `### Live Verification Strategy` section that names:

- harness manifest path
- setup, reset, seed, health, and cleanup commands
- first-party system boundaries vs external dependencies
- named proof scenarios with Given/When/Then
- stress profile and pass thresholds when load behavior matters

Do not silently downgrade production-like verification into replay-only, unit-only, or manual-only steps. If the live harness does not exist yet, keep that gap explicit in the plan.

## Requirements Checklist

Before writing a plan:

- [ ] Problem statement clear
- [ ] Users identified
- [ ] Functional requirements listed
- [ ] Non-functional requirements listed (performance, security, scale)
- [ ] Constraints documented
- [ ] Success criteria defined
- [ ] Existing code patterns understood
- [ ] Context References section prepared with file:line references

## Plan Completeness Gate

Before saving, verify every phase passes:

| Criterion | Test |
|---|---|
| Definition of done | Exit criteria are demonstrable and testable (not "Foundation complete") |
| Measurable deliverables | Each task names exact files to create/modify with file:line |
| Realistic estimates | No task exceeds 5 minutes; no phase exceeds 1 hour |
| Dependencies explicit | Phase N references only predecessors, never future phases |
| Risk mitigation present | Every risk with Score > 8 has a mitigation row |
| Testable at each phase | Each phase's exit criteria can be verified by running a command |
| Cross-phase contracts | Phase N exit criteria include the exact data shape, API contract, or file structure that Phase N+1 expects |

A plan missing any row is incomplete. Revise before saving.

**Exit criteria by example:** Do not write "Phase 1 complete when auth works." Write "Phase 1 complete when `curl -X POST /api/auth/login -d '{"email":"test@test.com","password":"pass"}'` returns 200 with `{token: string}`." Concrete input/output examples prevent misinterpretation.

## Risk Assessment Table

Classify each risk into one of four dimensions before scoring:

| Dimension | What to assess |
|---|---|
| Technical | Complexity beyond team experience, unknown libraries, integration unknowns |
| Timeline | Estimation uncertainty, resource availability, external dependency delivery dates |
| Quality | Testing gaps, regression surface area, manual-only verification steps |
| Security | Vulnerability exposure, auth surface, data sensitivity, compliance requirements |

For each identified risk:

| Risk | Probability (1-5) | Impact (1-5) | Score | Mitigation |
|---|---|---|---|---|
| API timeout | 3 | 4 | 12 | Retry with backoff |
| Invalid input | 4 | 2 | 8 | Validation layer |
| Auth bypass | 2 | 5 | 10 | Security review |

Score = Probability × Impact. Address risks with score > 8 first.

## Risk-Based Testing Matrix

Match testing depth to task risk:

| Task Risk | Example | Tests Required |
|---|---|---|
| Trivial | Typo fix, docs update | None |
| Low | Single file change, utility function | Unit tests only |
| Medium | Multi-file feature, new component | Unit + Integration tests |
| High | Cross-service, auth, state management | Unit + Integration + E2E tests |
| Critical | Payments, security, data migrations | All tests + Security audit |

How to assess risk:

- How many files touched? (1 = low, 3+ = medium, cross-service = high)
- Is it auth/payments/security? (always high or critical)
- Is it user-facing? (medium minimum)
- Can it cause data loss? (high or critical)

Use this matrix when planning test steps. Don't over-test trivial changes. Don't under-test critical ones.

## Functionality Flow Mapping

Before planning, document all flows:

**User Flow:**

1. User clicks [button]
2. System shows [form]
3. User enters [data]
4. System validates [input]
5. System saves [data]
6. System shows [confirmation]

**Admin Flow:**

1. Admin opens [dashboard]
2. Admin selects [item]
3. System shows [details]
4. Admin changes [setting]
5. System applies [change]

**System Flow:**

1. Request arrives at [endpoint]
2. Middleware validates [auth]
3. Controller calls [service]
4. Service queries [database]
5. Response returns [data]

## Architecture Decision Records (ADR)

When a plan involves choosing between multiple valid approaches, document the decision formally using this format:

```markdown
## ADR: [Decision Title]

**Context:** What situation or requirement prompted this decision?

**Decision:** What approach did we choose?

**Consequences:**
- **Positive:** [benefits of this choice]
- **Negative:** [tradeoffs we accept]
- **Alternatives Considered:** [what we didn't choose and why]
```

When to use ADR:

- Choosing between architectures (monolith vs microservices)
- Selecting libraries/frameworks (React vs Vue)
- Database decisions (SQL vs NoSQL)
- Authentication approaches (JWT vs sessions)

Save ADRs to: `docs/decisions/ADR-NNN-title.md`

## Red Flags - STOP and Revise

If you find yourself:

- Writing "add feature" without exact file paths
- Skipping the test step
- Combining multiple actions into one step
- Using "etc." or "similar" instead of explicit steps
- Planning without understanding existing code patterns
- Creating steps that take more than 5 minutes
- Not including expected output for test commands

STOP. Revise the plan with more specific steps.

## Rationalization Prevention

| Excuse | Reality |
|---|---|
| "They'll know what I mean" | No they won't. Be explicit. |
| "Too much detail is annoying" | Vague plans cause bugs. |
| "Testing is obvious" | Write the test command. |
| "File paths are discoverable" | Write the exact path. |
| "Commits are implied" | Write when to commit. |
| "They can figure out edge cases" | List every edge case. |
| "Plan isn't perfect yet" | Good plan now beats perfect plan never. Ship it, iterate in execution. |

## Output Format

# [Feature Name] Implementation Plan

> **For Claude:** REQUIRED: Follow this plan task-by-task using TDD.

**Goal:** [One sentence]

**Architecture:** [2-3 sentences]

**Tech Stack:** [Technologies]

**Prerequisites:** [Requirements]

**Durable Decisions:** [Foundational choices that apply across all phases — route structures, DB schema shape, key data models, auth approach, third-party service boundaries]

---

<!-- Phase 1 must be a tracer bullet — a thin vertical slice through ALL integration layers (schema → API → UI → tests). Proves architecture works before subsequent phases widen the slice. For infrastructure-only plans, Phase 1 may be setup. -->

## Phase 1: [Demonstrable Milestone]

> **Exit Criteria:** [What must be true when this phase is complete - e.g., "User can log in and receive JWT"]

### Task 1: [First Component]

**Files:**
- Create: `src/path/file.ts`
- Test: `tests/path/file.test.ts`

**Step 1:** Write failing test
[code block with test]

**Step 2:** Run test, verify fails
Run: `[command]`
Expected: FAIL

**Step 3:** Implement
[code block with implementation]

**Step 4:** Run test, verify passes
Run: `[command]`
Expected: PASS

**Step 5:** Commit
```bash
git add [files]
git commit -m "feat: [description]"
```

### Task 2: [Second Component]

...

## Risks

| Risk | P | I | Score | Mitigation |
|---|---|---|---|---|
| ... | ... | ... | ... | ... |

## Success Criteria

- All tests pass
- Feature works as specified
- No regressions
- Code reviewed

## Save the Plan (MANDATORY)

**One direct save is required - the plan file. Memory indexing is router-owned.**

### Step 1: Save Plan File (Use Write tool - NO PERMISSION NEEDED)

```
# First create directory
Bash(command="mkdir -p docs/plans")

# Then save plan using Write tool (permission-free)
Write(file_path="docs/plans/YYYY-MM-DD-<feature>-plan.md", content="[full plan content from output format above]")

# Do NOT auto-commit — let the user decide when to commit
```


### Step 2: Emit Router-Owned Memory Intent (CRITICAL)

Do **NOT** edit `.claude/cc10x/v10/*.md` from this skill.

Instead, use the planner agent's existing `MEMORY_NOTES` contract only for durable learnings, deferred items, or verification context that is not already derivable from the planner contract.

**WHY BOTH:** The plan file is the artifact. Router-owned memory finalization derives plan indexing from `PLAN_FILE` and workflow state, so this skill must not invent a second memory serialization path.

## Router-Owned Task Tracking

The planner does **not** create execution tasks. Its job is to create the plan artifact and emit a router contract.

After planning:
- Save the plan file.
- Update memory with the plan reference.
- Let the router create workflow and execution tasks from the approved plan.

Do not instruct the planner to call `TaskCreate` or `TaskUpdate` for phase tracking.

## Plan-Task Linkage (Context Preservation)

**The relationship between Plan Files and Tasks:**

```
PLAN FILE (Persistent - Source of Truth)
  Location: docs/plans/YYYY-MM-DD-{feature}-plan.md
  Contains: Full implementation details, TDD steps, file paths
  Survives: Session close, context compaction, conversation reset

        ↕  task description includes the plan file path

TASKS (Execution Engine)
  Contains: Status, dependencies, progress tracking
  Survives: Context compaction; can be shared across sessions via
            task list configuration (official Claude Code)
```


**Plan path-in-description is CRITICAL:** If context compacts mid-execution, the task description contains enough info to:
1. Find the plan file
2. Locate the exact phase/task
3. Continue without asking questions

**Phase Exit Criteria are CRITICAL:** Each phase MUST have a demonstrable milestone (not arbitrary naming):
- ❌ "Phase 1: Foundation" - Vague, when is it done?
- ✅ "Phase 1: User can authenticate" - Demonstrable, testable

## Execution Handoff

After saving the plan, offer execution choice:

**"Plan complete and saved to `docs/plans/<filename>.md`. Two execution options:**

**1. Subagent-Driven (this session)** - Fresh subagent per task, review between tasks, fast iteration

**2. Manual Execution** - Follow plan step by step, verify each step

**Which approach?"**
---

Repository: `romiluz13/cc10x`