# Create Prompt Plan (`create-prompt-plan`)

Read a specification file and decompose it into a series of implementation prompts. Each prompt represents one unit of work for a separate Claude Code session. Save the output to `.turbo/prompts.md`.
Skill assignment generally happens later, when `/pick-next-prompt` plans each prompt for implementation. However, if the spec implies domain-specific skills, mention them in the prompt text as hints.
## Step 1: Read the Spec

Read the spec file. Default location: `.turbo/spec.md`. Accept a different path if the user provides one.
Identify the following (an example follows the list):
- Scope — total surface area of work
- Work categories — UI, backend, data layer, infrastructure, tests, documentation, tooling
- Dependencies — which pieces must exist before others can start
- Greenfield vs existing — is there an established codebase to work within
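For instance, for a hypothetical "team wiki" spec, the resulting notes might read like this (all values below are illustrative assumptions, not a required format):

```
Scope: ~8 screens, REST API, Postgres schema, CI pipeline
Categories: UI, backend, data layer, infrastructure, tests
Dependencies: schema before API, API before UI; CI is independent
Greenfield: yes (empty repository, stack named in the spec)
```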
## Step 2: Decompose Into Prompts

Split the spec into prompts, each sized to fit within a single Claude Code session's context.

### Sizing
- One prompt = one logical unit of work (a feature, a subsystem, a layer)
- Never split tightly coupled pieces across prompts (if UI + API + tests are inseparable, keep them together)
- Split independent subsystems into separate prompts (see the sketch after this list)
- If a prompt would touch more than ~15-20 files or span 3+ unrelated subsystems, split further
- If the entire scope fits one session, produce a single prompt
- Each prompt must leave the codebase fully integrated, with no components unreachable from the project's entry points
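A minimal sketch of one split decision, for a hypothetical spec covering profiles, billing, and notifications (all feature names are assumptions):

```
Keep together (tightly coupled):
  Prompt 2: User profiles - model, API endpoints, edit form, tests
            (the form is useless without the endpoint, so ship as one unit)

Split apart (independent subsystems):
  Prompt 3: Billing - payment flow, webhooks, tests
  Prompt 4: Notifications - email templates, send queue, tests
```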
### Ordering

Order by dependency, foundational work before dependent work (a worked example follows the list):
- Setup and scaffolding (project init, config, CI)
- Data and domain layer (models, schemas, types)
- Core business logic
- API and service layer
- UI and frontend
- Integration and end-to-end concerns
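For a hypothetical notes app, the ordered plan might look like this (titles and prompt count are illustrative):

```
Prompt 1: Project scaffolding (repo init, lint, CI)     depends on: none
Prompt 2: Data layer (Note/User models, migrations)     depends on: Prompt 1
Prompt 3: Core logic (note CRUD, search)                depends on: Prompt 2
Prompt 4: REST API (routes, auth, tests)                depends on: Prompt 3
Prompt 5: Web UI (list/edit views, tests)               depends on: Prompt 4
Prompt 6: End-to-end tests, deploy config               depends on: Prompt 5
```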
### Status tracking

Each prompt gets a status: `pending`, `in-progress`, or `done`.
## Step 3: Write `.turbo/prompts.md`

Create the `.turbo/` directory if it does not exist. Write the output using this format:

````markdown
# Prompt Plan: [Project/Feature Name]
Source: `.turbo/spec.md`
Generated: [date]
Total prompts: N
---
## Prompt 1: [Descriptive Title]
**Status:** pending
**Context:** [What state the project is in before this session starts]
**Depends on:** none
### Prompt
```
[What to build — specific files, features, acceptance criteria.
What "done" looks like — tests passing, endpoints working, etc.
Reference to spec sections if helpful.]
```
---
## Prompt 2: [Descriptive Title]
**Status:** pending
**Context:** [What prior prompts built that this one depends on]
**Depends on:** Prompt 1
### Prompt
```
[What to build...]
```
````
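A filled-in entry might look like this (a hypothetical example; the project details and file paths are assumptions):

````markdown
## Prompt 2: Data Layer

**Status:** pending
**Context:** Prompt 1 scaffolded the repo with linting and CI in place.
**Depends on:** Prompt 1

### Prompt

```
Implement the Note and User models in src/models/ with migrations,
per the spec's "Data Model" section. Done means migrations run cleanly
on a fresh database and the model unit tests pass.
```
````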
## Step 4: Review Against Spec

After writing, spawn a subagent (model: `opus`, do not set `run_in_background`) to review the prompt plan against the source spec. The subagent should do the following (a sketch of its instructions appears after the list):
- Read `references/prompt-plan-reviewer.md` for review guidelines
- Read the prompt plan (`.turbo/prompts.md`) and the source spec in full
- Produce a review report following the format in the guidelines
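The subagent's instructions might read roughly like this (an illustrative sketch; the exact wording is an assumption, not part of the skill):

```
You are reviewing a prompt plan against its source spec.
1. Read references/prompt-plan-reviewer.md for the review guidelines.
2. Read .turbo/prompts.md and the source spec (.turbo/spec.md) in full.
3. Check coverage, ordering, sizing, and self-containment of each prompt.
4. Produce a review report following the format in the guidelines.
```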
After the subagent returns its review report, run `/evaluate-findings` on the recommendations to triage issues and apply fixes to `.turbo/prompts.md`.
## Step 5: Present Summary

After writing and verification, present a brief summary: the number of prompts, a one-line description of each prompt's scope, and any assumptions made about ambiguities.
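For example, the summary might read like this (a hypothetical output; numbers and titles are illustrative):

```
Prompt plan written: 3 prompts.
1. Scaffolding: repo init, lint config, CI pipeline
2. Data + API: Note model, migrations, CRUD endpoints, tests
3. Web UI: list/edit views wired to the API, tests
Assumptions: the spec does not name a database; defaulted to Postgres.
```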
## Rules
- Never merge setup and finalization into the same prompt
- If the spec is ambiguous about what belongs together, split conservatively (smaller prompts are safer than oversized ones)
- Each prompt must be self-contained with enough context to understand the work without reading the full spec
- The `.turbo/prompts.md` file is the only output — do not modify the spec or project files