Skill: whiteboarding
Brainstorm → Design → Save → Handoff
Quick Reference
| Phase | Goal | Output |
|---|---|---|
| UNDERSTAND | Search codebase + clarify problem | Pattern summary + problem statement |
| EXPLORE | Research technologies + compare 2-3 approaches | Research summary + chosen approach |
| DETAIL | Break into implementation steps | Checklist with files/functions |
| VALIDATE | User confirms each section | Approval |
| SAVE | Write to docs/plans/ | Plan file ready for /code-foundations:building |
Key change: Phases 1 and 2 now SEARCH before asking or proposing. Do not rely on the user to know the codebase's patterns.
Crisis Invariants - NEVER SKIP
| Check | Why Non-Negotiable |
|---|---|
| Search codebase BEFORE questions | Patterns exist that user may not know about |
| Research BEFORE proposing approaches | Uninformed proposals waste the user's decision-making time |
| One question at a time | Multiple questions = cognitive overload = shallow answers |
| 2-3 approaches before committing | First idea is rarely best; comparison reveals trade-offs |
| User confirms each section | Unvalidated plans diverge from user intent |
| Save before executing | Plan file enables context refresh + checklist tracking |
Phase 1: UNDERSTAND (Discovery + Research)
Step 1a: Pattern Discovery (MANDATORY - Do First)
Before asking ANY questions, search the codebase:
SEARCH FOR:
1. Similar features/functionality (grep for keywords)
2. Same directory/module patterns (read nearby files)
3. Related components (how do similar things work?)
4. Naming conventions (what patterns exist?)
| Search | Action |
|---|---|
| Similar features | grep -r "keyword" across codebase |
| Module patterns | Read 2-3 files in target directory |
| Related components | Find how similar problems were solved |
| Conventions | Note naming, structure, error handling patterns |
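A minimal sketch of these searches, assuming a TypeScript codebase under `src/` and a hypothetical "export to CSV" feature; swap in the real keywords and target directory:

```bash
# 1. Similar features by keyword
grep -ril "export\|csv" src/
# 2. Patterns in the target directory (read a few neighboring files)
ls src/reports/ && head -50 src/reports/*.ts
# 3. Related components that solved similar problems
grep -rl "Exporter" src/
# 4. Naming conventions in use
grep -rh "export function" src/reports/ | head
```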
Output: Pattern Summary
## Existing Patterns Found
- [pattern 1]: [where found, how it works]
- [pattern 2]: [where found, how it works]
## Conventions to Follow
- Naming: [observed pattern]
- Structure: [observed pattern]
- Error handling: [observed pattern]
## Similar Implementations
- [file]: [what it does, relevance]
If no patterns found: State "No existing patterns found for [topic]. This will establish a new pattern."
Step 1b: Adaptive Questioning
After pattern discovery, classify complexity:
| Signal | Complexity | Question Count |
|---|---|---|
| Single file, clear scope | Simple | 2-3 questions |
| Multiple files, some unknowns | Medium | 4-5 questions |
| Architecture changes, many unknowns | Complex | 6-8 questions |
State classification: "This seems [simple/medium/complex]. Based on pattern discovery, I'll ask [N] questions."
Question Sequence (Ask ONE at a time via AskUserQuestion)
ENFORCEMENT: Each question below MUST use the AskUserQuestion tool. Do NOT output questions as plain text; the tool forces a stop and wait. Do not proceed until the user answers.
Simple (2-3 questions):
- What specific outcome do you want?
- What constraints should I know about?
- What does "done" look like?
Medium (add these):
4. Who/what will use this?
5. What could go wrong?
Complex (add these):
6. What other systems does this touch?
7. What's the rollback plan if it fails?
8. What's the testing strategy?
NOTE: Questions about "existing patterns" have been removed; we search instead of asking.
Question Format
Use multiple-choice when possible:
Which authentication approach fits best?
- [ ] JWT tokens (stateless, scalable)
- [ ] Session cookies (simpler, server-state)
- [ ] OAuth2 (if external providers needed)
- [ ] Other (describe)
Questioning Gate
STOP. You CANNOT proceed to Phase 2 until ALL of the following are true:
- Complexity classified (simple/medium/complex)
- Minimum questions asked: Simple=2, Medium=4, Complex=6
- Each question asked via AskUserQuestion (not text output)
- Each answer received and recorded
If you catch yourself about to skip to approaches — STOP. Count questions asked. If below minimum, ask the next one.
Output: Problem Statement
After ALL questions answered, summarize:
## Problem Statement
[1-2 sentences describing what we're building]
## Constraints
- [constraint 1]
- [constraint 2]
## Success Criteria
- [criterion 1]
- [criterion 2]
Get user confirmation via AskUserQuestion: "Does this capture what you want?"
Phase 2: EXPLORE (Research + Approaches)
Step 2a: Technology Research (Before Proposing)
Before proposing approaches, gather data:
Codebase Research
SEARCH FOR:
1. How similar problems are solved in this codebase
2. What libraries/patterns are already in use
3. What the codebase is NOT using (intentional omissions?)
| Check | Why |
|---|---|
| Existing dependencies | Don't propose a new library if a similar one already exists |
| Rejected patterns | Check git history/comments for "we tried X" |
| Team conventions | Match what's already working |
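A minimal sketch of these checks, assuming a Node/TypeScript project with a package.json and git history; adapt for other package managers:

```bash
# Existing dependencies: what's already installed before proposing a new library
jq -r '(.dependencies // {}) + (.devDependencies // {}) | keys[]' package.json
# Rejected patterns: commit messages hinting "we tried X and moved away"
git log --oneline -i -E --grep="revert|instead of|switch(ed)? from" | head -20
```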
Web Research (When technology choice is involved)
Use WebSearch/WebFetch when:
- Comparing libraries/frameworks
- Evaluating technology trade-offs
- Checking current best practices (your knowledge may be outdated)
SEARCH FOR:
1. "[technology A] vs [technology B] [current year]"
2. "[problem domain] best practices [current year]"
3. "[framework] [specific feature] implementation"
Output: Research Summary
## Codebase Findings
- Already using: [libraries, patterns]
- Similar solutions: [where, how]
## Web Research (if applicable)
- [Technology A]: [pros, cons, current status]
- [Technology B]: [pros, cons, current status]
- Recommendation: [based on research]
Step 2b: Generate Alternatives
You MUST present 2-3 approaches before proceeding.
CRITICAL: Approaches must be STRUCTURALLY different (different technology, pattern, or architecture). Variations of the same approach do NOT count:
- ❌ "JWT with refresh tokens" vs "JWT without refresh tokens" = same approach
- ✅ "JWT tokens" vs "Session cookies" vs "OAuth2" = different approaches
Approaches must be informed by research. Don't propose technologies you didn't research.
If user mentioned a solution in their initial request (e.g., "I'm thinking JWT"), this is exploratory input, NOT a decision. Still present 2-3 structurally different alternatives, informed by research.
| Approach | Trade-offs | Best When | Research Source |
|---|---|---|---|
| Option A | [pros/cons] | [conditions] | [codebase/web] |
| Option B | [pros/cons] | [conditions] | [codebase/web] |
| Option C | [pros/cons] | [conditions] | [codebase/web] |
Presentation Format
## Approach A: [Name] (Recommended)
**Idea:** [1-2 sentences]
**Pros:** [list]
**Cons:** [list]
**Effort:** [relative estimate]
## Approach B: [Name]
**Idea:** [1-2 sentences]
**Pros:** [list]
**Cons:** [list]
**Effort:** [relative estimate]
## Approach C: [Name] (if applicable)
...
Decision
Ask: "Which approach do you prefer, or should I elaborate on any?"
Record chosen approach and rationale:
## Chosen Approach: [Name]
**Rationale:** [why this over others]
Phase 3: DETAIL (Implementation-Ready Plan)
Break into Sections (200-300 words each)
For each section:
- Present the section
- Wait for user confirmation
- Proceed to next section
Section Template
### Section N: [Name]
**Goal:** [what this section accomplishes]
**Files to create/modify:**
- `path/to/file.ts` - [what changes]
- `path/to/other.ts` - [what changes]
**Implementation details:**
- [specific function/class/pattern]
- [key decisions]
- [edge cases to handle]
**Dependencies:** [what must be done first]
YAGNI Gate
Before each section, ask:
- Is this section actually needed?
- Could we ship without it?
- Are we building for hypothetical future needs?
If answer is "not needed now" → Remove from plan.
Phase 4: VALIDATE (Confirmation Loop)
Test Coverage Question (MANDATORY)
Before finalizing the plan, ask about test coverage:
How much test coverage do you want for this implementation?
1. 100% coverage (Recommended)
Unit tests for all new code + integration tests for critical paths
2. Backend only
Tests for server-side/API changes only
3. Backend + frontend
Tests for both server and client layers
4. None
Skip tests (not recommended - technical debt)
5. Ask at each phase
Decide test scope when building each phase
Record the answer in the plan file under ## Test Coverage.
Inform the building skill: this choice affects POST-GATE behavior; reviewers will check for tests matching the chosen coverage level.
Full Plan Review
Present complete plan structure:
# Plan: [Topic]
## Sections
1. [Section 1 name] - [1 sentence]
2. [Section 2 name] - [1 sentence]
3. ...
## Test Plan
- [test 1]
- [test 2]
## Questions/Concerns
- [any remaining uncertainties]
Ask: "Does this plan look complete? Any sections to add, remove, or modify?"
Phase 5: SAVE (Write Plan File)
File Location
docs/plans/YYYY-MM-DD-<topic-slug>.md
Model Recommendations (Apply Per Phase)
When writing each phase, recommend a model based on the phase's content:
OPUS_KEYWORDS = [refactor, architect, migrate, redesign, rewrite, overhaul]
HAIKU_KEYWORDS = [config, rename, typo, bump, cleanup, delete, remove]
If tasks <= 2 AND files <= 2 AND no OPUS_KEYWORDS:
→ haiku (simple, mechanical work)
If tasks >= 6 OR files >= 6 OR any OPUS_KEYWORD:
→ opus (complex, architectural work)
Otherwise:
→ sonnet (default)
Write **Model:** [model] under each phase heading. This is not optional: the plan must make the model choice visible so the user can adjust it before building begins.
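A sketch of that rule as a shell helper (the thresholds and keywords come from the rule above, which only branches on OPUS_KEYWORDS; the function name and arguments are illustrative):

```bash
# recommend_model TASK_COUNT FILE_COUNT "phase description"
recommend_model() {
  local tasks=$1 files=$2 text=$3
  local opus_kw='refactor|architect|migrate|redesign|rewrite|overhaul'
  if [ "$tasks" -ge 6 ] || [ "$files" -ge 6 ] || echo "$text" | grep -qiE "$opus_kw"; then
    echo "opus"    # complex, architectural work
  elif [ "$tasks" -le 2 ] && [ "$files" -le 2 ]; then
    echo "haiku"   # simple, mechanical work
  else
    echo "sonnet"  # default
  fi
}

recommend_model 2 1 "bump dependency versions, config cleanup"    # -> haiku
recommend_model 4 3 "migrate auth module to new session store"    # -> opus
```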
Plan File Schema
# Plan: [Topic]
**Created:** YYYY-MM-DD
**Status:** ready
---
## Context
[Problem statement from Phase 1]
## Constraints
- [constraint 1]
- [constraint 2]
## Chosen Approach
**[Approach name]**
[Rationale from Phase 2]
---
## Implementation Checklist
### Phase 1: [Name]
**Model:** [recommended model]
- [ ] [Specific task with file path]
- [ ] [Specific task with file path]
**Files:**
- `path/to/file.ts`
**Details:**
[Implementation specifics]
---
### Phase 2: [Name]
**Model:** [recommended model]
...
## Test Coverage
**Level:** [100% / Backend only / Backend + frontend / None / Per-phase]
## Test Plan
- [ ] Unit: [specific tests]
- [ ] Integration: [specific tests]
- [ ] Manual: [verification steps]
---
## Notes
- [edge cases]
- [gotchas]
- [decisions made during planning]
---
## Execution Log
_To be filled during /code-foundations:building_
Save Command
mkdir -p docs/plans
# Write plan file
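A fuller sketch of the save step, assuming the topic and plan body are already drafted (the topic string and slug handling here are placeholders):

```bash
# Illustrative only: derive the dated slug, then write the drafted plan
topic="user csv export"
slug=$(echo "$topic" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
plan_file="docs/plans/$(date +%Y-%m-%d)-${slug}.md"
mkdir -p docs/plans
cat > "$plan_file" <<EOF
# Plan: ${topic}
**Created:** $(date +%Y-%m-%d)
**Status:** ready
EOF
echo "Saved $plan_file"
```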
Phase 6: HANDOFF
Ask User How to Proceed
After saving the plan, use AskUserQuestion with these options:
Question: "Plan saved to docs/plans/YYYY-MM-DD-.md. How would you like to proceed?"
Options:
- Clear conversation and build (Recommended) - Fresh context for better execution
- Tell me what to do - Get step-by-step instructions to execute manually
If user selects option 1:
Execute the /clear command, then immediately run /code-foundations:building docs/plans/YYYY-MM-DD-<topic-slug>.md
If user selects option 2: Provide numbered steps the user can follow to implement the plan manually
Anti-Rationalization Table
| Rationalization | Reality |
|---|---|
| "I already know what to build" | Planning reveals unknowns you don't know you don't know |
| "This is too simple for planning" | Simple tasks have highest error rates |
| "Let's just start coding" | Code without plan = rework later |
| "One approach is obviously right" | If it's obvious, comparing takes 2 minutes |
| "User is waiting, skip questions" | Wrong solution fast < right solution slightly slower |
| "I'll figure out details during implementation" | Details in plan = checklist during execution |
| "Plan will be outdated by implementation" | Plan file tracks changes; no plan = no tracking |
| "Multiple choice is slower" | MC gets precise answers; open questions get vague ones |
| "I'll just plan in my head" | Mental plans don't persist. File = resumable artifact. Skip file = lose all planning work on context refresh. |
| "I'll batch questions to save time" | Batched questions get shallow, incomplete answers. One question = focused, complete answer. |
| "User mentioned X, so that's decided" | User-mentioned solutions are exploratory. Still compare 2-3 structurally different approaches. |
| "I'll ask user about patterns" | Search instead. User may not know all patterns. You have tools to find them. |
| "No need to search, I know this tech" | Your knowledge may be outdated. Search confirms current best practices. |
| "Searching takes too long" | 2 min search prevents 20 min wrong-approach rework. |
| "I'll research during implementation" | Research informs approach CHOICE. After choosing, it's too late. |
| "This codebase is new to me" | That's exactly why you search. Don't guess conventions - find them. |
| "The search results tell me enough" | Search informs YOUR understanding. Questions reveal USER intent. Both required. |
| "I can infer what they want from context" | Inference ≠ confirmation. Ask via AskUserQuestion or you'll plan the wrong thing. |
| "Questions will slow us down" | Wrong plan is slower. 2 minutes of questions saves 20 minutes of rework. |
Pressure Testing Scenarios
Scenario 1: User Wants to Skip Planning
Situation: User says "just build it" or "we don't need a plan."
Response: "I can build without planning, but past experience shows:
- Plans catch issues before code exists
- Plan files enable context refresh for better execution
- Checklist tracking reduces forgotten edge cases
How about a quick plan (3-4 questions, 5 minutes)? Or should I proceed without?"
Scenario 2: Vague Requirements
Situation: User gives unclear or incomplete requirements.
Response: Ask clarifying questions ONE AT A TIME. Do NOT guess or assume. Each question should narrow scope until requirements are concrete.
Scenario 3: User Rejects All Approaches
Situation: User doesn't like any of the 2-3 approaches presented.
Response: "What's missing from these approaches? I'll generate alternatives that address [specific concern]."
Chaining
- RECEIVES FROM: User request, feature description, user story
- CHAINS TO: building (via saved plan file)
- RELATED: oberplan, aposd-designing-deep-modules, cc-construction-prerequisites