# Writing Plans
## Overview
Write comprehensive implementation plans assuming the engineer has zero context for our codebase and questionable taste. Document everything they need to know: which files to touch for each task, the code to write, how to test it, and which docs they might need to check. Give them the whole plan as bite-sized tasks. DRY. YAGNI. TDD. Frequent commits.
Assume they are a skilled developer but know almost nothing about our toolset or problem domain, and that their grasp of good test design is limited.
Announce at start: "I'm using the writing-plans skill to create the implementation plan."
Save plans to the active conductor track directory: `conductor/tracks/{track_id}/plan.md`
If no conductor track exists, create one first using the `/orchestrator-supaconductor:new-track` flow.
## Bite-Sized Task Granularity
Each step is one action (2-5 minutes):
- "Write the failing test" - step
- "Run it to make sure it fails" - step
- "Implement the minimal code to make the test pass" - step
- "Run the tests and make sure they pass" - step
- "Commit" - step
## Plan Document Header
Every plan MUST start with this header:
# [Feature Name] Implementation Plan
> **For Claude:** REQUIRED SUB-SKILL: Use orchestrator-supaconductor:executing-plans to implement this plan task-by-task.
**Goal:** [One sentence describing what this builds]
**Architecture:** [2-3 sentences about approach]
**Tech Stack:** [Key technologies/libraries]
---
## Task Structure
### Task N: [Component Name]
**Files:**
- Create: `exact/path/to/file.py`
- Modify: `exact/path/to/existing.py:123-145`
- Test: `tests/exact/path/to/test.py`
**Step 1: Write the failing test**
```python
def test_specific_behavior():
    result = function(input)
    assert result == expected
```
**Step 2: Run test to verify it fails**
Run: `pytest tests/path/test.py::test_name -v`
Expected: FAIL with "function not defined"
**Step 3: Write minimal implementation**
```python
def function(input):
    return expected
```
**Step 4: Run test to verify it passes**
Run: `pytest tests/path/test.py::test_name -v`
Expected: PASS
**Step 5: Commit**
```bash
git add tests/path/test.py src/path/file.py
git commit -m "feat: add specific feature"
```
## Remember
- Exact file paths always
- Complete code in plan (not "add validation"); see the sketch after this list
- Exact commands with expected output
- Reference relevant skills with @ syntax
- DRY, YAGNI, TDD, frequent commits
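To make the "complete code, not 'add validation'" rule concrete, here is a hedged sketch of what a plan step should carry; the `validate_signup` function and its field names are hypothetical, not from this codebase:

```python
# Too vague for a plan step: "add validation to the signup handler".
# A plan step should instead contain the exact code the executor will paste in:
def validate_signup(payload: dict) -> None:
    """Raise ValueError when a required signup field is missing or empty."""
    for field in ("email", "password"):
        if not payload.get(field):
            raise ValueError(f"missing required field: {field}")
```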
## Conductor Integration
This skill is part of the Conductor workflow. All plans are saved to the active track directory.
When invoked by the Conductor orchestrator (with `--track-dir` and `--spec` parameters):
- Read the spec from the `--spec` path
- Read project context from `conductor/product.md` and `conductor/tech-stack.md`
- Save the plan to `{--track-dir}/plan.md`
- Include a DAG section for parallel execution
- Update the track's `metadata.json` checkpoint to `PLAN: PASSED`
- Return control to the orchestrator — do NOT start execution
When invoked standalone (no parameters):
- Look for the active track in `conductor/tracks.md`
- Read its `spec.md` for requirements
- Save the plan to `conductor/tracks/{track_id}/plan.md`
- Update the track's `metadata.json` checkpoint to `PLAN: PASSED` (see the sketch below)
- After saving, announce the plan summary and HALT — do NOT start execution
- The user should run `/orchestrator-supaconductor:implement` or `/orchestrator-supaconductor:goto execute`
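As a rough illustration of the standalone hand-off, here is a minimal Python sketch. It assumes the checkpoint lives in a top-level `checkpoint` field and the helper name `record_plan_checkpoint` is invented for this example; the real `metadata.json` schema is owned by the conductor orchestrator and may differ.

```python
import json
from pathlib import Path

def record_plan_checkpoint(track_dir: str, plan_markdown: str) -> None:
    """Save plan.md and mark the track's planning checkpoint as passed.

    Sketch only: assumes a top-level "checkpoint" key in metadata.json,
    which is an assumption rather than the documented schema.
    """
    track = Path(track_dir)
    (track / "plan.md").write_text(plan_markdown)

    metadata_path = track / "metadata.json"
    metadata = json.loads(metadata_path.read_text())
    metadata["checkpoint"] = "PLAN: PASSED"  # assumed field name
    metadata_path.write_text(json.dumps(metadata, indent=2))
    # After this, announce the plan summary and HALT; execution is handled by
    # /orchestrator-supaconductor:implement or the goto-execute flow.
```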