# Writing Plans
## Overview
Write comprehensive implementation plans assuming the engineer has zero context for our codebase and questionable taste. Document everything they need to know: which files to touch for each task, the code to write, which docs to check, and how to test it. Deliver the whole plan as bite-sized tasks. DRY. YAGNI. TDD. Frequent commits.

Assume they are a skilled developer who knows almost nothing about our toolset or problem domain, and who doesn't know good test design very well.
Announce at start: "I'm using the writing-plans skill to create the implementation plan."
Context: This should be run after design-stage work has passed design-readiness-check, ideally in a dedicated worktree prepared via using-git-worktrees.
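A typical worktree setup, sketched here with placeholder names (the using-git-worktrees skill covers the full workflow):

```bash
# Create an isolated worktree on a new branch (path and branch name are placeholders)
git worktree add ../myproject-plan -b feature-plan
cd ../myproject-plan
```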
Save plans to: `docs/exec-plans/active/YYYY-MM-DD-<feature-name>.md`
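For example, a plan for a hypothetical rate-limiter feature written on 2025-06-01 would be saved as `docs/exec-plans/active/2025-06-01-rate-limiter.md`.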
## Bite-Sized Task Granularity
Each step is one action (2-5 minutes):
- "Write the failing test" - step
- "Run it to make sure it fails" - step
- "Implement the minimal code to make the test pass" - step
- "Run the tests and make sure they pass" - step
- "Commit" - step
## Plan Document Header
Every plan MUST start with this header:
# [Feature Name] Implementation Plan
> **For Claude:** REQUIRED SUB-SKILL: Use loom:executing-plans to implement this plan task-by-task.
**Goal:** [One sentence describing what this builds]
**Architecture:** [2-3 sentences about approach]
**Tech Stack:** [Key technologies/libraries]
---
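A filled-in header might look like this (feature, architecture, and stack are invented for illustration):

```markdown
# Rate Limiter Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use loom:executing-plans to implement this plan task-by-task.

**Goal:** Add per-client request rate limiting to the public API.

**Architecture:** A token-bucket limiter wrapped around the request handler; bucket state lives in the existing Redis cache.

**Tech Stack:** Python, FastAPI, Redis

---
```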
## Task Structure
### Task N: [Component Name]
**Files:**
- Create: `exact/path/to/file.py`
- Modify: `exact/path/to/existing.py:123-145`
- Test: `tests/exact/path/to/test.py`
**Step 1: Write the failing test**
```python
def test_specific_behavior():
    result = function(input)
    assert result == expected
```
**Step 2: Run test to verify it fails**
Run: `pytest tests/path/test.py::test_name -v`
Expected: FAIL with "function not defined"
**Step 3: Write minimal implementation**
```python
def function(input):
    return expected
```
**Step 4: Run test to verify it passes**
Run: `pytest tests/path/test.py::test_name -v`
Expected: PASS
**Step 5: Commit**
```bash
git add tests/path/test.py src/path/file.py
git commit -m "feat: add specific feature"
```
## Remember
- Exact file paths always
- Complete code in plan (not "add validation")
- Exact commands with expected output
- Reference relevant skills with @ syntax
- DRY, YAGNI, TDD, frequent commits
## Execution Handoff
After saving the plan, offer execution choice:
"Plan complete and saved to docs/exec-plans/active/<filename>.md. Two execution options:
1. Subagent-Driven (this session) - I dispatch fresh subagent per task, review between tasks, fast iteration
2. Parallel Session (separate) - Open new session with executing-plans, batch execution with checkpoints
Which approach?"
If Subagent-Driven chosen:
- REQUIRED SUB-SKILL: Use loom:subagent-driven-development
- Stay in this session
- Fresh subagent per task + code review
If Parallel Session chosen:
- Guide them to open new session in worktree
- REQUIRED SUB-SKILL: New session uses loom:executing-plans
## Journal Integration
When operating on a task tracked under .agents/tasks/<task-id>/, append a journal entry at this skill's milestone.
- Trigger: after a plan file is persisted to docs/exec-plans/active/
- Reserved key(s): `saved`
- Entry shape: `## <ISO8601-timestamp> — writing-plans saved: docs/exec-plans/active/<file>.md`, followed by an optional body of ≤ 15 lines (longer content goes to artifacts/)
Resolve the task id from the explicit caller argument or .agents/tasks/.current. If neither resolves, skip the append; do not guess.
See task-journal for the full convention.
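For example, a hypothetical appended entry (timestamp, filename, and body invented for illustration):

```markdown
## 2025-06-01T14:32:00Z — writing-plans saved: docs/exec-plans/active/2025-06-01-rate-limiter.md

Plan has six tasks, TDD throughout; design notes live in artifacts/.
```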