Test-Driven Development
Philosophy
Core principle: Tests should verify behavior through public interfaces, not implementation details. The implementation can change entirely; the tests shouldn't have to.
Good tests are integration-style: they exercise real code paths through public APIs. They describe what the system does, not how it does it. A good test reads like a specification - "user can checkout with valid cart" tells you exactly what capability exists. These tests survive refactors because they don't care about internal structure.
Bad tests are coupled to implementation. They mock internal collaborators, test private methods, or verify through external means (like querying a database directly instead of using the interface). The warning sign: your test breaks when you refactor, but behavior hasn't changed. If you rename an internal function and tests fail, those tests were testing implementation, not behavior.
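A minimal sketch of the contrast, assuming pytest plus the pytest-mock plugin; Cart and all of its methods are hypothetical names invented for illustration:

```python
# Cart, add_item, checkout, and _total are hypothetical names.
from cart import Cart

def test_checkout_with_valid_cart_produces_order():
    # GOOD: drives the public interface, asserts on observable behavior.
    cart = Cart()
    cart.add_item("widget", price=10.00, quantity=2)
    order = cart.checkout()
    assert order.total == 20.00

def test_checkout_calls_total_calculator(mocker):
    # BAD: coupled to an internal method. Rename _total and this test
    # breaks, even though checkout behavior is unchanged.
    cart = Cart()
    cart.add_item("widget", price=10.00, quantity=2)
    spy = mocker.spy(cart, "_total")
    cart.checkout()
    spy.assert_called_once()
```

The first test survives any rewrite of checkout's internals; the second fails the "would survive internal refactor" check in the checklist below.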
See tests.md for more examples and mocking.md for mocking guidelines.
Anti-Pattern: Horizontal Slices
DO NOT write all tests first, then all implementation. This is "horizontal slicing" - treating RED as "write all tests" and GREEN as "write all code."
This produces crap tests:
- Tests written in bulk test imagined behavior, not actual behavior
- You end up testing the shape of things (data structures, function signatures) rather than user-facing behavior; see the sketch after the diagram below
- Tests become insensitive to real changes - they pass when behavior breaks, fail when behavior is fine
- You outrun your headlights, committing to test structure before understanding the implementation
Correct approach: Vertical slices via tracer bullets. One test → one implementation → repeat. Each test responds to what you learned from the previous cycle. Because you just wrote the code, you know exactly what behavior matters and how to verify it.
WRONG (horizontal):
RED: test1, test2, test3, test4, test5
GREEN: impl1, impl2, impl3, impl4, impl5
RIGHT (vertical):
RED→GREEN: test1→impl1
RED→GREEN: test2→impl2
RED→GREEN: test3→impl3
...
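To make the shape-versus-behavior failure mode concrete: a bulk-written "shape" test next to a behavior test written right after its GREEN step, reusing the hypothetical Cart from above:

```python
from cart import Cart  # hypothetical module from the earlier sketch

def test_checkout_result_shape():
    # Shape test: passes as long as the attribute exists,
    # even when the total is completely wrong.
    cart = Cart()
    cart.add_item("widget", price=10.00, quantity=2)
    assert hasattr(cart.checkout(), "total")

def test_checkout_totals_line_items():
    # Behavior test: fails when the math breaks,
    # survives any internal restructuring.
    cart = Cart()
    cart.add_item("widget", price=10.00, quantity=2)
    cart.add_item("gadget", price=5.00, quantity=1)
    assert cart.checkout().total == 25.00
```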
Workflow
1. Planning
Before writing any code:
- Confirm with user what interface changes are needed
- Confirm with user which behaviors to test (prioritize)
- Identify opportunities for deep modules (small interface, deep implementation)
- Design interfaces for testability
- List the behaviors to test (not implementation steps)
- Get user approval on the plan
Ask: "What should the public interface look like? Which behaviors are most important to test?"
You can't test everything. Confirm with the user exactly which behaviors matter most. Focus testing effort on critical paths and complex logic, not every possible edge case.
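Planning might end in something like this sketch: a proposed interface plus a prioritized behavior list, before any test exists. All names are proposals, not decisions:

```python
# Proposed public interface; nothing is implemented yet.
class Cart:
    def add_item(self, sku: str, price: float, quantity: int = 1) -> None: ...

    def checkout(self) -> "Order":
        """Raises EmptyCartError when the cart has no items."""

# Behaviors to test, prioritized with the user:
# 1. checkout totals the line items in the cart
# 2. checkout on an empty cart raises EmptyCartError
# 3. quantity defaults to 1 when omitted
```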
2. Tracer Bullet
Write ONE test that confirms ONE thing about the system:
RED: Write test for first behavior → test fails
GREEN: Write minimal code to pass → test passes
This is your tracer bullet - it proves the path works end-to-end.
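One way the first cycle might look, continuing the hypothetical Cart sketch with pytest:

```python
# RED (test_cart.py): the first test. It fails; cart.py does not exist yet.
from cart import Cart

def test_checkout_totals_line_items():
    cart = Cart()
    cart.add_item("widget", price=10.00, quantity=2)
    assert cart.checkout().total == 20.00
```

```python
# GREEN (cart.py): minimal code to pass, nothing more.
from dataclasses import dataclass, field

@dataclass
class Order:
    total: float

@dataclass
class Cart:
    _items: list = field(default_factory=list)

    def add_item(self, sku, price, quantity=1):
        # sku is accepted but not stored; no test needs it yet.
        self._items.append((price, quantity))

    def checkout(self):
        return Order(total=sum(price * qty for price, qty in self._items))
```

Note that checkout() happily returns Order(total=0) for an empty cart: that behavior has no test yet, so it gets no code.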
3. Incremental Loop
For each remaining behavior:
RED: Write next test → fails
GREEN: Minimal code to pass → passes
Rules:
- One test at a time
- Only enough code to pass current test
- Don't anticipate future tests
- Keep tests focused on observable behavior
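For example, the next cycle might pick the empty-cart behavior from the plan:

```python
# RED: fails at import time (EmptyCartError does not exist yet),
# which counts as a failing test and says exactly what to build.
import pytest
from cart import Cart, EmptyCartError

def test_checkout_on_empty_cart_raises():
    with pytest.raises(EmptyCartError):
        Cart().checkout()
```

GREEN is then the smallest change that passes: define EmptyCartError and add a two-line guard at the top of checkout(). Nothing the test doesn't demand.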
4. Refactor
After all tests pass, look for refactor candidates:
- Extract duplication
- Deepen modules (move complexity behind simple interfaces)
- Apply SOLID principles where natural
- Consider what new code reveals about existing code
- Run tests after each refactor step
Never refactor while RED. Get to GREEN first.
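A sketch of one such step on the hypothetical cart.py: the total calculation moves behind a private helper. No test mentions _total, so the suite stays green:

```python
from dataclasses import dataclass, field

class EmptyCartError(Exception):
    pass

@dataclass
class Order:
    total: float

@dataclass
class Cart:
    _items: list = field(default_factory=list)

    def add_item(self, sku, price, quantity=1):
        self._items.append((price, quantity))

    def checkout(self):
        if not self._items:
            raise EmptyCartError("cannot checkout an empty cart")
        return Order(total=self._total())

    def _total(self):
        # Internal detail: tests never touch this name, so it can be
        # renamed or restructured without failing the suite.
        return sum(price * qty for price, qty in self._items)
```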
Checklist Per Cycle
[ ] Test describes behavior, not implementation
[ ] Test uses public interface only
[ ] Test would survive internal refactor
[ ] Code is minimal for this test
[ ] No speculative features added