Test-Driven Development
Invocation Position
This is an invoked helper skill, not the normal first stop in the feature pipeline.
Use /tdd when backend implementation, bug-fix work, or behavior-heavy frontend logic should proceed through strict red-green-refactor cycles, usually because /execute delegated to it or a bug workflow produced a TDD-oriented fix plan.
Frontend examples that fit well here include reducers, state machines, validation flows, accessibility-critical behavior, and reproducible regressions in a user flow. Frontend work that is primarily visual, layout-driven, styling-focused, or about interaction feel should usually stay on the direct implementation path with browser-based verification.
Do not use it to replace shaping or decomposition. If the task is still unclear at the product, contract, or slice level, return to /write-a-prd, /prd-to-issues, or /execute first.
Philosophy
Core principle: Tests should verify behavior through public interfaces, not implementation details. Code can change entirely; tests shouldn't. Test difficulty is a design signal, not an obstacle to work around — when a test requires complex mock setup to reach domain logic, the production code has fused decisions with infrastructure. Refactor the production code, not the test scaffolding.
False positives (tests that fail on safe refactors) trigger a destructive sequence: developers investigate, find no real bug, stop trusting the suite, start ignoring failures, and a real regression slips through unnoticed. Coupling tests to implementation details is not a minor style issue — it is the primary mechanism by which test suites lose their value.
Good tests are integration-style: they exercise real code paths through public APIs. They describe what the system does, not how it does it. A good test reads like a specification: "user can checkout with valid cart" tells you exactly what capability exists. These tests survive refactors because they don't care about internal structure.
Bad tests are coupled to implementation. They mock internal collaborators, test private methods, or verify through external means (like querying a database directly instead of using the interface). The warning sign: your test breaks when you refactor, but behavior hasn't changed. If you rename an internal function and tests fail, those tests were testing implementation, not behavior.
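A minimal sketch of the contrast, assuming a hypothetical `checkout`/`createCart` API and vitest-style assertions:

```typescript
import { expect, it, vi } from "vitest";
import { checkout, createCart } from "./shop"; // hypothetical module
import * as internals from "./shop/internals"; // hypothetical internals

// GOOD: exercises the public interface; survives any internal refactor.
it("user can checkout with valid cart", async () => {
  const cart = createCart([{ sku: "book", qty: 1 }]);
  const order = await checkout(cart);
  expect(order.status).toBe("confirmed");
});

// BAD: coupled to an internal collaborator; breaks on rename or
// restructure even when user-facing behavior is unchanged.
it("calls validateCart internally", async () => {
  const spy = vi.spyOn(internals, "validateCart");
  await checkout(createCart([{ sku: "book", qty: 1 }]));
  expect(spy).toHaveBeenCalled();
});
```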
See tests.md for examples and mocking.md for mocking guidelines.
Anti-Pattern: Horizontal Slices
DO NOT write all tests first, then all implementation. This is "horizontal slicing": treating RED as "write all tests" and GREEN as "write all code."
This produces crap tests:
- Tests written in bulk test imagined behavior, not actual behavior
- You end up testing the shape of things (data structures, function signatures) rather than user-facing behavior
- Tests become insensitive to real changes - they pass when behavior breaks, fail when behavior is fine
- You outrun your headlights, committing to test structure before understanding the implementation
Correct approach: Vertical slices via tracer bullets. One test → one implementation → repeat. Each test responds to what you learned from the previous cycle. Because you just wrote the code, you know exactly what behavior matters and how to verify it.
WRONG (horizontal):
RED: test1, test2, test3, test4, test5
GREEN: impl1, impl2, impl3, impl4, impl5
RIGHT (vertical):
RED→GREEN: test1→impl1
RED→GREEN: test2→impl2
RED→GREEN: test3→impl3
...
Workflow
!`mkdir -p .claude && touch .claude/.tdd-active && echo "TDD marker created — enforcement hook active"`
1. Planning
Before writing any code:
- Confirm with user what interface changes are needed
- Confirm with user which behaviors to test (prioritize)
- Identify opportunities for deep modules (small interface, deep implementation)
- Classify code under test using the code classification quadrant: domain model → unit test, controller → integration test, trivial → skip, overcomplicated → refactor first
- Design interfaces for testability
- List the behaviors to test (not implementation steps)
- Get user approval on the plan
Ask: "What should the public interface look like? Which behaviors are most important to test?"
You can't test everything. Confirm with the user exactly which behaviors matter most. Focus testing effort on critical paths and complex logic, not every possible edge case.
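For instance, a deep-module interface worth confirming at this stage might look like the sketch below (hypothetical `RateLimiter`; names are illustrative, not prescribed):

```typescript
// Small interface, deep implementation: tests need only one question answered.
export interface RateLimiter {
  // Is this client allowed to proceed right now?
  allow(clientId: string): boolean;
}

// Token buckets, clocks, and eviction all stay behind `allow`, so tests
// written against this interface survive any internal refactor.
```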
2. Tracer Bullet
Write ONE test that confirms ONE thing about the system:
RED: Write test for first behavior → test fails
GREEN: Write minimal code to pass → test passes
This is your tracer bullet: it proves the path works end-to-end.
Choose a first test that exercises the full vertical path. Prefer one that forces you to create the module, wire the interface, and return a result — even if the result is trivial. The purpose is to resolve "where does this belong?" before confronting correctness.
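A sketch of such a first bullet, assuming the hypothetical cart module from earlier and vitest:

```typescript
import { expect, it } from "vitest";
import { createCart } from "./cart"; // forces the module to exist and be wired

// RED: the module doesn't exist yet, so this fails for the right reason.
it("a new cart starts empty", () => {
  expect(createCart().items()).toEqual([]);
});

// GREEN: the minimal implementation in ./cart.ts — trivially correct on purpose.
// export function createCart() {
//   return { items: () => [] as string[] };
// }
```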
3. Incremental Loop
For each remaining behavior:
RED: Write next test → fails
GREEN: Minimal code to pass → passes
Rules:
- One test at a time
- Only enough code to pass current test
- Don't anticipate future tests
- Keep tests focused on observable behavior
- If getting to green requires more than one conceptual change, back out. Write a simpler test that isolates the prerequisite behavior. Get that green first, then return to the original test. A red bar lasting more than a few minutes is the signal to decompose.
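Continuing the hypothetical cart sketch, one cycle in this loop might look like:

```typescript
import { expect, it } from "vitest";
import { createCart } from "./cart";

// RED: exactly one new observable behavior.
it("adding an item makes it appear in the cart", () => {
  const cart = createCart();
  cart.add("book");
  expect(cart.items()).toEqual(["book"]);
});

// GREEN: only enough code to pass — no quantities, no removal, nothing speculative.
// export function createCart() {
//   const items: string[] = [];
//   return {
//     add: (sku: string) => { items.push(sku); },
//     items: () => [...items],
//   };
// }
```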
4. Refactor
After all tests pass, look for refactor candidates:
- Extract duplication
- Deepen modules (move complexity behind simple interfaces)
- Apply SOLID principles where natural
- Consider what new code reveals about existing code
- Run tests after each refactor step
- [TypeScript projects, when implementing a library-provided callback] If the refactor produced a local wrapper type for the callback's return (e.g. `AdjacentStepOverrides` for a Mastra `prepareStep` return), anchor the return to the library's declared shape using `satisfies LibraryReturnType` on the object expression, or return a fresh object literal, or derive the local type via `ReturnType<typeof libraryCallback>`/`Parameters<…>`. Do not return a typed local variable. TypeScript's excess-property check does not run on returns of typed values, so any field not declared by the library's signature is silently dropped at runtime — build passes, tests pass, the library never sees the field. This is the failure mode that `/research` Phase 1.25 and `/pre-merge` Dim 8 backstop, but `satisfies` at refactor time closes the gap at compile time (see the sketch below). Cite: ts-essentials Rule 31, "Use `satisfies` for type validation without losing inference precision."
Never refactor while RED. Get to GREEN first.
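A compile-time sketch of that failure mode and the fix, with a hypothetical `PrepareStepResult` standing in for the library's declared callback return type:

```typescript
// Hypothetical stand-in for the library's declared callback return type.
type PrepareStepResult = { model: string };

// BUG: a typed local variable bypasses the excess-property check on return.
type AdjacentStepOverrides = { model: string; temperature: number };
function prepareStepLeaky(): PrepareStepResult {
  const overrides: AdjacentStepOverrides = { model: "gpt-4o", temperature: 0.2 };
  return overrides; // compiles; the library never sees `temperature`
}

// FIX: anchor the object expression itself with `satisfies` — excess fields
// now fail at compile time while inference precision is kept.
function prepareStepChecked(): PrepareStepResult {
  const overrides = {
    model: "gpt-4o",
    // temperature: 0.2, // uncommenting this is now a compile error
  } satisfies PrepareStepResult;
  return overrides;
}
```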
5. Harden with Assertions
After refactoring, consider where production assertions would catch future infections closer to their source. Assertions shorten the distance between a defect and the failure it causes — without them, corrupted state propagates silently until it surfaces in unrelated code.
- Preconditions on functions receiving external or untrusted input — fail fast on invalid state rather than propagating it
- Postconditions on functions with complex transformations — verify output invariants hold
- Invariant checkers for non-trivial data structures — an `isValid()` or `sane()` method that checks structural properties through the public interface
- Keep debugging assertions — if you added assertions during debugging to narrow the problem, keep them as permanent production guards. Removing them after the fix discards the detector along with the defect.
Not every cycle needs this step. Apply it when the code handles complex state, crosses trust boundaries, or was the site of a bug fix.
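A sketch of what this can look like, assuming a hypothetical interval structure (names illustrative):

```typescript
// Minimal assertion helper; throwing keeps defects close to their source.
function assert(cond: boolean, msg: string): asserts cond {
  if (!cond) throw new Error(`Invariant violated: ${msg}`);
}

class IntervalSet {
  private spans: Array<[number, number]> = [];

  add(start: number, end: number): void {
    // Precondition: fail fast on invalid input instead of propagating it.
    assert(start <= end, "interval start must not exceed end");
    this.spans.push([start, end]);
    this.spans.sort((a, b) => a[0] - b[0]);
    // Postcondition: verify the structural invariant still holds.
    assert(this.isValid(), "spans must remain sorted and well-formed");
  }

  // Invariant checker exposed through the public interface.
  isValid(): boolean {
    return this.spans.every(
      ([s, e], i) => s <= e && (i === 0 || this.spans[i - 1][0] <= s),
    );
  }
}
```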
Timing-coupled primitives are test couplings
Any test that could reach code calling sleep, delay, retry, timeout, interval, or any other scheduled/debounced/throttled primitive has a hidden coupling to real wall-clock time. This is how tests that pass in milliseconds suddenly jump to multi-second runtime — or worse, start timing out — the moment a retry or backoff is added.
Two rules:
- Every time-based primitive is a first-class configurable policy. Pass the delay, schedule, or timeout as a parameter or config value; never hardcode it. Tests pass zero-duration or no-retry policies; production passes real ones. This applies to `Effect.sleep`/`Schedule`/`Effect.retry` in Effect projects, `setTimeout`/`setInterval` in vanilla Node, RxJS `delay`/`timer` operators, any retry-wrapped fetch client, and any worker queue with a polling interval.
- If a test jumps from sub-second to multi-second runtime after adding a retry or sleep, the fix is never "bump `testTimeout`." The fix is "inject the primitive so tests can disable it." `testTimeout` bumps mask the coupling; injection removes it. Only bump if you have an affirmative reason — e.g. the test is genuinely exercising real-time behavior and cannot use a virtual clock.
Audit signal: before you decide a slow test is legitimate, grep the touched code for `Effect\.sleep|Effect\.delay|Schedule\.|setTimeout|setInterval` (or the equivalent in your stack). If any match is in a code path the test can reach and the primitive isn't injected, the coupling is the bug — not the timeout.
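A sketch of rule one in plain Node/TypeScript, with a hypothetical `fetchWithRetry` (the same shape applies to Effect schedules or RxJS operators):

```typescript
type RetryPolicy = { attempts: number; delayMs: number };

// The sleep primitive is injected, so tests can replace it outright.
async function fetchWithRetry(
  url: string,
  policy: RetryPolicy,
  sleep: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms)),
): Promise<Response> {
  let lastError: unknown;
  for (let attempt = 0; attempt < policy.attempts; attempt++) {
    try {
      return await fetch(url);
    } catch (err) {
      lastError = err;
      if (attempt < policy.attempts - 1) await sleep(policy.delayMs);
    }
  }
  throw lastError;
}

// Production: real backoff. Tests: zero-duration policy, no wall-clock coupling.
// await fetchWithRetry(url, { attempts: 3, delayMs: 500 });
// await fetchWithRetry(url, { attempts: 3, delayMs: 0 }, async () => {});
```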
Checklist Per Cycle
[ ] Test describes behavior, not implementation
[ ] Test uses public interface only
[ ] Test would survive internal refactor
[ ] Code is minimal for this test
[ ] No speculative features added
Handoff
- Expected input: a concrete backend behavior, frontend interaction behavior, or fix path that is already scoped well enough to implement
- Produces: tested code increments built through red-green-refactor
- Usually invoked by: `/execute`, or by bug-fix work prepared through `/triage-issue`
- Returns control to: the calling implementation flow, usually `/execute`, for final verification and handoff to `/pre-merge`
- On exit: `/execute` Step 6 removes `.claude/.tdd-active` after commit