Test-Driven Development (TDD)

Strict Red-Green-Refactor workflow for robust, self-documenting, production-ready code.

Quick Navigation

| Situation | Go To |
|---|---|
| New to this codebase | Step 1: Explore Environment |
| Know the framework, starting work | Step 2: Select Mode |
| Need the core loop reference | Step 3: Core TDD Loop |
| Complex edge cases to cover | Property-Based Testing |
| Tests are flaky/unreliable | Flaky Test Management |
| Need isolated test environment | Hermetic Testing |
| Measuring test quality | Mutation Testing |

The Three Rules (Robert C. Martin)

  1. No Production Code without a failing test
  2. Write Only Enough of a Test to Fail (compilation errors count as failures)
  3. Write Only Enough Code to Pass (no optimizations yet)

The Loop: πŸ”΄ RED (write failing test) β†’ 🟒 GREEN (minimal code to pass) β†’ πŸ”΅ REFACTOR (clean up) β†’ Repeat
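A minimal sketch of one pass through the loop, in Python (`slugify` is a hypothetical helper, not part of any real codebase):

```python
# RED: write ONE failing test first (it fails because slugify doesn't exist yet).
def test_slugify_replaces_spaces():
    assert slugify("Hello World") == "hello-world"

# GREEN: the minimal code that makes the test pass -- nothing more.
def slugify(text):
    return text.lower().replace(" ", "-")

# REFACTOR: with the test green, names and structure can be cleaned up safely.
test_slugify_replaces_spaces()
```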


Step 1: Explore Test Environment

Do NOT assume anything. Explore the codebase first.

Checklist:

  • Search for test files: glob("**/*.test.*"), glob("**/*.spec.*"), glob("**/test_*.py")
  • Check package.json scripts, Makefile, or CI workflows
  • Look for config: vitest.config.*, jest.config.*, pytest.ini, Cargo.toml

Framework Detection:

| Language | Config Files | Test Command |
|---|---|---|
| Node.js | package.json, vitest.config.* | npm test, bun test |
| Python | pyproject.toml, pytest.ini | pytest |
| Go | go.mod, *_test.go | go test ./... |
| Rust | Cargo.toml | cargo test |

Step 2: Select Mode

| Mode | When | First Action |
|---|---|---|
| New Feature | Adding functionality | Read existing module tests, confirm green baseline |
| Bug Fix | Reproducing issue | Write failing reproduction test FIRST |
| Refactor | Cleaning code | Ensure β‰₯80% coverage on target code |
| Legacy | No tests exist | Add characterization tests before changing |

Tie-breaker: If coverage <20% or tests absent β†’ use Legacy Mode first.

Mode: New Feature

  1. Read existing tests for the module
  2. Run tests to confirm green baseline
  3. Enter Core Loop for new behavior
  4. Commits: test(module): add test for X β†’ feat(module): implement X

Mode: Bug Fix

  1. Write failing reproduction test (MUST fail before fix)
  2. Confirm failure is assertion error, not syntax error
  3. Write minimal fix
  4. Run full test suite
  5. Commits: test: add failing test for bug #123 β†’ fix: description (#123)
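Steps 1-3 sketched in Python, for a hypothetical bug where `parse_price("")` crashed (names are illustrative):

```python
# Step 1 -- reproduction test, written BEFORE the fix; it MUST fail first.
def test_parse_price_empty_string_returns_none():
    assert parse_price("") is None

# Step 3 -- minimal fix, nothing beyond what the test demands.
def parse_price(raw):
    if not raw.strip():
        return None
    return float(raw.replace("$", ""))

test_parse_price_empty_string_returns_none()
```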

Mode: Refactor

  1. Run coverage on the specific function you'll refactor
  2. If coverage <80% β†’ add characterization tests first
  3. Refactor in small steps (ONE change β†’ run tests β†’ repeat)
  4. Never change behavior during refactor

Mode: Legacy Code

  1. Find Seams - insertion points for tests (Sensing Seams, Separation Seams)
  2. Break Dependencies - use Sprout Method or Wrap Method
  3. Add characterization tests (capture current behavior)
  4. Build safety net: happy path + error cases + boundaries
  5. Then apply TDD for your changes
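The Sprout Method from step 2 can be sketched like this (a hypothetical `process_order` stands in for the legacy code):

```python
# Legacy function (hypothetical): too tangled to test directly.
def process_order(order):
    # ...imagine many untested lines here...
    order["total"] = apply_discount(order["total"], order.get("coupon"))
    return order

# Sprout Method: the NEW behavior lives in a fresh, fully tested function;
# the legacy code only gains a one-line call site.
def apply_discount(total, coupon=None):
    if coupon == "SAVE10":
        return round(total * 0.9, 2)
    return total

# Characterization-style tests capture the sprout's current behavior.
assert apply_discount(100.0, "SAVE10") == 90.0
assert apply_discount(100.0) == 100.0
assert process_order({"total": 100.0, "coupon": "SAVE10"})["total"] == 90.0
```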

β†’ See references/examples.md for full code examples of each mode.


Step 3: The Core TDD Loop

Before Starting: Scenario List

List all behaviors to cover:

  • Happy path cases
  • Edge cases and boundaries
  • Error/failure cases
  • Pessimism: 3 ways this could fail (network, null, invalid state)

πŸ”΄ RED Phase

  1. Write ONE test (single behavior or edge case)
  2. Use AAA: Arrange β†’ Act β†’ Assert
  3. Run test, verify it FAILS for expected reason

Checks:

  • Is failure an assertion error? (Not SyntaxError/ModuleNotFoundError)
  • Can I explain why this should fail?
  • If test passes immediately β†’ STOP. Test is broken or feature exists.
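The AAA structure above, sketched with a hypothetical `Cart` (defined here only so the example runs):

```python
class Cart:
    """Minimal hypothetical subject under test."""
    def __init__(self, tax_rate):
        self.tax_rate = tax_rate
        self.prices = []

    def add(self, price):
        self.prices.append(price)

    def total(self):
        return round(sum(self.prices) * (1 + self.tax_rate), 2)

def test_cart_total_includes_tax():
    # Arrange: build the smallest world the behavior needs.
    cart = Cart(tax_rate=0.1)
    cart.add(price=10.0)
    # Act: exercise exactly one behavior.
    total = cart.total()
    # Assert: one clear expectation about that behavior.
    assert total == 11.0

test_cart_total_includes_tax()
```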

🟒 GREEN Phase

  1. Write minimal code to pass
  2. Do NOT implement "perfect" solution
  3. Verify test passes

Checks:

  • Is this the simplest solution?
  • Can I delete any of this code and still pass?

πŸ”΅ REFACTOR Phase

  1. Look for duplication, unclear names, magic values
  2. Clean up without changing behavior
  3. Verify tests still pass

Repeat

Select next scenario, return to RED.

Triangulation: If implementation is too specific (hardcoded), write another test with different inputs to force generalization.
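Triangulation in miniature:

```python
# GREEN for a single example could legally be hardcoded:
#     def add(a, b): return 3
# A second test with different inputs kills the hardcoded version and
# forces the general implementation:
def add(a, b):
    return a + b

assert add(1, 2) == 3    # original example
assert add(10, 5) == 15  # triangulating example
```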


Stop Conditions

| Signal | Response |
|---|---|
| Test passes immediately | Check assertions, verify feature isn't already built |
| Test fails for wrong reason | Fix setup/imports first |
| Flaky test | STOP. Fix non-determinism immediately |
| Slow feedback (>5s) | Optimize or mock external calls |
| Coverage decreased | Add tests for uncovered paths |

Test Distribution: The Testing Trophy

The Testing Trophy (Kent C. Dodds) reflects modern testing reality: integration tests give the best confidence-to-effort ratio.

          _____________
         /   System    \      ← Few, slow, high confidence; brittle (E2E)
        /_______________\
       /                 \
      /    Integration    \   ← Real interactions between units β€” **BEST ROI** (Integration)
      \                   /
       \_________________/
         \    Unit     /      ← Fast & cheap but test in isolation (Unit) 
          \___________/
          /   Static  \       ← Typecheck, linting β€” typos/types (Static)
         /_____________\

Layer Breakdown

| Layer | What | Tools | When |
|---|---|---|---|
| Static | Type errors, syntax, linting | TypeScript, ESLint | Always on, catches 50%+ of bugs for free |
| Unit | Pure functions, algorithms, utilities | vitest, jest, pytest | Isolated logic with no dependencies |
| Integration | Components + hooks + services together | Testing Library, MSW, Testcontainers | Real user flows, real(ish) data |
| E2E | Full app in browser | Playwright, Cypress | Critical paths only (login, checkout) |

Why Integration Tests Win

Unit tests prove code works in isolation. Integration tests prove code works together.

| Concern | Unit Test | Integration Test |
|---|---|---|
| Component renders | βœ… | βœ… |
| Component + hook works | ❌ | βœ… |
| Component + API works | ❌ | βœ… |
| User flow works | ❌ | βœ… |
| Catches real bugs | Sometimes | Usually |

The insight: Most bugs live in the seams between modules, not inside pure functions. Integration tests catch seam bugs; unit tests don't.

Practical Guidance

  1. Start with integration tests - Test the way users use your code
  2. Drop to unit tests for complex algorithms or edge cases
  3. Use E2E sparingly - Slow, flaky, expensive to maintain
  4. Let static analysis do the heavy lifting - TypeScript catches more bugs than most unit tests
  5. Prefer fakes over mocks - Fakes have real behavior; mocks just return canned data
  6. SMURF quality: Sustainable, Maintainable, Useful, Resilient, Fast

Anti-Patterns

| Pattern | Problem | Fix |
|---|---|---|
| Mirror Blindness | Same agent writes test AND code | State test intent before GREEN |
| Happy Path Bias | Only success scenarios | Include errors in Scenario List |
| Refactoring While Red | Changing structure with failing tests | Get to GREEN first |
| The Mockery | Over-mocking hides bugs | Prefer fakes or real implementations |
| Coverage Theater | Tests without meaningful assertions | Assert behavior, not lines |
| Multi-Test Step | Multiple tests before implementing | One test at a time |
| Verification Trap πŸ€– | AI tests what code does, not what it should do | State intent in plain language; separate agent review |
| Test Exploitation πŸ€– | LLMs exploit weak assertions or overload operators | Use PBT alongside examples; strict equality |
| Assertion Omission πŸ€– | Missing edge cases (null, undefined, boundaries) | Scenario list with errors; test.each |
| Hallucinated Mock πŸ€– | AI generates fake mocks without proper setup | Testcontainers for integration; real Fakes for unit |

Critical: Verify tests by (1) running them, (2) having a separate agent review them, and (3) never trusting generated tests blindly.


Advanced Techniques

Use these techniques at specific points in your workflow:

| Technique | Use During | Purpose |
|---|---|---|
| Test Doubles | πŸ”΄ RED phase | Isolate dependencies when writing tests |
| Property-Based Testing | πŸ”΄ RED phase | Cover edge cases for complex logic |
| Contract Testing | πŸ”΄ RED phase | Define API expectations between services |
| Snapshot Testing | πŸ”΄ RED phase | Capture UI/response structure |
| Hermetic Testing | πŸ”΅ Setup | Ensure test isolation and determinism |
| Mutation Testing | βœ… After GREEN | Validate test suite effectiveness |
| Coverage Analysis | βœ… After GREEN | Find untested code paths |
| Flaky Test Management | πŸ”§ Maintenance | Fix unreliable tests blocking CI |

Test Doubles (Use: Writing Tests with Dependencies)

When: Your code depends on something slow, unreliable, or complex (DB, API, filesystem).

| Type | Purpose | When |
|---|---|---|
| Stub | Returns canned answers | Need specific return values |
| Mock | Verifies interactions | Need to verify calls made |
| Fake | Simplified implementation | Need real behavior without cost |
| Spy | Records calls | Need to observe without changing |

Decision: Dependency slow/unreliable? β†’ Fake (complex) or Stub (simple). Need to verify calls? β†’ Mock/Spy. Otherwise β†’ real implementation.
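A Fake and a Spy side by side, sketched with hypothetical ports (`FakeUserStore`, `SpyMailer`, and `register` are illustrative names, not a real API):

```python
class FakeUserStore:
    """Fake: a real, simplified implementation (in-memory dict, no DB)."""
    def __init__(self):
        self._users = {}
    def save(self, user_id, name):
        self._users[user_id] = name
    def get(self, user_id):
        return self._users.get(user_id)

class SpyMailer:
    """Spy: records calls so the test can observe interactions."""
    def __init__(self):
        self.sent = []
    def send(self, to, body):
        self.sent.append((to, body))

def register(store, mailer, user_id, name):
    store.save(user_id, name)
    mailer.send(to=name, body="welcome")

store, mailer = FakeUserStore(), SpyMailer()
register(store, mailer, 1, "ada")
assert store.get(1) == "ada"                # fake behaves like the real thing
assert mailer.sent == [("ada", "welcome")]  # spy verifies the interaction
```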

β†’ See references/examples.md β†’ Test Double Examples


Hermetic Testing (Use: Test Environment Setup)

When: Setting up test infrastructure. Tests must be isolated and deterministic.

Principles:

  • Isolation: Unique temp directories/state per test
  • Reset: Clean up in setUp/tearDown
  • Determinism: No time-based logic or shared mutable state
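The isolation and reset principles, sketched with the standard library (no fixtures assumed):

```python
import json
import pathlib
import tempfile

def test_writes_config_hermetically():
    # Isolation: a unique temp directory per test, torn down automatically.
    with tempfile.TemporaryDirectory() as tmp:
        path = pathlib.Path(tmp) / "config.json"
        path.write_text(json.dumps({"debug": True}))
        assert json.loads(path.read_text()) == {"debug": True}
    # Reset: nothing survives the test, so nothing leaks into the next one.
    assert not path.exists()

test_writes_config_hermetically()
```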

Database Strategies:

| Strategy | Speed | Fidelity | Use When |
|---|---|---|---|
| In-memory (SQLite) | Fast | Low | Unit tests, simple queries |
| Testcontainers | Medium | High | Integration tests |
| Transactional Rollback | Fast | High | Tests sharing schema (80x faster than TRUNCATE) |

β†’ See references/examples.md β†’ Hermetic Testing Examples


Property-Based Testing (Use: Writing Tests for Complex Logic)

When: Writing tests for algorithms, state machines, serialization, or code with many edge cases.

Tools: fast-check (JS/TS), Hypothesis (Python), proptest (Rust)

Properties to Test:

  • Commutativity: f(a, b) == f(b, a)
  • Associativity: f(f(a, b), c) == f(a, f(b, c))
  • Identity: f(a, identity) == a
  • Round-trip: decode(encode(x)) == x
  • Metamorphic: If input changes by X, output changes by Y (useful when you don't know expected output)

How: Replace multiple example-based tests with one property test that generates random inputs.

Critical: Always log the seed on failure. Without it, you cannot reproduce the failing case.
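A hand-rolled sketch of the round-trip property with a logged seed. Real projects should use Hypothesis or fast-check, which add input shrinking; this only shows the mechanics:

```python
import json
import random

def check_round_trip(seed, runs=100):
    """Round-trip property: decode(encode(x)) == x for generated inputs."""
    rng = random.Random(seed)
    for _ in range(runs):
        value = {"n": rng.randint(-1000, 1000), "s": str(rng.random())}
        # Log the seed in the failure message so the case is reproducible.
        assert json.loads(json.dumps(value)) == value, f"failing seed={seed}"

check_round_trip(seed=42)
```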

β†’ See references/examples.md β†’ Property-Based Testing Examples


Mutation Testing (Use: Validating Test Quality)

When: After tests pass, to verify they actually catch bugs. Use for critical code (auth, payments) or before major refactors.

Tools: Stryker (JS/TS), PIT (Java), mutmut (Python)

How: Tool mutates your code (e.g., changes > to >=). If tests still pass β†’ your tests are weak.
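What the tool does, shown by hand with the `>` vs `>=` example (the mutant function is written out manually here; the tools generate and run these automatically):

```python
# Code under test:
def is_adult(age):
    return age >= 18

# A mutant the tool might generate: `>=` becomes `>`.
def is_adult_mutant(age):
    return age > 18

# A weak test (far from the boundary) passes for BOTH -- the mutant survives.
assert is_adult(30) and is_adult_mutant(30)

# A boundary test kills the mutant: the two implementations disagree at 18.
assert is_adult(18) is True
assert is_adult_mutant(18) is False
```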

Interpretation:

  • >80% mutation score = good test suite
  • Survived mutants = tests don't catch those changes β†’ add tests for these

Equivalent Mutant Problem: Some mutants change syntax but not behavior (e.g., i < 10 β†’ i != 10 in a loop where i only increments). These can't be killedβ€”100% score is often impossible. Focus on surviving mutants in critical paths, not chasing perfect scores.

When NOT to use: Tool-generated code (OpenAPI clients, Protobuf stubs, ORM models), simple DTOs/getters, legacy code with slow tests, or CI pipelines that must finish in <5 minutes. Use --incremental --since main for PR-focused runs. Note: This does NOT mean skip mutation testing on code you (the agent) wroteβ€”always validate your own work.

β†’ See references/examples.md β†’ Mutation Testing Examples


Flaky Test Management (Use: CI/CD Maintenance)

When: Tests fail intermittently, blocking CI or eroding trust in the test suite.

Root Causes:

| Cause | Fix |
|---|---|
| Timing (setTimeout, races) | Fake timers, await properly |
| Shared state | Isolate per test |
| Randomness | Seed or mock |
| Network | Use MSW or fakes |
| Order dependency | Make tests independent |
| Parallel transaction conflicts | Isolate DB connections per worker |
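The randomness and shared-state fixes from the table, sketched side by side (function names are illustrative):

```python
import random

# Flaky: unseeded global randomness plus in-place mutation of shared input.
def shuffled_ids_flaky(ids):
    random.shuffle(ids)
    return ids

# Deterministic: randomness is injected, so each test can pin a seed,
# and the input is copied, so no shared state is mutated.
def shuffled_ids(ids, rng):
    ids = list(ids)
    rng.shuffle(ids)
    return ids

first = shuffled_ids([1, 2, 3, 4], random.Random(0))
second = shuffled_ids([1, 2, 3, 4], random.Random(0))
assert first == second  # same seed, same order -- run after run
```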

How: Detect (--repeat 10) β†’ Quarantine (separate suite) β†’ Fix root cause β†’ Restore

Quarantine Rules:

  • Issue-linked: Every quarantined test MUST link to a tracking issue. Prevents "quarantine-and-forget."
  • Mute, don't skip: Prefer muting (runs but doesn't fail build) over skipping. You still collect failure data.
  • Reintroduction criteria: Test must pass N consecutive runs (e.g., 100) on main before leaving quarantine.

β†’ See references/examples.md β†’ Flaky Test Examples


Contract Testing (Use: Writing Tests for Service Boundaries)

When: Writing tests for code that calls or exposes APIs. Prevents integration breakage.

How (Pact): Consumer defines expected interactions β†’ Contract published β†’ Provider verifies β†’ CI fails if contract broken.

β†’ See references/examples.md β†’ Contract Testing Examples


Coverage Analysis (Use: Finding Gaps After Tests Pass)

When: After writing tests, to find untested code paths. NOT a goal in itself.

| Metric | Measures | Threshold |
|---|---|---|
| Line | Lines executed | 70-80% |
| Branch | Decision paths | 60-70% |
| Mutation | Test effectiveness | >80% |

Risk-Based Prioritization: P0 (auth, payments) β†’ P1 (core logic) β†’ P2 (helpers) β†’ P3 (config)

Warning: High coverage β‰  good tests. Tests must assert meaningful behavior.


Snapshot Testing (Use: Writing Tests for UI/Output Structure)

When: Writing tests for UI components, API responses, or error message formats.

Appropriate: UI structure, API response shapes, error formats. Avoid: Behavior testing, dynamic content, entire pages.

How: Capture output once, verify it doesn't change unexpectedly. Always review diffs carefully.

β†’ See references/examples.md β†’ Snapshot Testing Examples


Integration with Other Skills

| Task | Skill | Usage |
|---|---|---|
| Committing | git-commit | test: for RED, feat: for GREEN |
| Code Quality | code-quality | Run during REFACTOR phase |
| Documentation | docs-check | Check if behavior changes need docs |

References

Tools: Testcontainers | fast-check | Stryker | MSW | Pact
