# Testing — Strategy & Philosophy
Testing is not verification after the fact. It is feedback on design. A test that is hard to write reveals coupling in production code. A test that is hard to name reveals unclear thinking about behavior. The test suite is the living specification of the system—what the code is actually supposed to do.
## Core Beliefs
| Belief | What it means in practice | Anti-pattern it prevents |
|---|---|---|
| Hard to test = design problem | Refactor the design, not the test | Mocking everything to force testability |
| Tests are specification, not verification | Name tests as sentences describing behavior | Tests named after implementation details |
| Mocks are a smell, not a strategy | Prefer real collaborators or fakes | Mock-heavy suites that survive bugs |
| Pure functions are the testability ideal | Push I/O to the edges; keep business logic pure | Business logic tangled with side effects |
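The "push I/O to the edges" belief can be sketched as a pure core wrapped in a thin impure shell. This is an illustrative example, not code from the source; the names (`apply_discount`, `process_order`, `load_order`, `save_order`) are hypothetical.

```python
def apply_discount(total: float, loyalty_years: int) -> float:
    """Pure: same inputs always give the same output -- trivially testable."""
    rate = 0.05 * min(loyalty_years, 4)  # discount capped at 20%
    return round(total * (1 - rate), 2)

def process_order(order_id: str, load_order, save_order) -> None:
    """Impure shell: does the I/O and delegates all business logic to the pure core."""
    order = load_order(order_id)                                   # I/O at the edge
    order["total"] = apply_discount(order["total"], order["loyalty_years"])
    save_order(order)                                              # I/O at the edge

# The pure core needs no mocks at all:
assert apply_discount(100.0, 2) == 90.0
assert apply_discount(100.0, 10) == 80.0  # rate capped at 20%
```

Only `process_order` ever needs a test double, and even there a fake loader/saver suffices.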
## The Testing Hierarchy
The test pyramid describes how many tests of each type to write and why. Unit tests form the base (many, fast, isolated). Integration tests sit in the middle (some, slower, exercising collaboration). End-to-end tests sit at the top (few, slowest, high confidence per test but the hardest to maintain).
| Layer | Count | Speed | Purpose | Fragility |
|---|---|---|---|---|
| Unit | Many | ms | Behavior specification | Low if behavior-focused |
| Integration | Some | seconds | Collaboration contracts | Medium |
| E2E / Smoke | Few | seconds–minutes | User journey confidence | High — minimize |
## Decision Guide
When deciding what tests to write:
| Question | Answer → Action |
|---|---|
| Is this a behavior specification or a verification check? | Behavior → write a unit test; Verification → consider integration test |
| Can I test this in isolation, or does it require collaboration? | Isolation → unit test; Collaboration → integration test with real collaborators |
| Is this a critical user journey? | Yes → also write e2e test (sparingly) |
| Do I need mocks, or can I use real collaborators? | Real collaborators preferred; mocks only if I/O is unavoidable |
| Is the test hard to write? | Yes → your design has coupling issues; refactor first, then test |
| Does the test depend on timing or non-deterministic I/O? | Yes → fix the design; tests must be deterministic |
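The last row — fixing the design rather than tolerating timing-dependent tests — usually means injecting the clock instead of reading it inside the logic. A minimal sketch, with hypothetical names (`SessionToken`, `is_expired`):

```python
import time

class SessionToken:
    def __init__(self, issued_at: float, ttl_seconds: float, clock=time.time):
        self.issued_at = issued_at
        self.ttl = ttl_seconds
        self.clock = clock  # injected; production code uses the real clock by default

    def is_expired(self) -> bool:
        return self.clock() - self.issued_at > self.ttl

# In tests the clock is a plain function -- no sleeps, no flakiness:
token = SessionToken(issued_at=1000.0, ttl_seconds=60.0, clock=lambda: 1030.0)
assert not token.is_expired()   # 30s elapsed, within the 60s TTL

token.clock = lambda: 1100.0
assert token.is_expired()       # 100s elapsed, past the TTL
```

The test now specifies expiry behavior exactly, and it passes or fails for the same reason on every run.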
## Testing Anti-Patterns
| Anti-pattern | Problem | Fix |
|---|---|---|
| Over-mocking | Mocks hide integration bugs | Prefer real collaborators or fakes |
| Testing implementation | Tests break on refactor | Test behavior, not methods |
| Brittle/flaky tests | Tests are unreliable | Design for determinism; check coupling |
| Slow unit tests | Design is too coupled | Refactor the design, not the test |
| Leaving legacy code untested | Risk compounds over time | Start with integration tests for safety |
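The over-mocking fix — "prefer real collaborators or fakes" — is easiest to see with an in-memory fake that honors the same contract as the real dependency. A hypothetical sketch (`InMemoryUserRepo`, `rename_user` are illustrative names):

```python
class InMemoryUserRepo:
    """Fake: real store-and-retrieve behavior, no database required."""
    def __init__(self):
        self._users = {}

    def save(self, user_id: str, name: str) -> None:
        self._users[user_id] = name

    def find(self, user_id: str):
        return self._users.get(user_id)

def rename_user(repo, user_id: str, new_name: str) -> bool:
    """Code under test: collaborates with any repo honoring save/find."""
    if repo.find(user_id) is None:
        return False
    repo.save(user_id, new_name)
    return True

# The test exercises real collaboration, so it catches contract bugs
# that a mock programmed with canned answers would silently pass:
repo = InMemoryUserRepo()
repo.save("u1", "Ada")
assert rename_user(repo, "u1", "Ada Lovelace")
assert repo.find("u1") == "Ada Lovelace"
assert not rename_user(repo, "missing", "X")
```

Unlike a mock, the fake has behavior of its own, so tests against it survive refactoring and still fail when the collaboration contract is broken.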
## Reference Files
| Read When | File |
|---|---|
| Understanding the four cross-cutting testing principles | Testing Principles |
| Auditing test quality against the 12 desiderata properties | TestDesiderata — Quality Audit |
| Reviewing tests for BDD compliance and structure | BDD Test Review |
| Identifying and writing smoke tests for CI gates | Smoke Tests |
## Specialist Skills
For deeper dives into specific testing contexts, route to these specialized skills:
| Situation | Specialist Skill | Why |
|---|---|---|
| Need to practice TDD workflow interactively | tcrdd | Hands-on kata-style practice with immediate feedback |
## Key Takeaway
The goal of testing is clarity, not coverage. A test is good if it:
- Clearly expresses the expected behavior (not implementation details)
- Fails when the behavior changes (not when code is refactored)
- Is easy to understand and maintain
- Runs fast and deterministically
- Lives with the code it tests
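The properties above can be illustrated with a test named as a sentence about behavior. This is a hypothetical sketch; `apply_coupon` and the test name are illustrative, not from the source:

```python
def apply_coupon(coupon: dict) -> bool:
    """Illustrative production code under test."""
    return not coupon.get("expired", False)

# The test name reads as a behavior specification, not an implementation detail:
def test_expired_coupon_is_rejected_at_checkout():
    coupon = {"code": "SAVE10", "expired": True}
    assert apply_coupon(coupon) is False

test_expired_coupon_is_rejected_at_checkout()
```

Renaming internals or restructuring `apply_coupon` leaves this test green; only a change to the *behavior* — accepting an expired coupon — makes it fail.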
Tests are the interface between the developer's intent and the reader's understanding. Write them as if you are documenting the system to future maintainers—because you are.