GOOS-Style TDD for AdonisJS v7

Philosophy

Core principle: Tests verify behavior through public interfaces, not implementation details. Code can change entirely; tests shouldn't.

Good tests are integration-style: they exercise real code paths through public APIs. They describe what the system does, not how it does it. A good test reads like a specification — "authenticated user can create a post" tells you exactly what capability exists. These tests survive refactors because they don't care about internal structure.

Bad tests are coupled to implementation. They mock internal collaborators, test private methods, or verify through external means (like querying the database directly instead of using the API). The warning sign: your test breaks when you refactor, but behavior hasn't changed.

See tests.md for examples and mocking.md for mocking guidelines.
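
To make the contrast concrete without the framework, here is a minimal plain-TypeScript sketch; PriceCalculator, its method, and the discount logic are invented purely for illustration:

```typescript
// Hypothetical module under test — name and logic invented for illustration.
class PriceCalculator {
  // Public interface: this is what tests should exercise.
  totalFor(items: { price: number; qty: number }[], premium: boolean): number {
    const subtotal = items.reduce((sum, i) => sum + i.price * i.qty, 0)
    return premium ? subtotal * 0.9 : subtotal
  }
}

// GOOD: assert observable behavior through the public interface.
// This survives any internal refactor (reduce vs loop, helper extraction...).
const calc = new PriceCalculator()
const total = calc.totalFor([{ price: 100, qty: 2 }], true)
console.log(total) // 180

// BAD (don't do this): asserting on internals — a private subtotal field,
// the order of arithmetic, a helper being called — breaks on refactor
// even when behavior is unchanged.
```

The good test would read identically in a Japa spec: it only knows the inputs and the observable result.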

Anti-Pattern: Horizontal Slices

DO NOT write all tests first, then all implementation. This is "horizontal slicing" — treating RED as "write all tests" and GREEN as "write all code."

WRONG (horizontal):
  RED:   test1, test2, test3, test4, test5
  GREEN: impl1, impl2, impl3, impl4, impl5

RIGHT (vertical / tracer bullet):
  RED→GREEN: test1→impl1
  RED→GREEN: test2→impl2
  RED→GREEN: test3→impl3

Why horizontal slicing fails:

  • Tests written in bulk test imagined behavior, not actual behavior
  • You end up testing the shape of things (data structures, function signatures) rather than user-facing behavior
  • You outrun your headlights, committing to test structure before understanding the implementation
  • Tests become insensitive to real changes — they pass when behavior breaks, fail when behavior is fine

This connects directly to GOOS outer/inner loops: each cycle through the inner loop teaches you something about the design. You need that learning before writing the next test.

The Golden Rule

Never write new functionality without a failing test. No exceptions.

Quick Reference

GOOS Principle       AdonisJS Pattern
Walking Skeleton     Functional test (API) or browser test (rendered) on real route
Acceptance Test      @japa/api-client (JSON APIs) or @japa/browser-client (rendered pages)
Page Objects         Class-based pages (BasePage from @japa/browser-client)
Unit Test            Japa test with container.swap() for isolation
Mock Objects         app.container.swap(Service, () => fake)
Adapter Layer        Service wrapping third-party API (only for services the framework doesn't wrap)
Ports & Adapters     Services + IoC Container + @inject()
Test Data Builder    AdonisJS model factories
Tell, Don't Ask      Thin controllers delegating to injected services
Listen to Tests      Difficulty testing = design feedback
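
The swap pattern from the table can be sketched framework-free with a hand-rolled container; in real AdonisJS code you would call app.container.swap() and restore() instead of this toy Container class (all names below are illustrative):

```typescript
// A real service; in production it would talk to the outside world.
class MailService {
  sent: string[] = []
  send(to: string) {
    this.sent.push(to) // a real implementation would hit SMTP here
  }
}

// Toy container illustrating swap semantics (NOT the AdonisJS API).
class Container {
  private swaps = new Map<Function, () => unknown>()
  swap<T>(token: new () => T, factory: () => T) {
    this.swaps.set(token, factory)
  }
  restore(token: Function) {
    this.swaps.delete(token)
  }
  make<T>(token: new () => T): T {
    const factory = this.swaps.get(token)
    return factory ? (factory() as T) : new token()
  }
}

// In a test: swap the real service for a fake, exercise code that resolves
// it from the container, then assert on what the fake observed.
const container = new Container()
const fake = new MailService()
container.swap(MailService, () => fake)

const resolved = container.make(MailService)
resolved.send('user@example.com')
console.log(fake.sent.length) // 1 — the fake observed the call
```

The point of the sketch: the code under test never knows it received a fake, because it resolves its dependency through the container rather than constructing it.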

Workflow

1. Planning

Before writing any code:

  • Confirm with user what interface changes are needed
  • Determine acceptance test type: API client (JSON APIs) or browser client (rendered pages)
  • Confirm which behaviors to test (prioritize — you can't test everything)
  • Identify opportunities for deep modules
  • Design interfaces for testability
  • List behaviors to test (not implementation steps)
  • Get user approval on the plan

Ask: "What should the public interface look like? Which behaviors matter most?"

2. Tracer Bullet

Write ONE failing acceptance test → minimal implementation → GREEN. This is your walking skeleton — proves the path works end-to-end. See acceptance-tests.md for tool selection and examples.

API app:    RED: client.post('/endpoint') → GREEN: Route → Controller → Service → Model
Rendered:   RED: visit('/page') → assertTextContains → GREEN: Route → Controller → View/Inertia
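
The GREEN half of the API-app slice can be sketched without the framework: the thinnest possible path through controller and service, driven from the public entry point. Post, PostService, and PostsController are invented names, and the in-memory array stands in for the Model layer:

```typescript
type Post = { id: number; title: string }

// Service layer; an array stands in for the Model/database.
class PostService {
  private posts: Post[] = []
  create(title: string): Post {
    const post = { id: this.posts.length + 1, title }
    this.posts.push(post)
    return post
  }
}

// Thin controller: accept a request body, delegate, return a response.
class PostsController {
  constructor(private service: PostService) {}
  store(body: { title: string }) {
    const post = this.service.create(body.title)
    return { status: 201, body: post }
  }
}

// Walking-skeleton "acceptance test": drive through the public entry point,
// the way client.post('/posts') would in a Japa functional test.
const controller = new PostsController(new PostService())
const response = controller.store({ title: 'Hello' })
console.log(response.status, response.body.id) // 201 1
```

Every layer exists and is wired end-to-end, but each is as small as it can possibly be; that is the whole job of the tracer bullet.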

3. Incremental Loop

For each remaining behavior:

RED:   Write next test → fails
GREEN: Minimal code to pass → passes

Rules:

  • One test at a time
  • Only enough code to pass current test
  • Don't anticipate future tests
  • Keep tests focused on observable behavior

4. Refactor

After all tests pass, look for refactor candidates. Never refactor while RED — get to GREEN first.

Per-Cycle Checklist

[ ] Test describes behavior, not implementation
[ ] Test uses public interface only
[ ] Test would survive internal refactor
[ ] Only mocking types I own (adapters, not third-party)
[ ] Code is minimal for this test
[ ] No speculative features added

Common Mistakes

Mistake                                        Fix
Mocking the framework (Route, HttpContext)     Use functional tests via @japa/api-client
Testing methods instead of behavior            Name tests by feature: "calculates tax for premium users"
Skipping the failing test step                 Watch it fail first — verify diagnostics are useful
Fat controllers with all logic inline          Extract services, inject with @inject()
Mocking third-party libraries directly         Write adapter service, mock the adapter
Writing all tests before implementation        Vertical slices: one RED→GREEN cycle at a time
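
The adapter fix from the table can be sketched as follows; PaymentGateway, HttpPaymentGateway, and the client shape are all invented names, standing in for whatever third-party SDK you actually wrap:

```typescript
// Port: an interface YOU own, shaped by what your app needs — not by the SDK.
interface PaymentGateway {
  charge(amountCents: number): Promise<{ ok: boolean }>
}

// Adapter: wraps a hypothetical third-party client behind the port.
// (The client's post() signature is invented for this sketch.)
class HttpPaymentGateway implements PaymentGateway {
  constructor(
    private client: { post(url: string, body: unknown): Promise<unknown> }
  ) {}
  async charge(amountCents: number) {
    await this.client.post('/charges', { amount: amountCents })
    return { ok: true }
  }
}

// In tests: mock the adapter — a type you own — never the SDK itself.
class FakePaymentGateway implements PaymentGateway {
  charges: number[] = []
  async charge(amountCents: number) {
    this.charges.push(amountCents)
    return { ok: true }
  }
}

// Code under test depends only on the port.
async function checkout(gateway: PaymentGateway, amountCents: number) {
  return gateway.charge(amountCents)
}

const fake = new FakePaymentGateway()
checkout(fake, 500).then((r) => console.log(r.ok, fake.charges[0])) // true 500
```

Because only types you own are mocked, an SDK upgrade touches one adapter class and zero tests.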

Detailed References

  • acceptance-tests.md: tool selection and examples for acceptance tests
  • tests.md: test examples
  • mocking.md: mocking guidelines
