# /write-tests

Write failing tests that encode acceptance criteria.

## Usage

- `/write-tests`
- `/write-tests Add logout button to header`
## When to Use

- After creating a todo that requires implementation
- Before running `/implement`
- When you have clear acceptance criteria
## When NOT to Use
- Implementation already exists (tests would pass immediately)
- You're exploring or prototyping (not TDD mode)
- Just adding to existing test coverage
## What It Does
- Analyzes the current todo/requirement
- Reads existing tests to understand patterns
- Writes test files that verify acceptance criteria
- Verifies tests FAIL (proves they test something real)
- Returns test file paths for `/implement`
## Output
The tester agent will produce:
## Tests Written
### Files Created/Modified
- `tests/test_client.py`
### Tests Added
| Test | Verifies |
|------|----------|
| `test_client_reset_returns_observation` | Reset returns valid observation |
| `test_client_step_advances_state` | Step mutates state correctly |
| `test_client_handles_invalid_action` | Error handling for bad input |
### Verification
All tests FAIL as expected (no implementation yet).
### Next Step
Run `/implement` to make these tests pass.
## Rules
- Read existing tests first to understand patterns and conventions
- Test behavior, not implementation - write from user's perspective
- Integration tests first, then unit tests if needed
- Each test verifies ONE thing clearly
- Run tests to verify they fail before returning
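The "test behavior, not implementation" rule can be illustrated with a toy example; all names below are invented for illustration, not part of the project:

```python
# Toy class to contrast behavior-focused tests with implementation-detail
# tests. `Counter` and its `_n` field are illustrative only.

class Counter:
    def __init__(self):
        self._n = 0          # private detail; tests should not reach in
    def increment(self):
        self._n += 1
    def value(self):
        return self._n

# Behavior-focused: asserts only on the public contract.
def test_increment_raises_value_by_one():
    c = Counter()
    c.increment()
    assert c.value() == 1

# Anti-pattern (implementation detail): peeking at `_n` breaks as soon
# as internals change, even when observable behavior is identical.
def test_increment_sets_private_field():
    c = Counter()
    c.increment()
    assert c._n == 1
```

The first test survives a refactor of `Counter`'s internals; the second does not, even though both pass today.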
## Anti-patterns (NEVER do these)
- Writing tests that pass without implementation
- Testing implementation details instead of behavior
- Writing overly complex test setups
- Adding implementation code (that's `/implement`'s job)
- Writing tests that duplicate existing coverage
## Completion Criteria
Before returning, verify:
- Tests compile/run successfully (pytest can collect them)
- Tests FAIL (no implementation yet)
- Test names clearly describe what they verify
- Tests follow existing project patterns (see `tests/` for examples)