Implement
User Input
PHASE_OR_TASK = $ARGUMENT
Accept and use the user input ($ARGUMENT). Valid formats:
- A phase name (e.g., "Phase 1", "US1", "Phase 3")
- A specific task ID (e.g., "T001", "T017-T028")
- A task range (e.g., "T001-T009")
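As a rough sketch, the argument could be classified like this; the `Target` enum and `classify` function are illustrative names, not part of the command itself:

```rust
/// Hypothetical classification of the PHASE_OR_TASK argument.
#[derive(Debug, PartialEq)]
enum Target {
    Phase(String),         // "Phase 1", "US1"
    Task(String),          // "T001"
    Range(String, String), // "T001-T009"
}

fn classify(arg: &str) -> Target {
    // A range is two task IDs joined by a hyphen, e.g. "T017-T028".
    if let Some((start, end)) = arg.split_once('-') {
        if start.starts_with('T') && end.starts_with('T') {
            return Target::Range(start.to_string(), end.to_string());
        }
    }
    // A single task ID is "T" followed by digits; anything else is a phase name.
    if arg.starts_with('T') && arg[1..].chars().all(|c| c.is_ascii_digit()) {
        Target::Task(arg.to_string())
    } else {
        Target::Phase(arg.to_string())
    }
}

fn main() {
    assert_eq!(classify("T001"), Target::Task("T001".into()));
    assert!(matches!(classify("Phase 1"), Target::Phase(_)));
    assert!(matches!(classify("US1"), Target::Phase(_)));
    assert!(matches!(classify("T017-T028"), Target::Range(_, _)));
}
```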
Context Loading
- Identify the current feature ID from the git branch (e.g., feature/001-mcp-integration → 001-mcp-integration)
- Find the feature folder: check `backlog/plans/{feature-id}/` first, then `backlog/plans/_completed/{feature-id}/`
- Read `{feature-path}/tasks.md` to get the task list
- Read `specs/tests/{feature-id}.md` if it exists
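A minimal sketch of the branch-to-feature-ID rule above; `feature_id` is a hypothetical helper, and the real command would read the current branch via git:

```rust
/// "feature/001-mcp-integration" -> "001-mcp-integration"
fn feature_id(branch: &str) -> Option<&str> {
    branch.strip_prefix("feature/")
}

fn main() {
    assert_eq!(
        feature_id("feature/001-mcp-integration"),
        Some("001-mcp-integration")
    );

    // Candidate folders to check, in the order given above.
    let id = "001-mcp-integration";
    let candidates = [
        format!("backlog/plans/{id}/"),
        format!("backlog/plans/_completed/{id}/"),
    ];
    for path in &candidates {
        println!("checking {path}");
    }
}
```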
Phase Execution
If the input is a phase name:
Phase Entry: Spec Tests (RED)
- Find the phase's entry spec test task (e.g., "T016 [SPEC] Run spec tests for FR-3...")
- Run the spec tests specified in that task
- Expected: ALL FAIL (requirements not yet implemented)
- If any pass unexpectedly, report this and ask the user to verify
Task Execution Loop
- For each task in the phase:
  - [TEST] tasks: Write failing tests first
    - Run tests: `cargo test {module} --lib` → expect FAIL (RED)
  - [IMPL] tasks: Implement minimal code to pass tests
    - Run tests: `cargo test {module} --lib` → expect PASS (GREEN)
  - [SPEC] tasks: Skip during the main loop (handled at phase boundaries)
- Mark the task as completed in `tasks.md` (check the box)
- Run `make lint` to ensure code quality
- Commit after each logical group of tasks (2-3 tasks)
Phase Exit: Spec Tests (GREEN)
- Find the phase's exit spec test task (e.g., "T029 [SPEC] Run spec tests for FR-3...")
- Run the spec tests specified in that task
- Expected: ALL PASS (requirements satisfied)
- If any fail, identify which implementation tasks need revision
Task-Specific Execution
If the input is a task ID or range:
- Read the task description from `tasks.md`
- Check dependencies (if the task has prerequisites, verify they're completed; see the sketch after this list)
- Execute based on task type:
  - [TEST]: Write tests, run them → expect FAIL
  - [IMPL]: Implement code, run tests → expect PASS
  - [SPEC]: Run spec tests as specified in the task description
- Mark the task as completed in `tasks.md`
- Run `make check && make lint` to verify
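Assuming `tasks.md` marks completed tasks with GitHub-style checkboxes (implied by "check the box" above, though the exact line format is an assumption), the dependency check could look like:

```rust
/// Sketch: a task counts as completed if a checked line mentions its ID.
/// The "- [x] T001 ..." format is an assumption about tasks.md.
fn is_completed(tasks_md: &str, task_id: &str) -> bool {
    tasks_md
        .lines()
        .any(|line| line.trim_start().starts_with("- [x]") && line.contains(task_id))
}

fn main() {
    let tasks_md = "\
- [x] T001 Set up crate
- [ ] T002 Add MCP client stub";
    assert!(is_completed(tasks_md, "T001"));
    assert!(!is_completed(tasks_md, "T002"));
}
```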
TDD Rules
For Rust code, strictly follow Red-Green-Refactor:
- RED: Write failing test first → run `cargo test` → see failure
- GREEN: Write minimal code to pass → run `cargo test` → see pass
- REFACTOR: Clean up code → run `cargo test` → keep passing
For bash/scripts, implement directly without TDD.
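For illustration, a minimal red-green pair; `parse_task_id` is invented purely for this example:

```rust
// RED: the test below is written and committed first, and fails until
// `parse_task_id` exists and returns the right value.
fn parse_task_id(s: &str) -> Option<u32> {
    // GREEN: the smallest implementation that makes the test pass.
    s.strip_prefix('T')?.parse().ok()
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn parses_task_number() {
        assert_eq!(parse_task_id("T017"), Some(17));
        assert_eq!(parse_task_id("Phase 1"), None);
    }
}
```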
Extensive Mocking Detection
STOP and ask the user when a [TEST] task would require:
- Mocking an entire external system (MCP server, database, HTTP server)
- Creating complex fake implementations with multiple methods/traits
- Spawning actual processes or network connections in unit tests
- More than 50 lines of mock/stub code to test a single function
Signs you're over-mocking:
- The mock is as complex as the real implementation
- The test must simulate protocol handshakes or the connection lifecycle
- Tests require async runtime setup for fake services
- Mock state management becomes a testing problem itself
When detected, ask the user:
⚠️ Task {TASK_ID} requires extensive mocking to unit test.
The test would need to mock: {what needs mocking}
Options:
1. **Defer to integration test** - Implement first, test with real/echo server later
2. **Simplify test scope** - Test only pure functions, defer interaction tests
3. **Proceed with mocking** - Accept complexity, write the mock
Which approach do you prefer?
Typical resolution:
- Change [TEST] task to [INTEG] and move to integration phase
- Implement the code first, then write integration tests with real servers
- Update tasks.md to reflect the change
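To make option 2 concrete, here is one way to split a pure, unit-testable core from the I/O that would otherwise need mocking; `Frame` and `encode_frame` are hypothetical names used only to illustrate the split:

```rust
// The protocol framing is kept pure, so it can be unit tested with no mock.
struct Frame {
    method: String,
    body: String,
}

/// Pure function: deterministic, trivially testable.
fn encode_frame(f: &Frame) -> String {
    format!("{} {}\n{}", f.method, f.body.len(), f.body)
}

// The I/O side (sockets, handshakes, connection lifecycle) stays thin and is
// deferred to an integration test against a real or echo server.

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn encodes_length_prefixed_frame() {
        let f = Frame { method: "call".into(), body: "{}".into() };
        assert_eq!(encode_frame(&f), "call 2\n{}");
    }
}
```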
Verification
Before marking phase complete:
- All unit tests pass: `make test-{module}`
- All spec tests pass: `python specs/tests/run_tests_claude.py specs/tests/{feature-id}.md`
- Linting passes: `make lint`
- Build succeeds: `make build`
Finally (REQUIRED)
After implementation is complete, follow these steps in order:
- Run `/progress` to generate a session log in `.ai/YYYY-MM-DD/`
- Run `/commit` to commit any remaining changes
- Close the GitHub issue if specified
This step is mandatory - do not skip the progress log.
Example Usage
/implement "Phase 1" # Implement entire Phase 1 (Setup)
/implement "US1" # Implement User Story 1 phase
/implement "T001" # Implement single task
/implement "T017-T028" # Implement task range