# Implement Feature Skill (`implement-feature`)
## Purpose
Implement features from task specifications or user stories using Test-Driven Development (TDD). This skill writes tests first, implements code to make the tests pass, and validates that the implementation meets all acceptance criteria.
Core Principles:
- Test-Driven Development (Red → Green → Refactor)
- Deterministic operations via `bmad-commands`
- Automated acceptance criteria verification
- Continuous validation
## Prerequisites
- Task specification exists at `workspace/tasks/{task_id}.md`
- `bmad-commands` skill available at `.claude/skills/bmad-commands/`
- Development environment configured
- Test framework installed (Jest or Pytest)
## Workflow

### Step 0: Load Task Specification
Action: Use `bmad-commands` to load the task spec.
Execute:

```bash
python .claude/skills/bmad-commands/scripts/read_file.py \
  --path workspace/tasks/{task_id}.md \
  --output json
```
Parse Response:
- Verify `success == true`
- Extract `outputs.content` for the task specification
- Parse sections: Objective, Acceptance Criteria, Context, Tasks
If task spec not found:
- Error: `file_not_found` in `errors` array
- Action: Create the task spec first using the `create-task-spec` skill
- Halt implementation
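In practice, the `{success, outputs, errors}` envelope can be handled with a small wrapper. A minimal sketch, assuming each script prints a single JSON object to stdout; the `run_bmad` helper and its error behavior are illustrative, not part of `bmad-commands`:

```python
import json
import subprocess

SCRIPTS = ".claude/skills/bmad-commands/scripts"

def run_bmad(script: str, *args: str) -> dict:
    """Run a bmad-commands script and return its outputs on success."""
    result = subprocess.run(
        ["python", f"{SCRIPTS}/{script}", *args, "--output", "json"],
        capture_output=True, text=True,
    )
    payload = json.loads(result.stdout)  # assumes one JSON object on stdout
    if not payload.get("success"):
        # e.g. errors contains "file_not_found" when the task spec is missing
        raise RuntimeError(f"{script} failed: {payload.get('errors')}")
    return payload["outputs"]

# Usage: load the task spec, then parse its sections.
spec = run_bmad("read_file.py", "--path", "workspace/tasks/task-auth-002.md")["content"]
```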
If `subtask_id` is provided:
- Parse the task specification to locate the specific subtask section
- Filter acceptance criteria to only those related to the subtask
- Scope implementation to only the subtask requirements
- Note: All workflow steps below apply only to the selected subtask
See: `references/templates.md` for the required task spec format
### Step 1: Analyze Requirements
Action: Break down acceptance criteria into test cases.
For each acceptance criterion:
- Identify what needs to be tested (behavior/outcome)
- Determine test type (unit/integration/e2e)
- Plan test structure and data
- Identify edge cases
Example Analysis:
AC-1: User can log in with valid credentials
Test Cases:
- [Unit] Should return user object when credentials valid
- [Unit] Should return null when email not found
- [Unit] Should return null when password incorrect
- [Integration] Should create session when login successful
- [Integration] Should return 401 when login fails
Identify Files:
- Files to create (new implementation + tests)
- Files to modify (existing routes, config)
- Files to reference (existing models, utilities)
See: `references/requirement-analysis-guide.md` for detailed analysis patterns
### Step 2: Write Tests (TDD Red Phase)
Action: Write failing tests that cover all acceptance criteria.
Run Tests:

```bash
python .claude/skills/bmad-commands/scripts/run_tests.py \
  --path . \
  --framework auto \
  --output json
```
Verify RED Phase:
- Parse response: `outputs.passed == false`
- Tests fail because the implementation doesn't exist (not because of syntax errors)
If tests pass in RED phase:
- Tests are invalid (code already exists)
- Refine tests to be more specific
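To make the Red phase concrete, here is a pytest sketch for the AC-1 analysis from Step 1. The `auth.login` module and the `authenticate` signature are hypothetical placeholders for whatever the task spec names; the point is that the import itself fails until Step 3:

```python
# tests/test_login.py -- written before auth/login.py exists (RED phase)
from auth.login import authenticate  # hypothetical module, not yet implemented


def test_returns_user_object_when_credentials_valid():
    user = authenticate("alice@example.com", "correct-password")
    assert user is not None
    assert user.email == "alice@example.com"


def test_returns_none_when_email_not_found():
    assert authenticate("nobody@example.com", "whatever") is None


def test_returns_none_when_password_incorrect():
    assert authenticate("alice@example.com", "wrong-password") is None
```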
See: `references/test-examples.md` for comprehensive test patterns and structure
### Step 3: Implement Code (TDD Green Phase)
Action: Write minimum code to make tests pass.
Implementation Strategy:
- Start with simplest test first
- Implement just enough to pass that test
- Run tests after each small change
- Save refactoring for later (that comes in the Refactor phase)
Run Tests:

```bash
python .claude/skills/bmad-commands/scripts/run_tests.py \
  --path . \
  --framework auto \
  --output json
```
Verify GREEN Phase:
- Parse response: `outputs.passed == true`
- Check `outputs.coverage_percent >= 80`
- Verify `outputs.failed_tests == 0`
If tests are still failing:
- Review failure messages in `outputs.failures`
- Fix the implementation
- Re-run tests
- Repeat until GREEN
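Continuing the hypothetical login example, a matching Green-phase sketch writes just enough to pass the tests above; the in-memory store stands in for whatever persistence the real task spec defines:

```python
# auth/login.py -- minimum implementation for the GREEN phase
from dataclasses import dataclass
from typing import Optional


@dataclass
class User:
    email: str
    password: str  # plaintext only in this sketch; real code must hash


# Stand-in store; the real task spec would define actual persistence.
_USERS = {"alice@example.com": User("alice@example.com", "correct-password")}


def authenticate(email: str, password: str) -> Optional[User]:
    user = _USERS.get(email)
    if user is None or user.password != password:
        return None
    return user
```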
See: `references/implementation-examples.md` for implementation patterns
### Step 4: Refactor (TDD Refactor Phase)
Action: Improve code quality while keeping tests green.
Refactoring Targets:
- Remove duplication (DRY principle)
- Improve naming (clarity)
- Extract functions/methods (single responsibility)
- Simplify conditionals (readability)
- Add type safety
- Enhance error handling
After Each Refactor:

```bash
python .claude/skills/bmad-commands/scripts/run_tests.py \
  --path . \
  --framework auto \
  --output json
```
Verify tests stay green:
- `outputs.passed == true` after each refactor
- If tests break, revert the refactor
- Only commit refactors that keep tests green
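As one illustration, a behavior-preserving refactor of the `auth/login.py` sketch above extracts the password check into an intent-revealing helper (a single place to later swap in constant-time hash comparison); tests must stay green before and after:

```python
def _password_matches(user: User, password: str) -> bool:
    # Isolated so a hashing library can replace the comparison later
    return user.password == password


def authenticate(email: str, password: str) -> Optional[User]:
    user = _USERS.get(email)
    if user is not None and _password_matches(user, password):
        return user
    return None
```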
See: `references/refactoring-patterns.md` for common refactoring techniques
### Step 5: Verify Acceptance Criteria
Action: Check that all acceptance criteria from task spec are met.
For each acceptance criterion:
- Identify corresponding tests
- Verify tests pass
- Verify behavior matches requirement
- Check edge cases covered
Automated Verification:

```bash
# Run full test suite
python .claude/skills/bmad-commands/scripts/run_tests.py \
  --path . \
  --framework auto \
  --output json
```
Check:
- `outputs.passed == true`
- `outputs.coverage_percent >= 80`
- `outputs.total_tests >= expected_count`
- All acceptance criteria have corresponding passing tests
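These checks reduce to a few assertions over the parsed response; `expected_count` comes from the Step 1 test-case analysis (the value here is illustrative):

```python
# Gate on the run_tests outputs parsed above.
expected_count = 5  # from the Step 1 analysis of AC-1 (illustrative)

assert outputs["passed"] is True, outputs.get("failures")
assert outputs["coverage_percent"] >= 80
assert outputs["total_tests"] >= expected_count
```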
Manual Verification:
- Review code against technical specifications
- Verify API contracts match spec
- Check data models are correct
- Ensure error handling is complete
### Step 6: Run Validation Suite
Action: Run comprehensive checks before completion.
Validation Checks:
- All tests passing
- Coverage >= 80%
- No syntax errors
- No linting errors
- All files created as specified
- Code follows project standards
Run Final Tests:

```bash
python .claude/skills/bmad-commands/scripts/run_tests.py \
  --path . \
  --framework auto \
  --output json
```
Acceptance Criteria Verification:
- ✅ `tests_passing`: `outputs.passed == true`
- ✅ `coverage_threshold`: `outputs.coverage_percent >= 80`
- ✅ `no_syntax_errors`: No syntax errors in output
- ✅ `task_spec_loaded`: Task spec was successfully loaded in Step 0
- ✅ `requirements_met`: All acceptance criteria verified in Step 5
See: `references/validation-guide.md` for complete validation procedures
## Output
Return structured output with implementation status, test results, and telemetry.
See: `references/templates.md` for the complete output format and examples
## Error Handling
If any step fails:
- Task Spec Not Found: Create the task spec first using the `create-task-spec` skill
- Tests Failing: Review failures in `outputs.failures`, fix the code, re-run
- Coverage Below Threshold: Add more tests to reach 80%
- Syntax Errors: Fix the syntax errors, re-run tests
See: `references/error-scenarios.md` for detailed error handling strategies
## Common Scenarios

### Scenario 1: Ambiguous Requirements
If acceptance criteria are unclear or not testable:
- Halt implementation
- Request requirement refinement
- Suggest using the `refine-story` skill
### Scenario 2: Missing Dependencies
If implementation requires files that don't exist:
- Check if dependency is from another task
- Suggest implementing dependency task first
- OR expand scope to include dependency (with user approval)
### Scenario 3: Tests Failing After Refactor
If tests break during refactoring:
- Revert the refactor
- Run tests to verify green again
- Try smaller refactoring step
- Ensure tests are correct (not implementation-dependent)
### Scenario 4: Large Task with Multiple Subtasks
If a task has multiple independent subtasks that can be implemented separately:
- Use the `--subtask` flag to implement one subtask at a time
- Example: `/implement-feature workspace/tasks/task-auth-002.md --subtask subtask-1`
- This allows incremental development and easier code review
- Each subtask should have its own tests and can be committed independently
## Best Practices
- Follow TDD Cycle - Red → Green → Refactor, no shortcuts
- Keep Tests Focused - One test per behavior
- Mock External Dependencies - Database, APIs, file system (see the sketch after this list)
- Commit Frequently - After each TDD phase
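For the mocking practice, a self-contained pytest sketch using `unittest.mock`; the `LoginService` and its repository interface are toy stand-ins invented for illustration:

```python
from unittest.mock import Mock


class LoginService:
    """Toy service showing dependency injection; not defined by this skill."""

    def __init__(self, repo):
        self.repo = repo

    def authenticate(self, email, password):
        user = self.repo.find_by_email(email)
        if user is None or user.password != password:
            return None
        return user


def test_login_does_not_hit_real_database():
    repo = Mock()                            # replaces the real repository
    repo.find_by_email.return_value = None   # simulate an unknown email
    service = LoginService(repo)

    assert service.authenticate("nobody@example.com", "pw") is None
    repo.find_by_email.assert_called_once_with("nobody@example.com")
```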
See: `references/best-practices.md` for detailed TDD best practices
## Routing Guidance
Use this skill when:
- Task complexity is simple to medium (≤60 complexity score)
- Changes affect ≤5 files
- No breaking changes or migrations
- Clear, testable acceptance criteria
Use alternative when:
- High complexity (>60 complexity score)
- Large-scale changes (>5 files)
- Breaking changes requiring discovery phase
- Migrations or schema changes
- → Route to: `implement-with-discovery` skill
Complexity Assessment:
- Low (0-30): 1-2 files, no database/API changes → Use this skill
- Medium (31-60): 3-5 files, minor schema changes → Use this skill with caution
- High (61-100): 6+ files, migrations, breaking changes → Use `implement-with-discovery`
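The banding reduces to a small routing helper; the complexity score itself is computed elsewhere, so this only maps a precomputed 0-100 value to a route:

```python
def route_for(complexity: int) -> str:
    """Map a 0-100 complexity score to a skill, per the bands above."""
    if complexity <= 30:
        return "implement-feature"            # low: proceed normally
    if complexity <= 60:
        return "implement-feature (caution)"  # medium: proceed carefully
    return "implement-with-discovery"         # high: discovery phase first
```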
## Reference Files
- `references/requirement-analysis-guide.md` - Analyze acceptance criteria, plan tests
- `references/test-examples.md` - Complete test patterns (unit, integration, e2e)
- `references/implementation-examples.md` - Code implementation patterns
- `references/refactoring-patterns.md` - Common refactoring techniques
- `references/validation-guide.md` - Complete validation procedures
- `references/error-scenarios.md` - Error handling strategies
- `references/best-practices.md` - TDD and testing best practices
- `references/templates.md` - Task spec format, output format, commit templates
## Using This Skill
Invoked by the James subagent via routing, or called directly with a `task_id` input.
Part of BMAD Enhanced Development Suite