BMAD Commands
Overview
BMAD Commands provide atomic, testable command primitives that BMAD skills compose into workflows. Each command follows a strict contract with typed inputs/outputs, structured error handling, and built-in telemetry.
Design Principles:
- Deterministic: Same inputs always produce same outputs
- Testable: Pure functions with JSON I/O
- Observable: All commands emit telemetry data
- Composable: Commands are building blocks for skills
Available Commands
read_file
Read file contents with metadata.
Usage:
python scripts/read_file.py --path <file-path> --output json
Inputs:
- path (required): Path to the file to read
Outputs:
- content: File contents as text
- line_count: Number of lines
- size_bytes: File size in bytes
- path: Absolute path to file
Example:
python .claude/skills/bmad-commands/scripts/read_file.py \
--path workspace/tasks/task-001.md \
--output json
Returns:
{
"success": true,
"outputs": {
"content": "# Task Specification\n...",
"line_count": 45,
"size_bytes": 1024,
"path": "/absolute/path/to/task-001.md"
},
"telemetry": {
"command": "read_file",
"duration_ms": 12,
"timestamp": "2025-01-15T10:30:00Z",
"path": "workspace/tasks/task-001.md",
"line_count": 45
},
"errors": []
}
run_tests
Execute tests with specified framework and return structured results.
Usage:
python scripts/run_tests.py --path <test-path> --framework <auto|jest|pytest> --timeout <seconds> --output json
Inputs:
- path (required): Directory containing tests
- framework (optional): Test framework (auto, jest, or pytest; default: jest)
- timeout (optional): Timeout in seconds (default: 120)
Outputs:
- passed: Whether all tests passed (boolean)
- summary: Human-readable summary
- total_tests: Total number of tests
- passed_tests: Number passed
- failed_tests: Number failed
- coverage_percent: Coverage percentage (0-100)
- junit_path: Path to JUnit report
Example:
python .claude/skills/bmad-commands/scripts/run_tests.py \
--path . \
--framework auto \
--timeout 120 \
--output json
Returns:
{
"success": true,
"outputs": {
"passed": true,
"summary": "10/10 tests passed",
"total_tests": 10,
"passed_tests": 10,
"failed_tests": 0,
"coverage_percent": 87,
"junit_path": "./junit.xml"
},
"telemetry": {
"command": "run_tests",
"framework": "jest",
"duration_ms": 4523,
"timestamp": "2025-01-15T10:30:00Z",
"total_tests": 10,
"passed_tests": 10,
"failed_tests": 0
},
"errors": []
}
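A skill that gates on the run_tests outputs above might apply a check like the following minimal sketch; the 80% coverage threshold is an illustrative choice, not a documented default:

```python
def tests_green(response: dict, min_coverage: float = 80.0) -> bool:
    """Decide whether to proceed based on parsed run_tests output (illustrative gate)."""
    out = response.get("outputs", {})
    return bool(
        response.get("success")
        and out.get("passed")
        and out.get("coverage_percent", 0) >= min_coverage
    )
```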
generate_architecture_diagram
Generate architecture diagrams from architecture documents (C4 model, deployment, sequence diagrams).
Usage:
python scripts/generate_architecture_diagram.py --architecture <file-path> --type <diagram-type> --output <output-dir>
Inputs:
- architecture (required): Path to architecture document
- type (required): Diagram type (c4-context, c4-container, c4-component, deployment, sequence)
- output (optional): Output directory (default: docs/diagrams)
Outputs:
- diagram_path: Path to generated diagram file
- diagram_type: Type of diagram generated
- diagram_format: Format (svg, png)
- architecture_source: Source architecture document
Example:
python .claude/skills/bmad-commands/scripts/generate_architecture_diagram.py \
--architecture docs/architecture.md \
--type c4-context \
--output docs/diagrams
Returns:
{
"success": true,
"outputs": {
"diagram_path": "/path/to/docs/diagrams/c4-context-20250131.svg",
"diagram_type": "c4-context",
"diagram_format": "svg",
"architecture_source": "docs/architecture.md"
},
"telemetry": {
"command": "generate_architecture_diagram",
"diagram_type": "c4-context",
"duration_ms": 450,
"timestamp": "2025-01-31T10:30:00Z"
},
"errors": []
}
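Downstream steps can then reference the generated file. The sketch below appends a Markdown image link for the new diagram; the `embed_diagram` helper and the relative `diagrams/` path are assumptions for illustration, not part of the shipped scripts:

```python
from pathlib import Path

def embed_diagram(response: dict, doc: Path) -> None:
    """Append a Markdown image reference for a generated diagram (illustrative)."""
    out = response["outputs"]
    filename = Path(out["diagram_path"]).name
    with doc.open("a", encoding="utf-8") as fh:
        # Assumes the target document sits next to the docs/diagrams output directory.
        fh.write(f"\n![{out['diagram_type']} diagram](diagrams/{filename})\n")
```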
analyze_tech_stack
Analyze the technology stack described in an architecture document, validate compatibility, and identify risks.
Usage:
python scripts/analyze_tech_stack.py --architecture <file-path> --output <json|summary>
Inputs:
- architecture (required): Path to architecture document
- output (optional): Output format (default: json)
Outputs:
- technologies: List of detected technologies with categories
- tech_count: Number of technologies detected
- categories: Technology categories (frontend, backend, database, etc.)
- compatibility: Compatibility analysis and warnings
- architecture_source: Source architecture document
Example:
python .claude/skills/bmad-commands/scripts/analyze_tech_stack.py \
--architecture docs/architecture.md \
--output json
Returns:
{
"success": true,
"outputs": {
"technologies": [
{"name": "React", "category": "frontend", "version": "18+"},
{"name": "Node.js", "category": "backend", "version": "20+"},
{"name": "PostgreSQL", "category": "database", "version": "15+"}
],
"tech_count": 3,
"categories": ["frontend", "backend", "database"],
"compatibility": {
"issues": [],
"warnings": [],
"recommendations": ["Verify versions are compatible"]
},
"architecture_source": "docs/architecture.md"
},
"telemetry": {
"command": "analyze_tech_stack",
"tech_count": 3,
"duration_ms": 180,
"timestamp": "2025-01-31T10:30:00Z"
},
"errors": []
}
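A consuming skill might surface the detected technologies and any compatibility findings. This is a minimal sketch over the parsed response, with field names taken from the example above; the `report_stack` helper itself is illustrative:

```python
def report_stack(response: dict) -> None:
    """Print detected technologies and compatibility findings (illustrative)."""
    out = response.get("outputs", {})
    for tech in out.get("technologies", []):
        print(f"{tech['category']}: {tech['name']} {tech.get('version', '')}".strip())
    compat = out.get("compatibility", {})
    for finding in compat.get("issues", []) + compat.get("warnings", []):
        print(f"WARNING: {finding}")
    for rec in compat.get("recommendations", []):
        print(f"Recommendation: {rec}")
```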
extract_adrs
Extract Architecture Decision Records (ADRs) from architecture document into separate files.
Usage:
python scripts/extract_adrs.py --architecture <file-path> --output <output-dir>
Inputs:
- architecture (required): Path to architecture document
- output (optional): Output directory for ADR files (default: docs/adrs)
Outputs:
- adrs_extracted: Number of ADRs extracted
- adrs: List of ADRs with number, title, and file path
- output_directory: Directory where ADRs were saved
- architecture_source: Source architecture document
Example:
python .claude/skills/bmad-commands/scripts/extract_adrs.py \
--architecture docs/architecture.md \
--output docs/adrs
Returns:
{
"success": true,
"outputs": {
"adrs_extracted": 5,
"adrs": [
{
"number": "001",
"title": "Technology Stack Selection",
"file": "/path/to/docs/adrs/ADR-001-technology-stack-selection.md"
},
{
"number": "002",
"title": "Database Choice",
"file": "/path/to/docs/adrs/ADR-002-database-choice.md"
}
],
"output_directory": "/path/to/docs/adrs",
"architecture_source": "docs/architecture.md"
},
"telemetry": {
"command": "extract_adrs",
"adrs_count": 5,
"duration_ms": 120,
"timestamp": "2025-01-31T10:30:00Z"
},
"errors": []
}
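If a skill needs a quick index of the extracted ADRs, a minimal sketch over the fields above could look like this (the `adr_index` helper is illustrative):

```python
def adr_index(response: dict) -> list[str]:
    """Render one index line per extracted ADR (illustrative)."""
    return [
        f"ADR-{adr['number']}: {adr['title']} ({adr['file']})"
        for adr in response.get("outputs", {}).get("adrs", [])
    ]
```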
validate_patterns
Validate architectural patterns against best practices and check that they are appropriate for the stated requirements.
Usage:
python scripts/validate_patterns.py --architecture <file-path> [--requirements <file-path>] --output <json|summary>
Inputs:
- architecture (required): Path to architecture document
- requirements (optional): Path to requirements document
- output (optional): Output format (default: json)
Outputs:
- detected_patterns: List of architectural patterns found
- validation: Validation results including warnings and recommendations
- architecture_source: Source architecture document
- requirements_source: Source requirements document (if provided)
Example:
python .claude/skills/bmad-commands/scripts/validate_patterns.py \
--architecture docs/architecture.md \
--requirements docs/prd.md \
--output json
Returns:
{
"success": true,
"outputs": {
"detected_patterns": [
{
"name": "Microservices",
"category": "architectural",
"validated": true,
"warnings": []
},
{
"name": "Repository Pattern",
"category": "architectural",
"validated": true,
"warnings": []
}
],
"validation": {
"patterns_validated": 2,
"patterns_appropriate": 2,
"anti_patterns_detected": 0,
"warnings": [],
"recommendations": [
"Validate pattern complexity matches team expertise",
"Ensure pattern choice aligns with scale requirements"
]
},
"architecture_source": "docs/architecture.md",
"requirements_source": "docs/prd.md"
},
"telemetry": {
"command": "validate_patterns",
"patterns_count": 2,
"anti_patterns_count": 0,
"duration_ms": 210,
"timestamp": "2025-01-31T10:30:00Z"
},
"errors": []
}
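A quality-oriented skill might gate on the validation block. This is a hedged sketch; the pass criteria here are assumptions, not documented policy:

```python
def patterns_ok(response: dict) -> bool:
    """Gate on validate_patterns results (illustrative criteria)."""
    validation = response.get("outputs", {}).get("validation", {})
    return (
        validation.get("anti_patterns_detected", 0) == 0
        and not validation.get("warnings", [])
    )
```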
Response Format
All commands return JSON with this structure:
{
"success": boolean,
"outputs": {
// Command-specific outputs
},
"telemetry": {
"command": string,
"duration_ms": number,
"timestamp": string,
// Command-specific telemetry
},
"errors": [
// Array of error strings (empty if success=true)
]
}
Exit Codes:
- 0: Command succeeded (success: true)
- 1: Command failed (success: false)
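A minimal sketch of checking a parsed response against this envelope (not part of the shipped scripts):

```python
from typing import Any

REQUIRED_KEYS = {"success", "outputs", "telemetry", "errors"}

def validate_envelope(response: dict[str, Any]) -> None:
    """Raise if a parsed command response does not match the documented structure."""
    missing = REQUIRED_KEYS - response.keys()
    if missing:
        raise ValueError(f"response missing keys: {sorted(missing)}")
    # errors should be empty exactly when success is true
    if response["success"] and response["errors"]:
        raise ValueError("success is true but errors is non-empty")
```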
Using Commands from Skills
To call a command from another skill, execute its script and parse the JSON output.
Example in a skill's SKILL.md:
### Step 1: Read Task Specification
Execute the read_file command:
python .claude/skills/bmad-commands/scripts/read_file.py \
--path workspace/tasks/{task_id}.md \
--output json
Parse the JSON response and extract `outputs.content` for the task specification.
### Step 2: Run Tests
Execute the run_tests command:
python .claude/skills/bmad-commands/scripts/run_tests.py \
--path . \
--framework auto \
--output json
Parse the JSON response and check `outputs.passed` to verify tests passed.
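Outside of the SKILL.md snippet above, a skill implemented in Python could wrap this pattern as follows; `run_command` and the hard-coded script path are illustrative, not a shipped helper:

```python
import json
import subprocess

READ_FILE = ".claude/skills/bmad-commands/scripts/read_file.py"  # adjust to your layout

def run_command(script: str, *args: str) -> dict:
    """Run a bmad-commands script and return its parsed JSON response."""
    proc = subprocess.run(
        ["python", script, *args, "--output", "json"],
        capture_output=True, text=True,
    )
    response = json.loads(proc.stdout)
    # Exit code 0 corresponds to success: true, per the Response Format section.
    assert (proc.returncode == 0) == response["success"]
    return response

# Step 1: read the task specification, then extract outputs.content
task = run_command(READ_FILE, "--path", "workspace/tasks/task-001.md")
if task["success"]:
    spec = task["outputs"]["content"]
```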
Error Handling
All commands handle errors gracefully and return structured error information:
{
"success": false,
"outputs": {},
"telemetry": {
"command": "read_file",
"duration_ms": 5,
"timestamp": "2025-01-15T10:30:00Z"
},
"errors": ["file_not_found"]
}
Common Errors:
- file_not_found: File doesn't exist
- path_is_not_file: Path is a directory
- permission_denied: Insufficient permissions
- timeout: Operation exceeded timeout
- invalid_path: Path validation failed
- unexpected_error: Unexpected error occurred
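How a caller reacts to these codes is up to the consuming skill. A hedged sketch follows; the remediation messages and retry policy are assumptions, not command output:

```python
RETRYABLE = {"timeout"}  # assumption: only timeouts are worth retrying

def handle_errors(response: dict) -> None:
    """Print remediation hints for documented error codes (illustrative)."""
    for code in response.get("errors", []):
        if code == "file_not_found":
            print("Check that --path points to an existing file.")
        elif code == "path_is_not_file":
            print("The path is a directory; pass a file instead.")
        elif code == "permission_denied":
            print("Verify file permissions for the current user.")
        elif code in RETRYABLE:
            print("Transient failure; consider retrying with a larger --timeout.")
        else:
            print(f"Unhandled error code: {code}")
```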
Telemetry
All commands emit telemetry data for observability:
- command: Command name
- duration_ms: Execution time in milliseconds
- timestamp: ISO 8601 timestamp
- Command-specific metrics (e.g., line_count, test_count)
This telemetry enables:
- Performance monitoring
- Usage analytics
- Debugging workflows
- Production observability
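For example, a monitoring step could aggregate durations across a batch of responses. This is an illustrative sketch, not a shipped utility:

```python
from collections import defaultdict

def average_durations(responses: list[dict]) -> dict[str, float]:
    """Average duration_ms per command across parsed responses (illustrative)."""
    samples: dict[str, list[int]] = defaultdict(list)
    for response in responses:
        telemetry = response.get("telemetry", {})
        if "command" in telemetry and "duration_ms" in telemetry:
            samples[telemetry["command"]].append(telemetry["duration_ms"])
    return {cmd: sum(ms) / len(ms) for cmd, ms in samples.items()}
```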
Command Contracts
Full command contracts (inputs, outputs, errors, telemetry) are documented in:
references/command-contracts.yaml
Reference this file when:
- Creating new commands
- Updating existing commands
- Integrating commands into skills
- Understanding command behavior
Testing Commands
Test commands independently before using in workflows:
# Test read_file
python .claude/skills/bmad-commands/scripts/read_file.py \
--path README.md \
--output json
# Test run_tests (if you have a test suite)
python .claude/skills/bmad-commands/scripts/run_tests.py \
--path . \
--framework auto \
--output json
Verify:
- JSON output is valid
- Exit code is 0 for success, 1 for failure
- Telemetry data is present
- Errors are structured
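These checks can also be automated. A minimal pytest-style sketch, assuming the default repository layout:

```python
import json
import subprocess

def test_read_file_contract():
    """Smoke-test the read_file response contract (illustrative)."""
    proc = subprocess.run(
        ["python", ".claude/skills/bmad-commands/scripts/read_file.py",
         "--path", "README.md", "--output", "json"],
        capture_output=True, text=True,
    )
    response = json.loads(proc.stdout)           # JSON output is valid
    assert proc.returncode in (0, 1)             # exit code is 0 or 1
    assert (proc.returncode == 0) == response["success"]
    assert "telemetry" in response               # telemetry data is present
    assert isinstance(response["errors"], list)  # errors are structured
```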
Extending Commands
To add new commands:
- Create scripts/<command_name>.py
- Follow the standard response format
- Add the command contract to references/command-contracts.yaml
- Update this SKILL.md with usage documentation
- Make the script executable: chmod +x scripts/<command_name>.py
- Test independently before integrating
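A minimal skeleton for a new command script, following the standard response format and exit codes; the command name, argument, and output field below are placeholders:

```python
#!/usr/bin/env python3
"""Skeleton for a new bmad-commands script; names and fields are illustrative."""
import argparse
import json
import sys
import time
from datetime import datetime, timezone

def main() -> int:
    parser = argparse.ArgumentParser()
    parser.add_argument("--output", default="json")
    parser.parse_args()

    start = time.monotonic()
    outputs: dict = {}
    errors: list[str] = []
    try:
        # Command-specific work goes here.
        outputs["message"] = "hello"
    except Exception:
        errors.append("unexpected_error")

    response = {
        "success": not errors,
        "outputs": outputs if not errors else {},
        "telemetry": {
            "command": "my_new_command",  # illustrative name
            "duration_ms": int((time.monotonic() - start) * 1000),
            "timestamp": datetime.now(timezone.utc).isoformat(),
        },
        "errors": errors,
    }
    print(json.dumps(response, indent=2))
    # Exit code mirrors success, per the Response Format section.
    return 0 if response["success"] else 1

if __name__ == "__main__":
    sys.exit(main())
```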
Philosophy
Commands are the foundation layer of BMAD's 3-layer architecture:
- Commands (this skill): Atomic, testable primitives
- Skills: Compose commands into workflows
- Subagents: Orchestrate skills with routing and guardrails
By keeping commands deterministic and testable, we enable:
- Unit testing of the framework itself
- Reliable skill composition
- Observable workflows
- Production-ready operations
Utility Scripts
In addition to command primitives, this skill includes utility scripts for UX and system management:
bmad-wizard.py
Interactive command wizard to help users find the right command for their task.
Usage:
python .claude/skills/bmad-commands/scripts/bmad-wizard.py
python .claude/skills/bmad-commands/scripts/bmad-wizard.py --list-all
python .claude/skills/bmad-commands/scripts/bmad-wizard.py --subagent alex
Features:
- Goal-based recommendations
- Interactive command selection
- Browse all commands
- Filter by subagent
Documentation: See docs/UX-IMPROVEMENTS-GUIDE.md
error-handler.py
Professional error handling system with structured errors and remediation guidance.
Usage:
python .claude/skills/bmad-commands/scripts/error-handler.py
Features:
- 10 predefined error templates
- Structured error format
- Remediation steps
- Color-coded severity levels
- JSON output support
Documentation: See docs/UX-IMPROVEMENTS-GUIDE.md
progress-visualizer.py
Real-time progress tracking for workflows with multiple visualization styles.
Usage:
python .claude/skills/bmad-commands/scripts/progress-visualizer.py
Features:
- 7-step workflow tracking
- 4 visualization styles (bar, spinner, dots, minimal)
- ETA calculation
- Elapsed time tracking
- Real-time updates
Documentation: See docs/UX-IMPROVEMENTS-GUIDE.md
monitor-skills.py
Skill validation and monitoring tool for ensuring all skills are properly loaded.
Usage:
python .claude/skills/bmad-commands/scripts/monitor-skills.py
python .claude/skills/bmad-commands/scripts/monitor-skills.py --validate-only
python .claude/skills/bmad-commands/scripts/monitor-skills.py --category planning
python .claude/skills/bmad-commands/scripts/monitor-skills.py --skill implement-feature
python .claude/skills/bmad-commands/scripts/monitor-skills.py --json output.json
Features:
- Discover and validate all skills
- Check YAML frontmatter
- Verify workflow steps
- Export to JSON
- Category filtering
Documentation: See docs/SKILL-LOADING-MONITORING.md
health-check.sh
Quick health check to validate system configuration and skill loading.
Usage:
./.claude/skills/bmad-commands/scripts/health-check.sh
Features:
- Check project structure
- Validate skills by category
- Check Python environment
- Validate required packages
- Check configuration
- Disk space check
Documentation: See docs/SKILL-LOADING-MONITORING.md
deploy-to-project.sh
Smart deployment script for deploying BMAD Enhanced to other projects.
Usage:
./.claude/skills/bmad-commands/scripts/deploy-to-project.sh <target-directory>
./.claude/skills/bmad-commands/scripts/deploy-to-project.sh --full <target-directory>
./.claude/skills/bmad-commands/scripts/deploy-to-project.sh --dry-run <target-directory>
Features:
- Minimal or full deployment modes
- Dry-run mode
- Force overwrite option
- Symlink support for full mode
- Post-deployment instructions
Documentation: See docs/DEPLOYMENT-TO-PROJECTS.md