ralph-orchestrator
- spec-interview → Gather comprehensive requirements through guided discovery
- generate-prd → Create actionable Product Requirements Document
- ralph-convert-prd → Transform PRD into atomic user stories (prd.json)
- Subagent execution → Spawn ralph-coder/ralph-tester subagents via Task tool
This skill coordinates these tools while keeping you in control at decision points.
<essential_principles>
ALL code implementation MUST happen through subagents — ralph-coder implements production code, ralph-tester writes tests and verifies. You are the orchestrator, NOT the implementer. Do not write code, create files, modify source files, or make any project changes directly.
If you catch yourself about to write code or modify project files: STOP. Spawn a subagent instead.
Do NOT try to fix issues yourself, retry automatically, or continue past errors. Present the error clearly to the user and wait for their instructions.
Running independent stories in parallel batches maximizes throughput while maintaining correct dependency ordering.
Separation gives each agent a focused context window. The orchestrator wraps ANY agent with Ralph context (story spec, return format, constraints) so even non-Ralph agents integrate seamlessly.
This lets users configure best-practice agents for their stack, and Ralph automatically uses them.
Both agents also update the docs/ folder with documentation about new features, APIs, test setup, and architecture changes. The tasks/test-log.md and tasks/review-notes.md files are updated by tester agents with test registries and improvement recommendations.
Never assume agents "remember" previous stories — but they CAN read shared knowledge files.
Right-sized:
- Add a database column
- Create a UI component
- Update a server action
- Implement a filter
Too large (will fail):
- Build entire dashboard
- Add authentication system
- Refactor entire API
- API stories: curl endpoints with real data, check response codes and bodies
- UI stories: Playwright e2e tests that navigate and interact with real UI
- Database stories: Run migrations, query DB directly to confirm schema
- Infra stories: Health checks, config validation, service startup
Static checks (typecheck, lint) are baseline. Runtime validation is required.
After each story, the tester runs ALL existing tests (via testCommands in prd.json root) to catch regressions. A story is NOT done until the entire test suite passes.
Stories track attempts / maxAttempts to prevent infinite retries on broken stories.
Don't rush. Bad requirements = wasted iterations.
</essential_principles>
<prd_json_schema>
```json
{
  "project": "[Project Name]",
  "branchName": "ralph/[feature-name-kebab-case]",
  "description": "[Feature description]",
  "testCommands": {
    "unit": "npm test",
    "integration": "npm run test:integration",
    "e2e": "npx playwright test",
    "typecheck": "npm run typecheck"
  },
  "userStories": [
    {
      "id": "US-001",
      "title": "[Story title]",
      "description": "As a [user], I want [feature] so that [benefit]",
      "storyType": "backend | frontend | database | api | infra | test",
      "acceptanceCriteria": ["Specific criterion 1", "Typecheck passes"],
      "verificationCommands": [
        { "command": "npm run typecheck", "expect": "exit_code:0" },
        { "command": "curl -s http://localhost:3000/api/...", "expect": "contains:expected" }
      ],
      "status": "pending",
      "priority": 1,
      "attempts": 0,
      "maxAttempts": 3,
      "notes": "",
      "blockedBy": [],
      "docsToUpdate": ["README.md", "docs/api.md"],
      "completedAt": null,
      "lastAttemptLog": ""
    }
  ]
}
```
Expect matchers for verificationCommands:
- `exit_code:0` — command exits with code 0
- `exit_code:N` — command exits with specific code N
- `contains:STRING` — stdout contains STRING
- `not_empty` — stdout is non-empty
- `matches:REGEX` — stdout matches regex pattern
</prd_json_schema>
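The matcher semantics above can be made concrete with a short sketch. This is not the Ralph implementation, just an illustration of how one verification command and its expect string could be evaluated:

```python
import re
import subprocess

def check(command, expect):
    """Run one verification command and evaluate its expect matcher."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    out = result.stdout
    if expect.startswith("exit_code:"):
        return result.returncode == int(expect.split(":", 1)[1])
    if expect.startswith("contains:"):
        return expect.split(":", 1)[1] in out
    if expect == "not_empty":
        return out.strip() != ""
    if expect.startswith("matches:"):
        return re.search(expect.split(":", 1)[1], out) is not None
    raise ValueError(f"unknown matcher: {expect}")
```

Note that `contains`, `not_empty`, and `matches` inspect stdout only, while `exit_code` inspects the process exit status.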
- Full pipeline - Start from scratch (spec → PRD → prd.json → execute)
- Continue from PRD - Already have PRD, convert and execute
- Execute only - Already have prd.json, run Ralph
- Check status - View current prd.json progress
Wait for response before proceeding.
After reading the workflow, follow it exactly.
<quick_reference>
Key Files:
| File | Purpose |
|---|---|
| SPEC.md | Comprehensive requirements from spec-interview |
| tasks/prd-*.md | Product Requirements Document |
| tasks/prd.json | Atomic user stories for Ralph |
| tasks/progress.txt | Learnings between iterations |
| tasks/test-log.md | Registry of all tests created per story (updated by tester agents) |
| tasks/review-notes.md | Improvement recommendations after each story (updated by tester agents) |
| tasks/common_knowledge.md | Shared knowledge base — patterns, conventions, gotchas discovered across stories (updated by both coder and tester agents, read by orchestrator between batches) |
| docs/ | Project documentation — updated by both coder and tester agents with new features, APIs, test setup, etc. |
Agents:
| Agent | Role | Fallback |
|---|---|---|
| ralph-coder | Implements production code + docs for one story | Default coder when no project-specific agent matches |
| ralph-tester | Writes tests + runs verification for one story | Default tester when no project-specific agent matches |
| Project agents | Discovered from .claude/agents/ and ~/.claude/agents/ | Matched to storyTypes by description keywords |
Execution model:
```text
BATCH 1 (independent stories):
  Phase 1: Task(coder, US-001, worktree) + Task(coder, US-005, worktree) ← parallel
  Phase 2: Task(tester, US-001) + Task(tester, US-005)                   ← parallel
  Merge:   US-001 → main, US-005 → main                                  ← sequential
BATCH 2 (stories that depended on BATCH 1):
  Phase 1: Task(coder, US-002, worktree) + Task(coder, US-003, worktree)
  Phase 2: Task(tester, US-002) + Task(tester, US-003)
  Merge:   US-002 → main, US-003 → main
```
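Batch membership follows from the story fields in prd.json: a story is runnable when it is pending, has attempts remaining, and every `blockedBy` dependency is done. A hypothetical selection helper (the function name is illustrative):

```python
def next_batch(stories):
    """Return stories runnable now: pending, attempts left, all blockers done."""
    done = {s["id"] for s in stories if s["status"] == "done"}
    return [
        s for s in stories
        if s["status"] == "pending"
        and s["attempts"] < s["maxAttempts"]          # skip exhausted stories
        and all(dep in done for dep in s.get("blockedBy", []))
    ]
```

Calling this repeatedly after each merge yields the batch sequence shown above.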
Commands:
```bash
# Check story status
cat tasks/prd.json | jq '.userStories[] | {id, title, status, attempts}'

# View learnings
cat tasks/progress.txt

# View test registry
cat tasks/test-log.md

# View review notes
cat tasks/review-notes.md
```
</quick_reference>
<workflows_index>
| Workflow | Purpose |
|---|---|
| full-pipeline.md | Complete flow: spec → PRD → prd.json → execute |
| from-prd.md | Convert existing PRD and execute |
| execute-only.md | Run Ralph on existing prd.json |
| check-status.md | View current progress |
</workflows_index>
<success_criteria> Pipeline is complete when:
- Requirements gathered through spec-interview (including verification environment)
- PRD created with verifiable acceptance criteria
- prd.json has atomic stories with storyType, verificationCommands, and blockedBy
- All stories have `status: "done"` in prd.json
- All verification commands passed (real runtime checks, not just typecheck)
- Code committed and merged to main via worktree branches </success_criteria>