# Orchestrator - Automated Multi-Agent Coordinator
## When to use
- Complex feature requires multiple specialized agents working in parallel
- User wants automated execution without manually spawning agents
- Full-stack implementation spanning backend, frontend, mobile, and QA
- User says "run it automatically", "run in parallel", or similar automation requests
## When NOT to use
- Simple single-domain task -> use the specific agent directly
- User wants step-by-step manual control -> use workflow-guide
- Quick bug fixes or minor changes
## Important

This skill orchestrates CLI subagents via `oh-my-ag agent:spawn`. The CLI vendor (gemini, claude, codex, qwen) is resolved from configuration, and vendor-specific execution protocols are injected automatically. Each subagent runs as an independent process.
## Configuration
| Setting | Default | Description |
|---|---|---|
| MAX_PARALLEL | 3 | Max concurrent subagents |
| MAX_RETRIES | 2 | Retry attempts per failed task |
| POLL_INTERVAL | 30s | Status check interval |
| MAX_TURNS (impl) | 20 | Turn limit for backend/frontend/mobile |
| MAX_TURNS (review) | 15 | Turn limit for qa/debug |
| MAX_TURNS (plan) | 10 | Turn limit for pm |
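As a rough sketch of how the MAX_PARALLEL cap gates spawning (the function and variable names below are illustrative, not part of oh-my-ag):

```shell
# Illustrative sketch only: gate spawning on the MAX_PARALLEL cap.
MAX_PARALLEL=3

can_spawn() {
  # $1 = number of subagents currently running
  if [ "$1" -lt "$MAX_PARALLEL" ]; then
    echo "spawn"
  else
    echo "wait"
  fi
}

can_spawn 2   # below the cap
can_spawn 3   # at the cap: queue the task instead
```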
## Memory Configuration

Memory provider and tool names are configurable via `mcp.json`:
```json
{
  "memoryConfig": {
    "provider": "serena",
    "basePath": ".serena/memories",
    "tools": {
      "read": "read_memory",
      "write": "write_memory",
      "edit": "edit_memory"
    }
  }
}
```
## Workflow Phases
PHASE 1 - Plan: Analyze request -> decompose tasks -> generate session ID
PHASE 2 - Setup: Use memory write tool to create `orchestrator-session.md` + `task-board.md`
PHASE 3 - Execute: Spawn agents by priority tier (never exceed MAX_PARALLEL)
PHASE 4 - Monitor: Poll every POLL_INTERVAL; handle completed/failed/crashed agents
PHASE 4.5 - Verify: Run `oh-my-ag verify {agent-type}` per completed agent
PHASE 5 - Collect: Read all `result-{agent}.md`, compile summary, clean up progress files
See `resources/subagent-prompt-template.md` for prompt construction.
See `resources/memory-schema.md` for memory file formats.
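The monitor phase above can be sketched as a simple poll loop. This is a standalone sketch with stubs; in the real flow, the status check would read each agent's `progress-{agent}.md`:

```shell
# Standalone sketch of the PHASE 4 poll loop. POLL_INTERVAL is 30s in the
# configuration table; it is 0 here so the sketch finishes instantly.
POLL_INTERVAL=0
remaining=2   # stub: pretend two subagents are still running

agents_running() { [ "$remaining" -gt 0 ]; }

poll_once() {
  # Stub for reading progress files and handling completed/failed/crashed agents.
  remaining=$((remaining - 1))
  echo "poll: $remaining agent(s) still running"
}

while agents_running; do
  sleep "$POLL_INTERVAL"
  poll_once
done
```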
## Memory File Ownership
| File | Owner | Others |
|---|---|---|
| orchestrator-session.md | orchestrator | read-only |
| task-board.md | orchestrator | read-only |
| progress-{agent}.md | that agent | orchestrator reads |
| result-{agent}.md | that agent | orchestrator reads |
## Agent-to-Agent Review Loop (PHASE 4.5)
After each agent completes, enter an iterative review loop — not a single-pass verification.
### Loop Flow
```
Agent completes work
  ↓
[1] Self-Review: Agent reviews its own changes
  ↓
[2] Verify: Run `oh-my-ag verify {agent-type} --workspace {workspace}`
  ↓ FAIL → Agent receives feedback, fixes, back to [1]
  ↓ PASS
[3] Cross-Review: QA agent reviews the changes
  ↓ FAIL → Agent receives review feedback, fixes, back to [1]
  ↓ PASS
Accept result ✓
```
### Step Details
[1] Self-Review: Before requesting external review, the implementation agent must:
- Re-read its own diff and check against the task's acceptance criteria
- Run lint, type-check, and tests in the workspace
- Fix any issues found before proceeding
[2] Automated Verify: Run `oh-my-ag verify {agent-type} --workspace {workspace} --json`
- PASS (exit 0): Proceed to cross-review
- FAIL (exit 1): Feed verify output back to the agent as correction context
[3] Cross-Review: Spawn QA agent to review the changes:
- QA agent reads the diff, runs checks, evaluates against acceptance criteria
- If `docs/CODE-REVIEW.md` exists, QA agent uses it as the review checklist
- QA agent outputs: PASS (with optional nits) or FAIL (with specific issues)
- On FAIL: issues are fed back to the implementation agent for fixing
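The PASS/FAIL branching in step [2] keys off the exit code. A minimal sketch, with a stub standing in for the real CLI call:

```shell
# `fake_verify` stands in for `oh-my-ag verify {agent-type} --workspace {workspace} --json`;
# it simply returns the exit code it is given so the sketch runs standalone.
fake_verify() { return "$1"; }

handle_verify() {
  if fake_verify "$1"; then
    echo "proceed-to-cross-review"    # exit 0 = PASS
  else
    echo "feed-output-back-to-agent"  # exit 1 = FAIL -> correction context
  fi
}

handle_verify 0
handle_verify 1
```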
### Loop Limits
| Counter | Max | On Exceeded |
|---|---|---|
| Self-review + fix cycles | 3 | Escalate to cross-review regardless |
| Cross-review rejections | 2 | Report to user with review history |
| Total loop iterations | 5 | Force-complete with quality warning |
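The three counters above can be checked in priority order, total first. A hedged sketch of that decision (the ordering and names are illustrative):

```shell
# Loop-limit decision, checked total-first so the hard cap always wins.
SELF_REVIEW_MAX=3
CROSS_REJECT_MAX=2
TOTAL_MAX=5

# $1 = self-review cycles, $2 = cross-review rejections, $3 = total iterations
next_action() {
  if [ "$3" -ge "$TOTAL_MAX" ]; then
    echo "force-complete-with-warning"
  elif [ "$2" -ge "$CROSS_REJECT_MAX" ]; then
    echo "report-to-user"
  elif [ "$1" -ge "$SELF_REVIEW_MAX" ]; then
    echo "escalate-to-cross-review"
  else
    echo "continue-loop"
  fi
}
```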
### Review Feedback Format
When feeding review results back to the implementation agent:
```markdown
## Review Feedback (iteration {n}/{max})
**Reviewer**: {self / verify / qa-agent}
**Verdict**: FAIL
**Issues**:
1. {specific issue with file and line reference}
2. {specific issue}
**Fix instruction**: {what to change}
```
This loop replaces single-pass verification. Most nitpicking should happen agent-to-agent; human review is reserved for final approval, not for catching lint errors.
## Retry Logic (after review loop exhaustion)
- 1st retry: Re-spawn agent with full review history as context
- 2nd retry: Re-spawn with "Try a different approach" + review history
- Final failure: Report to user with complete review trail, ask whether to continue or abort
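That escalation ladder might look like the following sketch. The "agent" here is a stub that succeeds on its second retry, so the example runs standalone:

```shell
MAX_RETRIES=2

# Stub: pretend the agent only succeeds once the attempt counter reaches 2.
fake_agent() { [ "$1" -ge 2 ]; }

spawn_with_retry() {
  local attempt=0
  until fake_agent "$attempt"; do
    attempt=$((attempt + 1))
    if [ "$attempt" -gt "$MAX_RETRIES" ]; then
      echo "final failure: report full review trail, ask user continue/abort"
      return 1
    fi
    case "$attempt" in
      1) echo "retry 1: re-spawn with full review history" ;;
      2) echo "retry 2: re-spawn with 'try a different approach' + history" ;;
    esac
  done
  echo "task completed"
}

spawn_with_retry
```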
## Clarification Debt (CD) Monitoring

Track user corrections during session execution. See `../_shared/session-metrics.md` for the full protocol.
### Event Classification
When user sends feedback during session:
- clarify (+10): User answering agent's question
- correct (+25): User correcting agent's misunderstanding
- redo (+40): User rejecting work, requesting restart
### Threshold Actions
| Condition | Action |
|---|---|
| CD >= 50 | RCA Required: QA agent must add entry to lessons-learned.md |
| CD >= 80 | Session Pause: Request user to re-specify requirements |
| redo >= 2 | Scope Lock: Request explicit allowlist confirmation before continuing |
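The score is additive per event. A small sketch of the bookkeeping, with state kept in shell variables and the thresholds checked in an assumed priority order (scope lock first):

```shell
cd_score=0
redo_count=0

record_event() {
  case "$1" in
    clarify) cd_score=$((cd_score + 10)) ;;
    correct) cd_score=$((cd_score + 25)) ;;
    redo)    cd_score=$((cd_score + 40)); redo_count=$((redo_count + 1)) ;;
  esac
}

threshold_action() {
  if [ "$redo_count" -ge 2 ]; then echo "scope-lock"
  elif [ "$cd_score" -ge 80 ]; then echo "session-pause"
  elif [ "$cd_score" -ge 50 ]; then echo "rca-required"
  else echo "none"
  fi
}

record_event correct   # +25
record_event correct   # +25 -> CD = 50
threshold_action
```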
### Recording
After each user correction event:
`[EDIT]("session-metrics.md", append event to Events table)`
At session end, if CD >= 50:
- Include CD summary in final report
- Trigger QA agent RCA generation
- Update `lessons-learned.md` with prevention measures
## References

- Prompt template: `resources/subagent-prompt-template.md`
- Memory schema: `resources/memory-schema.md`
- Config: `config/cli-config.yaml`
- Scripts: `scripts/spawn-agent.sh`, `scripts/parallel-run.sh`, `scripts/verify.sh`
- Task templates: `templates/`
- Skill routing: `../_shared/skill-routing.md`
- Verification: `scripts/verify.sh <agent-type>`
- Session metrics: `../_shared/session-metrics.md`
- API contracts: `../_shared/api-contracts/`
- Context loading: `../_shared/context-loading.md`
- Difficulty guide: `../_shared/difficulty-guide.md`
- Reasoning templates: `../_shared/reasoning-templates.md`
- Clarification protocol: `../_shared/clarification-protocol.md`
- Context budget: `../_shared/context-budget.md`
- Lessons learned: `../_shared/lessons-learned.md`