# Delegation — Agent Orchestration & Parallelization
Auto-invoked by the Algorithm when work can be parallelized or requires agent specialization.
## 🚨 CRITICAL ROUTING — Two COMPLETELY Different Systems
| User Says | System | Tool | What Happens |
|---|---|---|---|
| "custom agents", "specialized agents", "spin up agents", "launch agents" | Agents Skill (ComposeAgent) | `Task(subagent_type="general-purpose", prompt=<ComposeAgent output>)` | Unique personalities, voices, colors via trait composition |
| "create an agent team", "agent team", "swarm" | Claude Code Teams | `TeamCreate` → `TaskCreate` → `SendMessage` | Persistent team with shared task list, message coordination, multi-turn collaboration |
These are NOT the same thing:
- Custom agents = one-shot parallel workers with unique identities, launched via `Task()`, no shared state
- Agent teams = persistent coordinated teams with shared task lists, messaging, and multi-turn collaboration via `TeamCreate`
## When the Algorithm Should Use This Skill
- 3+ independent workstreams exist at Extended+ effort level
- Multiple identical non-serial tasks need parallel execution
- Specialized expertise needed (architecture design, implementation, ISC optimization)
- Large codebase changes spanning 5+ files benefit from parallel workers
- Research + execution can proceed simultaneously
- "Create an agent team" — use TeamCreate for persistent coordinated teams
- Unattended autonomous work where auditability matters more than speed — spawn an Observer team (Agents skill → SPAWNOBSERVERS) alongside the primary agent, reading the tool-activity audit log, voting continue/halt/escalate. ONLY use when BOTH (a) time is not a constraint and (b) auditability is the primary requirement. Never for interactive or time-sensitive work. See Agents/SKILL.md "Observer Team Archetype" for shape and guardrails.
## Delegation Patterns
### 1. Built-In Agents
⚠️ Built-in agents are for internal workflow routing ONLY. When the user asks for custom, specialized, or uniquely-voiced agents, use the Agents skill (see Custom Agents below) instead.
Use Task(subagent_type="AgentType") with these specialized agents:
| Agent Type | Specialization | When to Use |
|---|---|---|
| Engineer | TDD implementation, code changes | Code-heavy tasks requiring tests |
| Architect | System design, structure decisions | Architecture planning, design specs |
| Algorithm | ISC optimization, criteria work | ISC-specialized verification |
| Explore | Fast codebase search | Quick file/pattern discovery |
| Plan | Implementation strategy | Design before execution |
Always include: Full context, effort budget, expected output format.
### 2. Worktree-Isolated Agents
Run agents in their own git worktree with `isolation: "worktree"` for file-safe parallelism:

```
Task(subagent_type="Engineer", isolation: "worktree", prompt="...")
```
- Each agent gets its own working tree — no file conflicts with other agents
- Worktree auto-created on spawn, auto-cleaned when agent finishes (unless changes made)
- Use when multiple agents edit the same files or for competing approaches
- Can combine with `run_in_background: true` for non-blocking isolated work
- Built-in agents with `isolation: worktree` in frontmatter (Engineer, Architect) auto-isolate on every spawn
### 3. Background Agents
Run agents with `run_in_background: true` for non-blocking parallel work:

```
Task(subagent_type="Engineer", run_in_background: true, prompt="...")
```
- Use when results aren't needed immediately
- Check output with
Readtool on the output_file path - Ideal for: research, long builds, parallel investigations
### 4. Foreground Agents
Standard `Task()` calls that block until complete:
- Use when you need the result before proceeding
- Use for sequential dependencies
- Default mode — most common
### 5. Custom Agents (via Agents Skill)
Trigger: "custom agents", "spin up agents", "launch agents", "specialized agents"
Action: Invoke the Agents skill → run ComposeAgent.ts → launch with Task(subagent_type="general-purpose")
```
# Step 1: Compose agent identity
bun run ~/.claude/skills/Agents/Tools/ComposeAgent.ts --traits "security,skeptical,thorough" --task "Review auth" --output json

# Step 2: Launch with composed prompt
Task(subagent_type="general-purpose", prompt=<ComposeAgent JSON .prompt field>)
```
- Each agent gets unique personality, voice, and color via ComposeAgent
- Use DIFFERENT trait combinations for each agent to get unique voices
- Never use built-in agent types (Engineer, Architect) for custom work
- Ideal for: domain experts, adversarial reviewers, creative brainstormers, parallel analysis
### 6. Agent Teams (via TeamCreate)
Trigger: "create an agent team", "agent team", "swarm", "team of agents"
Action: Use TeamCreate tool → TaskCreate → spawn teammates via Task(team_name=...) → coordinate via SendMessage
```
1. TeamCreate(team_name="my-project")                                           # Creates team + task list
2. TaskCreate(subject="Implement auth module")                                  # Create team tasks
3. Task(subagent_type="Engineer", team_name="my-project", name="auth-engineer") # Spawn teammate
4. TaskUpdate(taskId="1", owner="auth-engineer")                                # Assign task
5. SendMessage(type="message", recipient="auth-engineer", content="...")        # Coordinate
```
This is a COMPLETELY DIFFERENT system from custom agents:
- Custom agents (Agents skill) = fire-and-forget parallel workers, no shared state
- Agent teams (TeamCreate) = persistent coordinated teams with shared task lists, messaging, multi-turn
Team Guidelines:
- Use for 3+ independently workable criteria at Extended+
- Large complex coding tasks benefit most
- Each teammate works independently on assigned tasks via shared task list
- Parent coordinates via `SendMessage`, reconciles results
- Teammates go idle between turns — send messages to wake them
## When to Use Teams vs Subagents (Decision Matrix)
| Factor | Subagents (Task) | Agent Teams (TeamCreate) |
|---|---|---|
| Communication | Fire-and-forget, no peer messaging | Persistent messaging between teammates |
| Context | Fresh context each spawn, limited window | Full context window per teammate, preserved across turns |
| Coordination | Parent collects results, no shared state | Shared task list, direct peer DMs, idle/wake cycle |
| Duration | Single-turn execution | Multi-turn, iterative work with course corrections |
| Overhead | Low — spawn and forget | Higher — team setup, task creation, message routing |
| Best for | Parallel research, one-shot analysis, simple delegation | Complex multi-file changes, iterative debugging, cross-layer coordination |
Decision rule: If agents need to talk to each other or iterate on shared work → Teams. If each agent does independent one-shot work → Subagents.
Concrete examples:
- "Research 4 topics in parallel" → Subagents (independent, no coordination needed)
- "Build a feature spanning API + UI + tests with shared state" → Teams (cross-layer, needs coordination)
- "Run 10 file updates with same pattern" → Subagents (parallel, identical, independent)
- "Debug a complex issue with competing hypotheses" → Teams (need to share findings, adjust approach)
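The decision rule above can be sketched as a small shell helper. This is illustrative only — the argument names are hypothetical signals, not real tool parameters:

```shell
# Illustrative sketch of the Teams-vs-Subagents decision rule.
# Args: <independent_workstreams> <needs_peer_messaging yes|no> <multi_turn yes|no>
choose_delegation() {
  local workstreams=$1 peer_msgs=$2 multi_turn=$3
  if [ "$workstreams" -lt 3 ]; then
    echo "direct"      # too little parallelism to pay coordination overhead
  elif [ "$peer_msgs" = "yes" ] || [ "$multi_turn" = "yes" ]; then
    echo "teams"       # agents must talk to each other or iterate on shared work
  else
    echo "subagents"   # independent one-shot work
  fi
}

choose_delegation 4 no no     # "Research 4 topics in parallel" → subagents
choose_delegation 3 yes yes   # "Feature spanning API + UI + tests" → teams
```

The same three signals (workstream count, peer communication, multi-turn iteration) drive every row of the matrix above.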
### 7. Parallel Task Dispatch
For N identical operations (e.g., updating 10 files with the same pattern):
- Create N `Task()` calls in a single message (parallel launch)
- Each agent gets one unit of work
- Results collected when all complete
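For example, dispatching the same edit to three files is three `Task()` calls issued in one message (file paths are illustrative):

```
# One message, three parallel Task() calls — one unit of work per agent
Task(subagent_type="Engineer", prompt="Apply the rename pattern to src/api/users.ts. <full context>")
Task(subagent_type="Engineer", prompt="Apply the rename pattern to src/api/orders.ts. <full context>")
Task(subagent_type="Engineer", prompt="Apply the rename pattern to src/api/billing.ts. <full context>")
```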
## Effort-Level Scaling
| Effort | Delegation Strategy |
|---|---|
| Instant/Fast | No delegation — direct tools only |
| Standard | 1-2 foreground agents max for discrete subtasks |
| Extended | 2-4 agents, background agents for research |
| Advanced | 4-8 agents, agent teams for 3+ workstreams |
| Deep | Full team orchestration, parallel workers |
| Comprehensive | Unbounded — teams + parallel + background |
## Two-Tier Delegation (Lightweight vs Full)
Not all delegation needs a full agent. Match delegation weight to task complexity:
### Lightweight Delegation
For: One-shot extraction, classification, summarization, simple Q&A against provided content.
```
Task(subagent_type="general-purpose", model="haiku", max_turns=3, prompt="...")
```
- Use `model="haiku"` for cost/speed efficiency
- Set `max_turns=3` — if it can't finish in 3 turns, it needs full delegation
- Provide all input inline in the prompt (no tool use expected)
- Examples: "Classify this text as X/Y/Z", "Extract the 5 key points from this", "Summarize this in 2 sentences"
### Full Delegation
For: Multi-step reasoning, tasks requiring tool use (file reads, searches, web), tasks that need their own iteration loop.
```
Task(subagent_type="general-purpose", prompt="...")  # or specialized agent type
```
- Default model (sonnet/opus inherited from parent)
- No max_turns restriction — agent iterates until done
- Agent uses tools autonomously (Read, Grep, Bash, etc.)
- Examples: "Research X and produce a report", "Refactor these 5 files", "Debug why test Y fails"
### Decision Rule
Ask: "Can this be answered in one LLM call with no tool use?" → Lightweight. Otherwise → Full.
| Signal | Tier |
|---|---|
| Input fits in prompt, output is extraction/classification | Lightweight |
| Needs to read files, search, or browse | Full |
| Needs iteration or self-correction | Full |
| Simple transform of provided content | Lightweight |
| Requires domain expertise + research | Full |
Why this matters: Spawning a full agent for a one-shot extraction wastes ~10-30s of startup overhead and unnecessary context. Lightweight delegation returns in 2-5s. Over an Extended+ Algorithm run with 10+ delegations, this saves minutes. Inspired by RLM's llm_query() vs rlm_query() two-tier pattern (Zhang/Kraska/Khattab 2025).
## Anti-Patterns (Don't Do These)
- Don't delegate what Grep/Glob/Read can do in <2 seconds
- Don't spawn agents for single-file changes
- Don't create teams for fewer than 3 independent workstreams
- Don't send agents work without full context — they start fresh
- Don't use built-in agent names for custom agents
- Don't use built-in agent types (Designer, Architect, Engineer) when user asks for specialized or custom agents — always use ComposeAgent via the Agents skill
- Don't use full delegation for one-shot extraction/classification — use lightweight tier
## Gotchas
- Agent teams use Claude Code's built-in `TeamCreate` — NOT the Agents skill's ComposeAgent. These are different systems.
- 3+ independent workstreams warrant delegation. For 1-2 tasks, direct work is faster than team coordination overhead.
- Agent teams share a task list. Use `TaskCreate`/`TaskUpdate` for coordination, not ad-hoc messages.
- Teams are overkill for single-file tasks. (Mar 2026 reflection: "one agent that can both read code and write JSX is better than three specialists who can't coordinate")
## Examples
Example 1: Parallel implementation
User: "build the frontend and backend in parallel"
→ Creates team via TeamCreate
→ Spawns frontend and backend agents
→ Shared task list for coordination
→ Agents work independently, merge results
Example 2: Research swarm
User: "launch an agent team to research these 5 topics"
→ Creates team with 5 research agents
→ Each agent handles one topic independently
→ Results synthesized by team lead
## Execution Log
After completing any workflow, append a single JSONL entry:
```
echo '{"ts":"'$(date -u +%Y-%m-%dT%H:%M:%SZ)'","skill":"Delegation","workflow":"WORKFLOW_USED","input":"8_WORD_SUMMARY","status":"ok|error","duration_s":SECONDS}' >> ~/.claude/PAI/MEMORY/SKILLS/execution.jsonl
```
Replace WORKFLOW_USED with the workflow executed, 8_WORD_SUMMARY with a brief input description, and SECONDS with approximate wall-clock time. Log status: "error" if the workflow failed.
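A filled-in entry for a successful parallel-dispatch run might look like this (all values are illustrative; in real use the line is appended to the log path above rather than printed):

```shell
# Illustrative execution-log entry (workflow name and values are made up)
ts=$(date -u +%Y-%m-%dT%H:%M:%SZ)
entry='{"ts":"'$ts'","skill":"Delegation","workflow":"ParallelDispatch","input":"rename pattern applied across ten api files","status":"ok","duration_s":42}'
echo "$entry"   # real use: append with >> ~/.claude/PAI/MEMORY/SKILLS/execution.jsonl
```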