delegate
Delegation Template
Workflow Reference: See Multi-Agent Orchestration for the complete delegation flow with DONE/BLOCKED signaling.
Step 1: Analyze the task. Do you have the "WHERE, WHAT, WHY"?
Step 2: Construct the prompt using the template below.
Template
Your ROLE_TYPE is sub-agent.
[Task Identification - one sentence]
OBSERVATIONS:
- [Factual observations already in your context]
- [Verbatim error messages if applicable]
- [Environment or system state if relevant]
DEFINITION OF SUCCESS:
- [Specific measurable outcome]
- [Acceptance criteria]
- [Verification method]
CONTEXT:
- Location: [Where to look]
- Scope: [Boundaries]
- Constraints: [Hard requirements vs Preferences]
ECOSYSTEM CONTEXT:
- [Session-specific facts the agent cannot find in CLAUDE.md or tool descriptions]
- [Authenticated CLIs, non-obvious doc locations, task-specific access]
YOUR TASK:
1. Run /verify (as completion criteria guide)
2. Perform comprehensive context gathering
3. Form hypothesis → Experiment → Verify
4. Implement solution
5. Only report completion after /verify criteria are met
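To make the assembly mechanical, here is a minimal Python sketch of how an orchestrator might render this template into a prompt string. All field names and the sample task are invented for illustration; the skill itself only prescribes the prompt text above.

```python
from dataclasses import dataclass, field

@dataclass
class Delegation:
    """Mirrors the template sections; field names are illustrative."""
    task: str                # one-sentence task identification
    observations: list[str]  # facts already in orchestrator context
    success: list[str]       # measurable outcomes and verification
    location: str
    scope: str
    constraints: str
    ecosystem: list[str] = field(default_factory=list)  # session-specific facts

    def render(self) -> str:
        lines = [
            "Your ROLE_TYPE is sub-agent.",
            self.task,
            "OBSERVATIONS:",
            *[f"- {o}" for o in self.observations],
            "DEFINITION OF SUCCESS:",
            *[f"- {s}" for s in self.success],
            "CONTEXT:",
            f"- Location: {self.location}",
            f"- Scope: {self.scope}",
            f"- Constraints: {self.constraints}",
        ]
        if self.ecosystem:
            lines += ["ECOSYSTEM CONTEXT:", *[f"- {e}" for e in self.ecosystem]]
        lines += [
            "YOUR TASK:",
            "1. Run /verify (as completion criteria guide)",
            "2. Perform comprehensive context gathering",
            "3. Form hypothesis → Experiment → Verify",
            "4. Implement solution",
            "5. Only report completion after /verify criteria are met",
        ]
        return "\n".join(lines)

# Example (task, paths, and error text invented):
prompt = Delegation(
    task="Fix the failing login integration test.",
    observations=['pytest (verbatim): "AssertionError: expected 302, got 500"'],
    success=["tests/test_login.py passes", "No other tests regress"],
    location="services/auth/",
    scope="Auth service only",
    constraints="Hard: no DB schema changes. Preference: keep existing session API.",
).render()
```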
Authoring guidance (for the orchestrator filling in this template; do not include these annotations in the delivered prompt):
- OBSERVATIONS: Pass-through only: data already in your context (user messages, prior agent reports, command outputs you already received). Include file:line references if already known. Include verbatim error messages, not paraphrased. Do NOT pre-gather data for the agent (e.g., don't run `ruff check .` before delegating to a linting agent). Do NOT read, grep, or glob files to find context for the agent: the agent has full tool access and an empty context window; it does its own discovery. No interpretations ("I think"), no assumptions ("probably"). SOURCE: agent-orchestration SKILL.md, Pre-Delegation Verification Checklist section.
- DEFINITION OF SUCCESS: The "WHAT". Measurable outcomes the agent can verify. When the agent will produce more than ~1 line of output, instruct it to write results to a file and return only the path; this keeps orchestrator context lean. Example: `Write findings to .claude/reports/NAME-YYYYMMDD.md. Return: STATUS: DONE + file path.`
- CONTEXT: The "WHERE" and "WHY". Location narrows scope; constraints bound the solution space.
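On the orchestrator side, the "return only a path" convention pairs with the DONE/BLOCKED signaling referenced above. A minimal sketch, assuming the agent's final line looks like `STATUS: DONE <report-path>` or `STATUS: BLOCKED <reason>`; the exact wire format here is an assumption:

```python
import re
from pathlib import Path

# Assumed return format: "STATUS: DONE <path>" or "STATUS: BLOCKED <reason>".
RETURN_RE = re.compile(r"^STATUS:\s*(DONE|BLOCKED)\s*(.*)$")

def handle_agent_return(final_line: str) -> str:
    """Read the report file on DONE; surface the reason on BLOCKED."""
    match = RETURN_RE.match(final_line.strip())
    if match is None:
        raise ValueError(f"Unrecognized agent return: {final_line!r}")
    status, rest = match.groups()
    if status == "DONE":
        # Only the path crossed into orchestrator context; the full report
        # is read on demand, keeping the context window lean.
        return Path(rest).read_text()
    return f"Agent blocked: {rest}"
```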
Delegation Rules
Check before sending:
| Rule | Check |
|---|---|
| Formula | Delegation = Observations + Success Criteria + Resources - Assumptions - Micromanagement |
| No HOW | Do NOT tell agent how to implement (e.g., "Change line 42 to X") |
| Constraints OK | DO tell agent constraints (e.g., "Must use the 'requests' library") |
| No Assumptions | Do NOT say "The issue is probably..." |
| Full Scope | If code smell found, instruct agent to audit entire pattern, not single instance |
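A before/after pair showing the rules in practice (task, file names, and error text are all invented):

```
BAD (prescribes HOW, states an assumption as fact):
Fix the bug in utils/db.py. The issue is probably the connection
pool; change line 42 to use a timeout.

GOOD (observations + success criteria + constraints, full scope):
Your ROLE_TYPE is sub-agent.
Resolve the intermittent database connection timeouts.
OBSERVATIONS:
- Nightly batch log (verbatim): "psycopg2.OperationalError: connection timed out"
- Interactive traffic is unaffected
DEFINITION OF SUCCESS:
- Nightly batch job completes without connection errors
- If the failure is one instance of a repeated pattern, all instances are audited and fixed
CONTEXT:
- Location: utils/db.py and its callers
- Constraints: Hard: keep psycopg2. Preference: no new dependencies.
```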
Quick Checklist
- Starts with `Your ROLE_TYPE is sub-agent.`
- Contains only factual observations
- No assumptions stated as facts
- Defines WHAT and WHY, not HOW
- Lists resources without prescribing tools
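This checklist lends itself to a mechanical pre-send gate. A minimal sketch, assuming the prompt is a plain string; the assumption-marker list is illustrative, not exhaustive:

```python
REQUIRED_SECTIONS = ("OBSERVATIONS:", "DEFINITION OF SUCCESS:",
                     "CONTEXT:", "YOUR TASK:")  # ECOSYSTEM CONTEXT: is optional
ASSUMPTION_MARKERS = ("probably", "i think", "it seems", "most likely")

def check_delegation(prompt: str) -> list[str]:
    """Return checklist violations; an empty list means the prompt passes."""
    problems = []
    if not prompt.startswith("Your ROLE_TYPE is sub-agent."):
        problems.append("Must start with 'Your ROLE_TYPE is sub-agent.'")
    for section in REQUIRED_SECTIONS:
        if section not in prompt:
            problems.append(f"Missing section: {section}")
    lowered = prompt.lower()
    for marker in ASSUMPTION_MARKERS:
        if marker in lowered:
            problems.append(f"Assumption language found: {marker!r}")
    return problems
```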