# Subagent Refactoring Methodology
## Analysis Criteria
### Structural Issues
- Are instructions explicit and unambiguous, or do they use vague qualifiers ("try to", "might", "consider")?
- Is there a clear hierarchical structure (markdown headers) with logical flow: role → responsibilities → process → output?
- Are concerns properly separated (instructions vs examples vs data)?
- Are any required sections missing: role definition, process steps, output format, boundaries?
### Model Optimization
- Sonnet (default): cost-optimized, supports parallel tool execution; make effort calibration explicit
- Opus (upgrade only with observed evidence of complexity): complex coding, multi-step agents, computer use
- Constitutional AI patterns: self-critique loops, validation checkpoints before output; prefer principles-based over rules-based instructions; enforce evidence-based reasoning
- XML usage: strategic tagging for specific sections only — NOT full document conversion
### Instruction Quality
- STRONG imperatives: MUST, ALWAYS, NEVER, REQUIRED, FORBIDDEN
- WEAK qualifiers to eliminate: "try to", "should", "consider", "might"
- ACTIVE: "Generate X"
- PASSIVE: "X should be generated" ← eliminate
- CONCRETE: "Include exactly 3 examples with code blocks"
- VAGUE: "Include some examples" ← eliminate
Check for contradictory instructions. When instructions conflict, Claude prioritizes the `system` parameter and Constitutional AI principles.
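Weak qualifiers and passive constructions can be flagged mechanically before a manual pass. A minimal sketch — the phrase list and function name are illustrative, not part of any official tooling:

```python
import re

# Phrases that signal weak instructions (illustrative list, extend as needed)
WEAK_QUALIFIERS = ["try to", "should", "consider", "might"]
# Rough passive-voice pattern: "should/may/must be <verb>ed"
PASSIVE_PATTERN = re.compile(r"\b(?:should|may|must)\s+be\s+\w+ed\b", re.IGNORECASE)

def find_weak_instructions(prompt: str) -> list[str]:
    """Return one finding per weak qualifier or passive form, with its line number."""
    findings = []
    for lineno, line in enumerate(prompt.splitlines(), start=1):
        lowered = line.lower()
        for phrase in WEAK_QUALIFIERS:
            if phrase in lowered:
                findings.append(f"line {lineno}: weak qualifier '{phrase}'")
        if PASSIVE_PATTERN.search(line):
            findings.append(f"line {lineno}: passive construction")
    return findings
```

For example, `find_weak_instructions("The file should be read")` reports both the weak qualifier and the passive form on line 1.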
## Transformation Patterns
VAGUE → EXPLICIT:
"Try to use examples" → "MUST include minimum 2 examples with full code blocks"
"Should consider error handling" → "ALWAYS validate inputs; NEVER proceed with invalid data"
PASSIVE → ACTIVE:
"The file should be read" → "READ the file using the Read tool"
"Analysis may be needed" → "ANALYZE [specific aspect] using [specific methodology]"
AMBIGUOUS → QUANTIFIED:
"Some details" → "Minimum 3 specific details with examples"
"Brief description" → "1-2 sentence description, maximum 50 words"
## Correct Agent Structure Pattern
# Role and Objective
You are a [specific role]. Your mission is [clear, singular objective].
## Constraints
You MUST NOT:
- [Explicit limitation]
## Process Steps
<process>
<step_1>Analyze requirements</step_1>
<step_2>Design solution</step_2>
<step_3>Generate implementation</step_3>
<step_4>Validate output</step_4>
</process>
## Output Format
[Format specification with placeholders]
## Examples
<examples>
<example id="1">
<input>[Exact input]</input>
<output>[Complete output in exact format]</output>
<rationale>[Official source supporting this pattern]</rationale>
</example>
</examples>
KEY: markdown headers for structure, XML tags strategically for specific sections (process steps, examples), NOT wrapping the entire agent.
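Whether a refactored agent follows this skeleton can be verified with a simple header scan. A sketch assuming the section names used above; the function is hypothetical, not an official validator:

```python
import re

# Section headers the structure pattern above requires (header level is ignored)
REQUIRED_SECTIONS = [
    "Role and Objective",
    "Constraints",
    "Process Steps",
    "Output Format",
    "Examples",
]

def missing_sections(agent_text: str) -> list[str]:
    """Return required section names that have no markdown header in agent_text."""
    headers = re.findall(r"^#{1,6}\s+(.*)$", agent_text, flags=re.MULTILINE)
    return [name for name in REQUIRED_SECTIONS if name not in headers]
```

An empty return value means the skeleton is complete; anything else names the sections to add.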
## Tool Selection
For each tool in an agent's list, ask: "Would the agent fail without this tool?" If no, remove it.
- File reading/analysis: Read, Grep, Glob
- File creation: Write, Edit
- Research/documentation: WebSearch, WebFetch, MCP Ref tools
- Code operations: Read, Write, Edit, Bash
- Orchestration: Task, TodoWrite
Prefer specific tools over generic (Grep over Bash for search).
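The category-to-tool mapping above can be expressed as data, so an agent's tool list is derived from its declared task categories rather than hand-picked. A sketch — the mapping mirrors the list above, and the function names are illustrative:

```python
# Minimal tool sets per task category, mirroring the mapping above
TOOLS_BY_CATEGORY = {
    "file_analysis": {"Read", "Grep", "Glob"},
    "file_creation": {"Write", "Edit"},
    "research": {"WebSearch", "WebFetch"},
    "code_ops": {"Read", "Write", "Edit", "Bash"},
    "orchestration": {"Task", "TodoWrite"},
}

def minimal_tool_set(categories: list[str]) -> set[str]:
    """Union of tools needed for the given categories; unknown categories raise KeyError."""
    tools: set[str] = set()
    for category in categories:
        tools |= TOOLS_BY_CATEGORY[category]
    return tools

def unjustified_tools(agent_tools: set[str], categories: list[str]) -> set[str]:
    """Tools the agent lists that no declared category requires — removal candidates."""
    return agent_tools - minimal_tool_set(categories)
```

Any tool reported by `unjustified_tools` fails the "would the agent fail without this tool?" test by construction.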
## Output Format Specification
Deliver three artifacts:
1. Analysis report
# Subagent Refactoring Analysis: [Agent Name]
## Structural Issues Identified
- [Issue with specific example from original]
## Model Optimization Opportunities
- [Opportunity with citation to official source]
## Instruction Quality Issues
- [Issue: quote original instruction, explain problem]
## Research Citations
1. [Source URL] — [Key finding applied]
2. Refactored agent file
## Changes Summary
Major Structural Changes:
1. [Change] — [Rationale with citation]
Instruction Improvements:
- [X vague phrases replaced with imperatives]
- [Y examples added]
- [Z tools removed]
<new_agent_file>
[Complete agent file]
</new_agent_file>
3. Validation checklist — confirm all items before delivery:
- Role defined in one sentence
- Output specified with verifiable form
- All instructions use MUST/NEVER/ALWAYS
- No vague qualifiers remain
- Active voice throughout
- Strategic XML applied (not full-document conversion)
- Tool set minimal — each tool has named use case
- Minimum 2 examples included
- All changes cite official Anthropic sources
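The mechanically checkable items in this checklist can be combined into a single pre-delivery gate. A sketch under the assumption that the agent file is plain markdown; the function name and thresholds are illustrative:

```python
import re

def predelivery_checks(agent_text: str) -> dict[str, bool]:
    """Run the mechanically checkable validation items; True means the check passes."""
    lowered = agent_text.lower()
    return {
        # No vague qualifiers remain
        "no_vague_qualifiers": not any(
            phrase in lowered for phrase in ("try to", "consider", "might")
        ),
        # At least one strong imperative is present
        "uses_imperatives": bool(re.search(r"\b(MUST|NEVER|ALWAYS)\b", agent_text)),
        # Minimum 2 examples included
        "min_two_examples": len(re.findall(r"<example\b", agent_text)) >= 2,
        # Strategic XML, not a fully XML-wrapped document
        "not_fully_xml_wrapped": not agent_text.lstrip().startswith("<"),
    }
```

Judgment calls (role clarity, citation quality, active voice) still need the manual review below; the gate only catches the objective failures early.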
## Self-Validation Before Delivery
- Did I consult official Anthropic documentation? → Cite specific URL and finding
- Are ALL recommendations backed by Claude-specific authoritative sources? → List source per major change
- Did I remove, not add, unnecessary complexity? → Justify each addition
- Can someone implement this exactly as written? → Test by reading instructions literally
Anti-patterns to avoid:
- Adding features not requested
- Citing blog posts instead of official documentation
- Applying techniques from outdated model versions
- Converting entire agent to XML format (contradicts Anthropic guidance)
- Adding tools "just in case"