# Skill Research Process

Systematic, scalable approach for building comprehensive Claude Code skills using parallel research agents. Use this when a skill requires extensive documentation gathering from official sources.
## Process Overview

```text
Stage 1: Initialize → Categorization agent creates TODO checklist
        ↓
Gate 1:  Verify categories are distinct and complete
        ↓
Stage 2: Research → Parallel agents populate references/{category}/
        ↓
Gate 2:  Anti-hallucination checkpoint (verify all claims cited)
        ↓
Stage 3: Integrate → Update SKILL.md, validate structure
        ↓
Gate 3:  Final validation (links work, quality standards met)
```
## Pre-Requisites

1. Activate skill-creator for structure guidance: `Skill(skill: "plugin-creator:skill-creator")`
2. Read CLAUDE.md for verification requirements
## Stage 1: Initialize Skill Structure

**Objective:** Create the base skill directory and identify documentation categories.

### Steps

1. Initialize the skill directory:

   ```sh
   plugins/plugin-creator/skills/skill-creator/scripts/init_skill.py <skill-name> --path <output-directory>
   ```

2. Launch the categorization agent (see Agent Prompts).
3. Output: `{skill-name}.TODO.md` with a categorized checklist.
## Quality Gate 1: Category Verification

Before proceeding, verify:

- Categories are distinct (no overlap)
- Each category is specific enough to guide focused research
- 5-10 categories total (fewer for simple tools, more for complex ones)
- Categories cover the tool's full scope

If categories overlap: merge or redefine boundaries before Stage 2.
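The distinctness check can be partially mechanized. The sketch below is an illustrative heuristic only, not part of the skill's tooling; the category slugs are hypothetical, and word overlap is merely a rough proxy for conceptual overlap:

```python
def overlap_score(a: str, b: str) -> float:
    """Jaccard similarity between the word sets of two category slugs."""
    wa, wb = set(a.lower().split("-")), set(b.lower().split("-"))
    return len(wa & wb) / len(wa | wb)

def flag_overlaps(categories, threshold=0.3):
    """Return pairs of category slugs whose names share too many words."""
    return [
        (a, b)
        for i, a in enumerate(categories)
        for b in categories[i + 1:]
        if overlap_score(a, b) >= threshold
    ]

# Hypothetical category list for illustration
cats = ["cli-commands", "configuration", "cli-flags", "api-reference"]
print(flag_overlaps(cats))  # [('cli-commands', 'cli-flags')]
```

Flagged pairs still need human judgment: two categories can share words yet cover distinct scopes, and vice versa.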
## Stage 2: Parallel Category Research

**Objective:** Launch concurrent research agents to build reference documentation.

### Steps

1. Read TODO categories from `{skill-name}.TODO.md`
2. Launch concurrent Task agents (one per category) with `run_in_background: true`
3. Each agent outputs to `./references/{category}/`

See Research Agent Prompt for the template.
### Parallel Execution

Launch all agents in a single message with multiple Task calls:

```text
Agent(subagent_type: "general-purpose", description: "Research Category A", run_in_background: true, ...)
Agent(subagent_type: "general-purpose", description: "Research Category B", run_in_background: true, ...)
```
## Quality Gate 2: Anti-Hallucination Checkpoint

MANDATORY before Stage 3. For each category, verify:

- Every factual claim has a cited source (URL + access date)
- No claims are based on training-data knowledge
- Sources are authoritative (official docs > blogs > forums)
- Code examples come from official sources or have been tested
- Uncertain information is marked explicitly as "unverified"

Required citation format:

```text
According to the official documentation (https://example.com/docs, accessed 2026-02-01), ...
```

If a citation is missing: the research agent must add a source or mark the claim as `NOT_VERIFIED: [claim]`.
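The checkpoint can be partially automated by scanning reference files for the citation format. A minimal sketch, assuming the citation pattern above and treating any paragraph without a citation or an explicit unverified marker as suspect:

```python
import re

# Matches "(https://..., accessed YYYY-MM-DD)" as required by this gate
CITATION = re.compile(r"\(https?://\S+?,\s*accessed\s+\d{4}-\d{2}-\d{2}\)")
MARKERS = ("NOT_VERIFIED:", "unverified")

def uncited_paragraphs(text: str) -> list[str]:
    """Return paragraphs carrying neither a citation nor an explicit
    unverified marker. Headings and blank runs are skipped."""
    flagged = []
    for para in text.split("\n\n"):
        para = para.strip()
        if not para or para.startswith("#"):
            continue
        if CITATION.search(para) or any(m in para for m in MARKERS):
            continue
        flagged.append(para)
    return flagged

sample = (
    "According to the official documentation "
    "(https://example.com/docs, accessed 2026-02-01), the tool supports X.\n\n"
    "The tool probably also supports Y."
)
print(uncited_paragraphs(sample))  # ["The tool probably also supports Y."]
```

A flagged paragraph is not automatically wrong; it simply needs a source added or a `NOT_VERIFIED:` marker before the gate passes.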
## Stage 3: Integration

**Objective:** Update SKILL.md with category links and finalize.

### Steps

1. Update `./SKILL.md` with links to each category's `index.md`
2. Verify the SKILL.md body is ≤5k words
3. Validate the structure
## Quality Gate 3: Final Validation

Run validation:

```sh
plugins/plugin-creator/skills/skill-creator/scripts/package_skill.py <skill-path>
```

Verify:

- All markdown links resolve (use the Read tool to verify)
- All `index.md` files contain working links
- All TODO items from Stage 1 have corresponding reference files
- The skill follows skill-creator guidelines
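The link-resolution check can also be scripted rather than done entirely by hand. A minimal sketch; the regex covers only inline `./`-prefixed links, which is what this process requires, and the scan loop is illustrative:

```python
import re
from pathlib import Path

# Inline markdown links whose target starts with "./"
LINK = re.compile(r"\[[^\]]*\]\((\./[^)#\s]+)")

def broken_links(md_file: Path) -> list[str]:
    """Return ./-relative link targets in md_file that do not exist on disk."""
    text = md_file.read_text(encoding="utf-8")
    return [t for t in LINK.findall(text) if not (md_file.parent / t).exists()]

# Usage sketch: scan every markdown file under a (hypothetical) skill directory
# for md in Path("plugins/my-skill").rglob("*.md"):
#     for target in broken_links(md):
#         print(f"{md}: broken link -> {target}")
```

An empty result for every file satisfies the first two checklist items above; anchors (`#fragment`) and external URLs are deliberately out of scope.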
## Error Recovery

### MCP Tools Unavailable

Fallback strategy when MCP tools are not available:

- **WebFetch + WebSearch:** for discovery and overview
- **GitHub CLI (`gh`):** for repository metadata, issues, and releases
- **Clone + Read:** clone the repo locally and use the Read tool for code analysis

### Research Agent Fails

If an agent fails or times out:

- Check the output file with the Read tool (background agents write to a file)
- Resume with the remaining work if partial results exist
- Re-launch with a narrower scope if it timed out

### Incomplete Source Documentation

If official docs are incomplete:

- Document what IS available, with citations
- Mark gaps explicitly: "Official documentation does not cover [topic]"
- Use GitHub issues/discussions as secondary sources (with citation)
- Never fill gaps with training-data assumptions
## MCP Tool Selection

| Tool | Fidelity | Use When |
|---|---|---|
| WebFetch | Low | Scoping only. NEVER for implementation details |
| `mcp__exa*` | Medium | Code snippets, documentation extraction |
| `mcp__Ref*` | High | Authoritative, verbatim documentation |

See the MCP Tool Usage Guide for details.
## Key Principles

| Principle | Rule |
|---|---|
| Progressive Disclosure | SKILL.md ≤5k words; details in `references/` |
| Parallel Execution | Launch all category agents in a single message |
| Citation Required | Every claim needs a source + access date |
| No Training Data | Only document what sources confirm |
| Relative Paths | All links use the `./` prefix |
## Success Checklist

Before finalizing:

- All quality gates passed (1, 2, 3)
- Every factual claim has a citation with an access date
- No speculation or training-data-based claims
- All links use `./` relative paths
- Categories are distinct (no overlap)
- SKILL.md ≤5k words
- Each category has an `index.md` with working links
- Validation script passes
## Agent Team Alternative for Stage 2

When Stage 2 (category research) involves 3+ independent categories where findings from one category inform or challenge another, consider agent teams instead of sequential subagents.

### When Agent Teams Apply

A category research workflow is a candidate for agent teams when ALL of these are true:

- 3+ independent categories to research (enough parallelism to justify coordination overhead)
- Categories benefit from cross-communication (findings from one category inform or challenge another)
- No shared file mutations (each teammate owns different category files)
- The result is a synthesis, not a concatenation (value comes from combining, deduplicating, or reconciling findings across categories)

### When Subagents Suffice

A category research workflow is NOT a candidate for agent teams when:

- Only 1-2 categories (subagent overhead is lower)
- Categories are fully independent with no cross-communication need
- The result is just collecting N outputs (no synthesis step)
- Work is sequential (each step depends on the previous)

### Reference

See the Agent Teams Documentation for complete criteria, architecture, and usage patterns.
SOURCE: Lines 27-39 of agent-teams.md (accessed 2026-02-06)
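The ALL-of criteria above reduce to a single predicate. A sketch for illustration only; the function and its parameter names are hypothetical, not part of any skill tooling:

```python
def use_agent_team(
    n_categories: int,
    cross_communication: bool,
    shared_file_mutations: bool,
    needs_synthesis: bool,
    sequential: bool,
) -> bool:
    """Agent teams apply only when every criterion holds."""
    return (
        n_categories >= 3
        and cross_communication
        and not shared_file_mutations
        and needs_synthesis
        and not sequential
    )

print(use_agent_team(5, True, False, True, False))   # True
print(use_agent_team(2, False, False, False, False)) # False
```

A single failing criterion is enough to fall back to plain subagents, since coordination overhead is only justified when all conditions hold.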
## References

- Agent Prompt Templates
- MCP Tool Usage Guide
- Gaps Analysis: known limitations and improvement opportunities