research-curator
<mode_args>$ARGUMENTS</mode_args>
[!IMPORTANT] When provided a process map or Mermaid diagram, treat it as the authoritative procedure: an executable instruction set. Execute steps in the exact order shown, respecting sequence, branches, decision points, conditions, loops, parallel paths, stop conditions, and terminal states. Do not improvise, reorder, or skip steps. If any node is ambiguous or missing required detail, pause and ask a clarifying question before continuing. When interacting with a user, report the interpreted path you will follow from the diagram before acting, then execute.
Research Curator -- Multi-Mode Orchestrator
Orchestrate research entry creation, maintenance, and validation in ./research/. Spawns @research-curator agents for content work; handles coordination, README updates, and post-actions.
Mode Routing
Parse <mode_args/> to select operating mode. Optional --layer 0|1|2 filters discovery by SDLC layer when used with knowledge-explorer or refresh-research.
The following diagram is the authoritative procedure for mode routing. Execute steps in the exact order shown, including branches, decision points, and stop conditions.
flowchart TD
Start(["Parse <mode_args/>"]) --> Q1{"Does <mode_args/> contain --batch?"}
Q1 -->|"Yes — batch flag present"| Q1Layer{"Does <mode_args/> also contain --layer 0, 1, or 2?"}
Q1 -->|"No — batch flag absent"| Q2{"Does <mode_args/> contain --rerun?"}
Q1Layer -->|"Yes — layer filter present"| BatchLayer(["Execute Batch Mode with layer filter applied"])
Q1Layer -->|"No — no layer filter"| Batch(["Execute Batch Mode"])
Q2 -->|"Yes — rerun flag present"| Q2Layer{"Does <mode_args/> also contain --layer 0, 1, or 2?"}
Q2 -->|"No — rerun flag absent"| Q3{"Does <mode_args/> contain --validate?"}
Q2Layer -->|"Yes — layer filter present"| RerunLayer(["Execute Rerun Mode with layer filter applied"])
Q2Layer -->|"No — no layer filter"| Rerun(["Execute Rerun Mode"])
Q3 -->|"Yes — validate flag present"| Validate(["Execute Validate Mode"])
Q3 -->|"No — no flags matched — <mode_args/> contains a URL only"| Default(["Execute Default Mode — single URL"])
Research Directory
Single source of truth: ./research/ (repo-root relative).
Structure:
./research/
README.md # Category tables with all entries
{category}/ # One directory per category
{resource-name}.md # Individual research entries
Category selection follows the flowchart in Entry Template. Create directories as needed.
Agent Result Relay Rules
These rules apply whenever this orchestrator receives results from any @research-curator agent. Violating them corrupts information before it reaches the user.
Rule 1 — Preserve exact counts. When an agent reports numbers, relay those exact numbers.
| Agent says | Relay as | Never relay as |
|---|---|---|
| "7 of 10 found" | "7 of 10 found" | "most found" |
| "3 errors, 2 warnings" | "3 errors, 2 warnings" | "several issues" |
| "0 results" | "0 results" | "nothing relevant" |
Rule 2 — Preserve failure reasons. Relay the specific reason; do not generalize.
| Agent says | Relay as | Never relay as |
|---|---|---|
| "HTTP 403 Forbidden" | "access denied (HTTP 403)" | "not available" |
| "Connection timeout" | "connection timed out" | "doesn't exist" |
| "File not found at path X" | "file not found at X" | "no such file" |
| "Rate limited" | "rate limited" | "unavailable" |
Rule 3 — Reference files instead of re-summarizing. When an agent wrote a file, include its path in the relay.
Rule 4 — Relay structure, not interpretation. When an agent returns a STATUS/ARTIFACTS/WARNINGS block, preserve that structure. Do not flatten it into a single sentence.
Rule 5 — Distinguish observations from conclusions. "Config has no timeout field" (observation) is different from "timeout defaults to 30s" (agent's conclusion). Keep them distinct.
Pre-Relay Quality Checklist
Before reporting results to the user after any mode completes, verify:
- All numbers from agent output are preserved in relay
- All failure reasons are preserved verbatim (not generalized)
- File paths are included if agent wrote output files
- "Not found" has not been upgraded to "doesn't exist"
- "Inaccessible" has not been upgraded to "unavailable" or "nonexistent"
- Structured sections (STATUS, ARTIFACTS, WARNINGS) are preserved
- Agent observations are distinguished from agent conclusions
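The first checklist item can be enforced mechanically by confirming that every number in the agent's output also appears in the drafted relay. This helper is a hypothetical illustration of that check, not part of the skill:

```python
import re

def missing_counts(agent_output: str, relay: str) -> list[str]:
    """Return numbers present in the agent output but absent from the
    relay. An empty list means exact counts were preserved."""
    agent_nums = set(re.findall(r"\d+", agent_output))
    relay_nums = set(re.findall(r"\d+", relay))
    return sorted(agent_nums - relay_nums)
```

For example, relaying "3 errors, 2 warnings" as "several issues" would surface both dropped counts.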
<default_mode>
Default Mode -- Single URL
Trigger: <mode_args/> contains a URL with no flags.
Workflow
1. Parse -- extract the URL from `<mode_args/>`
2. Spawn agent -- invoke `@research-curator` via the Agent tool with the URL. Agent tool parameters:
   - agent: `.claude/agents/research-curator.md`
   - prompt: `"Research and create an entry for: {URL}"`
3. Wait for the structured result (status, file path, category, key findings)
4. Apply relay rules -- verify the pre-relay checklist before proceeding
5. Spawn four tasks concurrently -- if research status is not `failed`:
   a. agent: `.claude/agents/research-insight-extractor.md`, prompt: `"Extract improvements from {file-path-from-agent-result}"`
   b. agent: `.claude/agents/research-utilization-assessor.md`, prompt: `"Assess utilization opportunities from {file-path-from-agent-result}"`
   c. agent: `.claude/agents/research-cross-referencer.md`, prompt: `"Add cross-references to {file-path-from-agent-result}"`
   d. Update `./research/README.md` -- add the new entry to its category table
6. Wait for all four tasks and surface results -- collect structured return blocks from all three agents and confirm the README updated:
   - Insight: if the result contains `IMMEDIATE_ATTENTION:`, report each item with `#{issue} {title}` and the one-sentence reason. If there is no `IMMEDIATE_ATTENTION` section, report "N improvements added to backlog from {resource-name}."
   - Utilization: relay the `PROPOSALS_WRITTEN` count and `FILE` path. If `STATUS: no_utilization_surface`, report "No direct utilization surface found."
   - Cross-references: relay the `CROSS_REFERENCES_ADDED` count.
7. Post-actions -- lint, commit, push (see Post-Actions)
Error Handling
- If the agent returns `status: failed`, relay the exact failure reason to the user and stop
- Do not create partial entries or update the README on failure
</default_mode>
<batch_mode>
Batch Mode
Trigger: <mode_args/> contains --batch.
Full workflow defined in Batch Mode reference. Summary below.
URL Parsing
Extract all tokens after --batch matching https?:// as target URLs. Non-URL tokens ignored with warning.
Wave Spawning
Spawn up to 5 @research-curator agents per wave via Agent tool. Wait for all agents in the current wave before spawning the next. After all waves complete, for each successful entry spawn three concurrent agents: @research-insight-extractor, @research-utilization-assessor, and @research-cross-referencer (up to 5 entries processed concurrently — 3 agents each). See Batch Mode reference for the complete wave spawning diagram.
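The wave pattern amounts to splitting the URL list into chunks of five and fully awaiting each chunk before spawning the next. A minimal sketch of the chunking step (the actual agent spawning happens via the Agent tool, not Python):

```python
def waves(urls: list[str], size: int = 5) -> list[list[str]]:
    """Split targets into waves of at most `size` URLs.
    Each wave must complete before the next is spawned."""
    return [urls[i:i + size] for i in range(0, len(urls), size)]
```

Twelve URLs would thus produce three waves of sizes 5, 5, and 2.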
Duplicate Detection
Before spawning, check if ./research/ already contains an entry for the URL's resource.
If found:
- Read the entry's Freshness Tracking section.
- Compute days since Last Verified (integer: today minus the Last Verified date).
- Emit: `Entry is N days old (last verified: YYYY-MM-DD, vX.Y.Z). Proceeding with refresh.`
- Pass `--rerun ./research/{category}/{name}.md` to the agent instead of skipping.
If the Freshness Tracking section is absent or Last Verified is unreadable, emit:
`Entry exists but freshness data unavailable. Proceeding with refresh.`
and pass `--rerun ./research/{category}/{name}.md` to the agent.
Progress Reporting
After each wave, relay exact counts and exact failure reasons from agent output:
Wave N complete: M/N succeeded
created -- category/resource-name.md
refreshed -- category/resource-name.md (was N days old)
failed -- https://url.com -- {exact reason from agent}
After all waves:
Batch complete: X/Y total succeeded
Files created: [list]
README updated: Yes
</batch_mode>
<rerun_mode>
Rerun Mode
Trigger: <mode_args/> contains --rerun.
Re-research existing entries to refresh stale data.
Target Parsing
The following diagram is the authoritative procedure for rerun mode. Execute steps in the exact order shown, including branches, decision points, and stop conditions.
flowchart TD
Start(["Parse --rerun argument value"]) --> Q{"What is the --rerun target value?"}
Q -->|"category/name — single entry path"| VerifyFile{"Does ./research/category/name.md exist?"}
Q -->|"all — re-research every entry"| FindAll["Glob ./research/**/*.md<br>excluding README.md — collect all entry paths"]
VerifyFile -->|"No — file not found"| Missing(["Report error: entry not found at path. Stop."])
VerifyFile -->|"Yes — file exists"| ReadFile["Read ./research/category/name.md<br>extract current content and metadata"]
ReadFile --> Spawn1["Spawn @research-curator via Agent tool<br>prompt: --rerun ./research/category/name.md"]
Spawn1 --> RelayCheck1["Apply pre-relay quality checklist"]
RelayCheck1 --> UpdateDate["Update ./research/README.md<br>refresh freshness date for this entry"]
FindAll --> WaveSpawn["Spawn @research-curator agents in waves of 5<br>each receives --rerun ./research/category/name.md<br>wait for each wave before spawning next"]
WaveSpawn --> RelayCheck2["Apply pre-relay quality checklist<br>to all wave results"]
RelayCheck2 --> UpdateDates["Update ./research/README.md<br>refresh freshness dates for all re-researched entries"]
UpdateDate --> SpawnAnalysis1["Concurrently spawn 3 agents:<br>@research-insight-extractor 'Extract improvements from ./research/category/name.md'<br>@research-utilization-assessor 'Assess utilization opportunities from ./research/category/name.md'<br>@research-cross-referencer 'Add cross-references to ./research/category/name.md'"]
SpawnAnalysis1 --> WaitAnalysis1["Wait for all 3 agents<br>Surface IMMEDIATE_ATTENTION items from insight result<br>Report utilization proposal count<br>Report cross-references added count"]
WaitAnalysis1 --> PostActions(["Execute Post-Actions — lint, commit, push"])
UpdateDates --> SpawnAnalysisN["For each updated entry (concurrent, up to 5 entries)<br>spawn 3 agents per entry:<br>@research-insight-extractor<br>@research-utilization-assessor<br>@research-cross-referencer"]
SpawnAnalysisN --> WaitAnalysisN["Wait for all analysis agents<br>Collect IMMEDIATE_ATTENTION items<br>Report total utilization proposals and cross-references added"]
WaitAnalysisN --> PostActions
Single Entry Rerun
1. Verify `./research/{category}/{name}.md` exists
2. Spawn `@research-curator` via the Agent tool with prompt: `"--rerun ./research/{category}/{name}.md"`
3. Agent reads the existing entry, re-gathers fresh data, and updates content and freshness tracking
4. Apply the pre-relay quality checklist to the agent result
5. Update the README with the refreshed date
6. Concurrently spawn three analysis agents:
   - `@research-insight-extractor` -- "Extract improvements from ./research/{category}/{name}.md"
   - `@research-utilization-assessor` -- "Assess utilization opportunities from ./research/{category}/{name}.md"
   - `@research-cross-referencer` -- "Add cross-references to ./research/{category}/{name}.md"
7. Wait for all three; surface `IMMEDIATE_ATTENTION` items from the insight result; report the utilization proposal count; report the cross-references added count
All Entries Rerun
- Glob `./research/**/*.md`, excluding `README.md`
- Spawn agents in waves of 5 (same pattern as Batch Mode)
- Each agent receives `--rerun ./research/{category}/{name}.md`
- Apply the pre-relay quality checklist after each wave
- Update the README once after all waves complete
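The glob step for the all-entries case can be sketched as:

```python
from pathlib import Path

def rerun_targets(root: str = "./research") -> list[str]:
    """All entry files under the research directory, excluding the
    top-level README (which holds category tables, not entries)."""
    return sorted(
        str(p) for p in Path(root).rglob("*.md") if p.name != "README.md"
    )
```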
</rerun_mode>
<validate_mode>
Validate Mode
Trigger: <mode_args/> contains --validate.
Run structural validation and fix error-severity issues.
What Gets Checked
The validator script (validate_research.py) checks each entry file against the rules in Validation Rules. It emits JSON with three severity levels:
- error -- structural violations that make entries unusable (missing required fields, broken links, malformed frontmatter). Auto-fixed by spawning `@research-curator` with `--fix` and the specific issue list.
- warning -- quality issues that don't break entries (stale dates, thin summaries). Reported to the user; not auto-fixed.
- info -- informational observations (entry age, word count). Reported to the user; no action.
Validation Workflow
The following diagram is the authoritative procedure for validate mode. Execute steps in the exact order shown, including branches, decision points, and stop conditions.
flowchart TD
Start(["Parse --validate argument value"]) --> Q{"What is the --validate target value?"}
Q -->|"category/name — single entry path"| RunScript["Run validate_research.py --json<br>on ./research/category/name.md"]
Q -->|"all — validate every entry"| RunScriptAll["Run validate_research.py --json<br>on ./research/ directory"]
RunScript --> ParseJSON["Parse JSON output<br>Extract issues keyed by severity: error, warning, info<br>Count totals per severity"]
RunScriptAll --> ParseJSON
ParseJSON --> HasErrors{"Does parsed output contain<br>any error-severity issues?"}
HasErrors -->|"Yes — N error-severity issues found"| SpawnFix["Spawn @research-curator agents in waves of 5<br>Each agent receives --fix flag<br>PLUS the exact error list for that entry from JSON output<br>(not a summary — the raw issue text)"]
HasErrors -->|"No — zero error-severity issues"| ReportClean(["Report: all entries passed. Include exact warning and info counts. Stop."])
SpawnFix --> RelayCheck["Apply pre-relay quality checklist<br>to all fix-agent results"]
RelayCheck --> ReportSummary["Report validation summary with exact counts<br>(total scanned, passed, errors fixed, warnings noted, info items)"]
ReportSummary --> PostActions(["Execute Post-Actions — lint, commit, push"])
Script Invocation
uv run .claude/skills/research-curator/scripts/validate_research.py --json ./research/{target}
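Parsing the script's output into severity buckets might look like the sketch below, assuming the validator emits a JSON array of issue objects each carrying at least a `severity` field (the schema is an assumption; check the script's actual output):

```python
import json
from collections import defaultdict

def issues_by_severity(raw_json: str) -> dict[str, list[dict]]:
    """Group validator issues by severity (error, warning, info) so
    exact per-severity counts can be relayed without paraphrase."""
    grouped: dict[str, list[dict]] = defaultdict(list)
    for issue in json.loads(raw_json):
        grouped[issue["severity"]].append(issue)
    return grouped
```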
Fix Agent Delegation
When spawning a fix agent, pass the exact error text from the JSON output — not a paraphrase. The agent receives:
prompt: "--fix ./research/{category}/{name}.md
Issues to fix (from validator JSON):
- {exact issue text from JSON}
- {exact issue text from JSON}"
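Assembling that prompt from parsed validator output could be sketched as follows; the helper name is hypothetical, but the prompt shape matches the template above:

```python
def build_fix_prompt(entry_path: str, issues: list[str]) -> str:
    """--fix prompt carrying the raw issue text verbatim from the
    validator JSON -- never a paraphrase or summary."""
    lines = [f"--fix {entry_path}", "Issues to fix (from validator JSON):"]
    lines += [f"- {issue}" for issue in issues]
    return "\n".join(lines)
```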
Issue Handling
Severity handling per Validation Rules:
- error -- spawn `@research-curator` with the `--fix` flag and the exact issue list extracted from JSON
- warning -- include the exact warning text in the report to the user; do not auto-fix
- info -- include the exact info text in the report; no action needed
For error-severity fixes, spawn agents in waves of 5 (same pattern as Batch Mode).
Summary Report
Report exact counts from the validator JSON output — do not paraphrase:
Validation complete:
Total scanned: N
Passed: N
Errors found: N (M auto-fixed)
Warnings noted: N
Info items: N
</validate_mode>
<post_actions>
Post-Actions
Shared by all modes. Execute after any mode completes successfully.
1. README Update -- add or update entries in `./research/README.md` category tables
2. Lint -- run formatting checks on all modified files:
   `uv run prek run --files ./research/README.md [new-or-modified-files]`
3. Commit -- stage and commit all research and insight changes:
   `git add ./research/`
   `git commit -m "docs(research): [action] [resource names]"`
4. Push -- push to the current branch:
   `git push -u origin HEAD`
Commit message actions by mode:
- Default -- `add {resource-name} research entry`
- Batch -- `add {N} research entries`
- Rerun -- `refresh {resource-name|N entries}`
- Validate -- `fix validation issues in {resource-name|N entries}`
</post_actions>
<output_format>
Output Format
Report to user after any mode completes. All counts and failure reasons MUST be relayed exactly as received from agents — apply the pre-relay quality checklist before writing this output.
Default Mode Output
## Research Entry Created
**Resource**: {name}
**Category**: {category}
**File**: ./research/{category}/{name}.md
**README Updated**: Yes
**Cross-References Added**: N
**Utilization Proposals**: N (file: ./research/insights/YYYY-MM-DD-{name}-utilization.md)
### Key Findings
- Finding 1
- Finding 2
- Finding 3
### Next Review
YYYY-MM-DD
Batch Mode Output
## Batch Research Complete
**Total**: X URLs processed
**Created**: Y new entries
**Refreshed**: Z existing entries
**Failed**: W
### Entries Created
- ./research/{category}/{name}.md
### Entries Refreshed
- ./research/{category}/{name}.md (was N days old, last: YYYY-MM-DD, vX.Y.Z)
### Failures
- {URL} -- {exact reason from agent output}
Rerun Mode Output
## Research Entries Refreshed
**Refreshed**: N entries
**Changes Detected**: M entries had updated data
### Updated Entries
- ./research/{category}/{name}.md -- {what changed}
Validate Mode Output
## Validation Results
**Scanned**: N entries
**Passed**: N
**Errors Fixed**: N
**Warnings**: N
**Info**: N
### Fixes Applied
- ./research/{category}/{name}.md -- {exact issue fixed, from validator JSON}
### Warnings (manual review recommended)
- ./research/{category}/{name}.md -- {exact warning text}
</output_format>
Reference Links
- Entry Template -- standard format for all research entries
- Validation Rules -- checks and severity mapping for `--validate` mode
- Batch Mode -- wave spawning workflow for `--batch` mode
- Agent: `@research-curator` at `.claude/agents/research-curator.md` -- single-entry research executor
- Agent: `@research-insight-extractor` at `.claude/agents/research-insight-extractor.md` -- extracts backlog improvements from research entries
- Agent: `@research-utilization-assessor` at `.claude/agents/research-utilization-assessor.md` -- assesses direct API/service utilization opportunities
- Agent: `@research-cross-referencer` at `.claude/agents/research-cross-referencer.md` -- appends a Cross-References section to research entries
SOURCE: Agent result relay rules and pre-relay checklist adapted from plugins/summarizer/skills/agent-result-relay/SKILL.md (accessed 2026-03-06).