parallel-research
Parallel Research Orchestration
A systematic methodology for conducting thorough research using hierarchical multi-agent coordination.
When to Use
- Deep investigation of complex topics
- Research requiring multiple perspectives
- Analysis with potential for conflicting viewpoints
- Exploratory work in unfamiliar domains
- Any research that benefits from devil's advocate review
Agent Hierarchy
1. Lead Agent (Coordinator)
- Decomposes research question into threads
- Assigns threads to sub-agents
- Monitors progress and adjusts strategy
- Coordinates synthesis
2. Sub-Agents (Specialists)
- Execute specific research threads
- Focus deeply on assigned topic
- Report findings with confidence levels
- Identify gaps and uncertainties
3. Critical Review Agent (Devil's Advocate)
- Challenges findings from sub-agents
- Identifies weaknesses in reasoning
- Proposes alternative interpretations
- Stress-tests conclusions
4. Synthesis Agent (Integrator)
- Combines findings across threads
- Resolves conflicts between sources
- Produces coherent narrative
- Highlights remaining uncertainties
Phase 1: Anticipatory Decomposition
Lead Agent Responsibilities:
1. Analyze Research Question
- Identify core question and sub-questions
- Map dependencies between research threads
- Anticipate potential failures and alternative remedies
2. Decompose Into Parallel Threads
- Create 3-6 independent research threads
- Each thread: clear objective, search strategy, success criteria
- Ensure threads cover different perspectives (not redundant)
- Define explicit handoff protocols
3. Launch Sub-Agents
Launch 4 research subagents in parallel:
- Thread 1: [Specific focus and search strategy]
- Thread 2: [Specific focus and search strategy]
- Thread 3: [Specific focus and search strategy]
- Thread 4: [Specific focus and search strategy]

Each agent should return only:
- Key findings
- Evidence quality assessment
- Confidence score
- Conflicting information found
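The launch step above can be sketched as a parallel dispatch. This is a minimal illustration, not a real agent API: `run_research_thread` is a hypothetical stand-in for whatever call actually spawns a sub-agent, and the thread definitions are placeholders.

```python
from concurrent.futures import ThreadPoolExecutor

def run_research_thread(thread):
    # Hypothetical stand-in for a real sub-agent invocation.
    # A real implementation would send the thread's focus and search
    # strategy to an agent and parse its structured reply.
    return {
        "thread": thread["name"],
        "key_findings": [],        # populated by the sub-agent
        "evidence_quality": None,
        "confidence": "low",
        "conflicts": [],
    }

threads = [
    {"name": "Thread 1", "focus": "primary literature"},
    {"name": "Thread 2", "focus": "industry reports"},
    {"name": "Thread 3", "focus": "counterexamples"},
    {"name": "Thread 4", "focus": "recent developments"},
]

# Launch all independent threads at once rather than one-by-one.
with ThreadPoolExecutor(max_workers=len(threads)) as pool:
    results = list(pool.map(run_research_thread, threads))
```

The point of the sketch is structural: independent threads go out together, and the coordinator only sees the compact per-thread result dicts, not each agent's full working context.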
Phase 2: Parallel Research Execution
Research Sub-Agent Instructions:
1. Search Strategy
- Execute assigned searches
- Use progressive disclosure (don't front-load context)
- Cross-reference multiple sources for balance
- Track evidence quality
2. Validation Requirements
- Verify claims against original sources
- Note contradictions or conflicts found
- Assign confidence scores (high/medium/low)
- Flag assumptions or gaps
3. Return Format
Thread [N] Findings:

KEY FINDINGS:
- [Finding 1] (Confidence: high/medium/low)
- [Finding 2] (Confidence: high/medium/low)

EVIDENCE QUALITY:
- [Source type, credibility, date]

CONFLICTS DETECTED:
- [Any contradictions between sources]

GAPS/LIMITATIONS:
- [What wasn't found or remains unclear]
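The return format above is easiest to enforce if each thread's report is a structured record rather than free text. A minimal sketch, with illustrative field names (not part of any real API):

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    text: str
    confidence: str  # "high" | "medium" | "low"

@dataclass
class ThreadReport:
    thread: int
    findings: list                                  # list[Finding]
    evidence_quality: list = field(default_factory=list)
    conflicts: list = field(default_factory=list)
    gaps: list = field(default_factory=list)

# Example report a sub-agent might return for thread 1.
report = ThreadReport(
    thread=1,
    findings=[Finding("X correlates with Y in 3 of 4 sources", "medium")],
    gaps=["No data after 2023"],
)
```

Keeping the schema this small is deliberate: the coordinator and the critical-review agent only need findings, confidence, conflicts, and gaps, so anything beyond that wastes the main thread's context window.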
Phase 3: Critical Review (Devil's Advocate)
Role Assignment: "You are a systematic skeptic. Your role is to identify risks, edge cases, failure modes, logical fallacies, and vulnerabilities. Focus on disagreement and counterarguments, not confirmation."
Three-Fold Review:
1. Anticipatory Critique
- What could be wrong with these findings?
- What alternative interpretations exist?
- What evidence is missing or weak?
2. Finding-by-Finding Challenge
- For each finding: "What if this is wrong?"
- Identify logical fallacies or reasoning gaps
- Check for confirmation bias
3. Strategic Refinement
- What should agents have done differently?
- Which low-confidence findings should be rejected?
- What additional research is needed?
Output Requirements:
CRITICAL REVIEW REPORT:
STRONG FINDINGS (accept):
- [Findings that withstand scrutiny]
WEAK FINDINGS (reject or flag):
- [Findings with logical flaws, weak evidence]
IDENTIFIED RISKS:
- [Edge cases, failure modes]
CONFLICTS REQUIRING RESOLUTION:
- [Contradictions between threads]
ADDITIONAL RESEARCH NEEDED:
- [Gaps requiring follow-up]
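The strong/weak partition in the review report can be mechanized for a first pass before the devil's advocate argues each case. A rough sketch under assumed conventions (a `flaws` key added by the reviewer, a minimum-confidence cutoff):

```python
def triage(findings, min_confidence="medium"):
    """Partition findings into accept vs. reject-or-flag buckets.

    A finding survives only if it meets the confidence cutoff AND the
    reviewer attached no logical flaws to it.
    """
    rank = {"low": 0, "medium": 1, "high": 2}
    strong = [
        f for f in findings
        if rank[f["confidence"]] >= rank[min_confidence] and not f.get("flaws")
    ]
    weak = [f for f in findings if f not in strong]
    return strong, weak

findings = [
    {"claim": "A", "confidence": "high"},
    {"claim": "B", "confidence": "low"},
    {"claim": "C", "confidence": "high", "flaws": ["circular reasoning"]},
]
strong, weak = triage(findings)
```

Note that high confidence alone is not enough: a confidently-stated finding with an identified reasoning flaw still lands in the weak bucket, which is the whole point of separating generation from evaluation.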
Phase 4: Conflict Resolution & Synthesis
Conflict Resolution Framework:
1. Debate Pattern
- For each conflict, have research agents defend findings
- Evaluate on: evidence quality, logical consistency, edge case coverage
- Judge agent makes final decision
2. Voting with Confidence Weighting
- Weight findings by confidence scores
- Prioritize primary over secondary sources
- Prioritize recent over outdated (when relevant)
- Require minimum confidence threshold
3. Cross-Referencing Validation
- Verify final synthesis against original sources
- Ensure balanced perspective
- Flag remaining uncertainties
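The confidence-weighted voting step can be sketched as a scoring function. The weights and threshold here are illustrative assumptions, not prescribed values; the key behaviors are that primary sources and higher confidence dominate, and that a below-threshold winner triggers escalation rather than a forced decision.

```python
CONFIDENCE_WEIGHT = {"low": 1, "medium": 2, "high": 3}
SOURCE_WEIGHT = {"primary": 2, "secondary": 1}

def resolve_conflict(claims, threshold=3):
    """Pick the highest-weighted claim, or return None to escalate."""
    def score(c):
        return CONFIDENCE_WEIGHT[c["confidence"]] * SOURCE_WEIGHT[c["source_type"]]
    best = max(claims, key=score)
    if score(best) < threshold:
        return None  # below minimum confidence: escalate to human review
    return best

claims = [
    {"text": "Feature shipped in v2.0", "confidence": "high",
     "source_type": "primary"},
    {"text": "Feature shipped in v2.1", "confidence": "medium",
     "source_type": "secondary"},
]
winner = resolve_conflict(claims)
```

Returning `None` instead of a low-scoring winner implements the human-escalation principle below: an irresolvable conflict surfaces to the user rather than being papered over.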
Synthesis Output Format:
RESEARCH SYNTHESIS REPORT
EXECUTIVE SUMMARY:
[2-3 paragraph synthesis answering original question]
VALIDATED FINDINGS:
1. [Finding] (Confidence: X, Sources: Y)
2. [Finding] (Confidence: X, Sources: Y)
CONFLICTS RESOLVED:
- [How contradictions were resolved]
REMAINING UNCERTAINTIES:
- [What remains unclear]
RECOMMENDATIONS:
- [Actionable recommendations]
SOURCES:
- [Complete source list with hyperlinks]
Quality Assurance Principles
- Parallel Validation: Multiple verification paths catch different errors
- End-State Evaluation: Focus on correct final synthesis, not process
- Separation of Generation and Evaluation: Research and critique are separate
- Transparency: Full disclosure of methodology and sources
- Human Escalation: For irresolvable conflicts, escalate to user
Claude Code Specific Optimizations
- Use Built-in Subagents: Leverage Plan Subagent for orchestration, Explore Subagent for codebase research
- Parallel Execution: Always execute independent threads in parallel
- Context Preservation: Main thread maintains context; subagents use isolated windows
- Token Awareness: Budget for heavy token usage; running 4+ agents in parallel multiplies consumption
Common Pitfalls to Avoid
- Sequential Research: Don't run threads one-by-one; parallelize
- Weak Devil's Advocate: Enforce systematic skepticism
- Premature Synthesis: Don't synthesize before critical review
- Context Bleeding: Keep sub-agent contexts isolated
- Unchallenged Conflicts: Require confidence thresholds and debate
- Missing Transparency: Always include source attribution
Success Metrics
A successful research orchestration produces:
- High-confidence findings (backed by multiple sources)
- Acknowledged and resolved conflicts
- Transparent limitations and uncertainties
- Actionable synthesis (not just information dump)
- Complete source attribution
- Evidence of critical review (not confirmation bias)