# parallel-diagnosis
Two independent agents investigate a bug in parallel, then converge on a single unified diagnosis. Produces structured output that downstream skills (two-pass-review, fix-loop) can act on.
## When to use
- YES: Multi-layer bugs (schema + router + model), framework quirks, uncertain library behavior, high-risk deploys, intermittent failures
- NO: Trivial bugs, typos, issues where root cause is already known, zero blast radius
## Instructions

### 1 — Parallel diagnosis
If the problem statement is ambiguous (unclear which subsystem, multiple possible symptoms, or no file paths identified), use the AskUserQuestion tool to narrow scope before spinning up agents. Present what you understand and ask the user to confirm or clarify.
- Spin up 2 independent subagents in parallel. Default: Sonnet. Use Opus for complex async/architectural bugs.
- Give them ONLY the problem statement and relevant file paths.
- Instruct each agent to:
  - Read the code independently — no communication between agents.
  - Trace the root cause.
  - Propose where the fix should go (not the fix itself).
  - Note any caveats or uncertainties.
- Collect both reports.
Failure handling:
- One agent returns unusable output → treat it as a non-vote. Proceed with the single usable report, noting `confidence: "medium"` (single source, reduced but not absent).
- Both agents return unusable output → retry Step 1 once with fresh agents.
- Both retries fail → abort and use the `AskUserQuestion` tool to escalate, presenting what both attempts returned and asking whether to provide more context, try a different approach, or stop.
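The failure-handling rules above can be sketched as a small decision function. This is an illustrative assumption, not part of the skill: it treats each report as a plain root-cause string, whereas real agent reports carry more structure.

```python
from typing import Optional

def is_usable(report: Optional[str]) -> bool:
    # Unusable = missing report or an empty root-cause assessment.
    return bool(report and report.strip())

def handle_reports(report_a: Optional[str], report_b: Optional[str],
                   retries_used: int = 0) -> str:
    """Map the two agent reports to the next orchestrator action."""
    usable = [r for r in (report_a, report_b) if is_usable(r)]
    if len(usable) == 2:
        return "proceed"                # both votes counted
    if len(usable) == 1:
        return "proceed_single_medium"  # non-vote: cap confidence at "medium"
    if retries_used < 1:
        return "retry_fresh_agents"     # retry Step 1 once, never reusing agents
    return "escalate_ask_user"          # abort and escalate via AskUserQuestion
```

The return values are hypothetical labels; in practice the orchestrator acts on them directly rather than returning strings.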
### 2 — Consensus
Compare both diagnosis reports.
- If they agree: High confidence. Set `agreement: "converged"`. Proceed to Step 3.
- If they disagree: Read the contested files yourself and resolve. If still ambiguous, escalate to the human with both reports. Set `agreement: "resolved"` (orchestrator resolved) and proceed to Step 3, or `agreement: "escalated"` (human needed).
- If both conclude it is not a bug: Report findings to the human and stop.
- If one agent found an extra detail: Include it only if it concerns the same root cause and does not contradict the other agent's findings.
- Outcome: Produce exactly 1 unified diagnosis.
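The agree/disagree branches above determine the `agreement` and `confidence` fields together. A minimal sketch, with exact string equality standing in for the semantic comparison the orchestrator actually performs:

```python
def consensus(root_cause_a: str, root_cause_b: str,
              orchestrator_resolved: bool = False) -> tuple:
    """Return (agreement, confidence) for two independent diagnoses."""
    if root_cause_a.strip().lower() == root_cause_b.strip().lower():
        return ("converged", "high")   # independent agreement
    if orchestrator_resolved:
        return ("resolved", "medium")  # orchestrator reconciled the disagreement
    return ("escalated", "low")        # still ambiguous: human input needed
```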
### 3 — Return output
- Produce a `DiagnosisOutput` conforming to the Output Schema below.
- Present it to the human with a recommended next step:
  - If the diagnosis points to a clear fix → recommend fix-loop
  - If the fix needs review before applying → recommend two-pass-review
  - If the diagnosis is uncertain → recommend the human investigate further, citing the specific uncertainty
- If `agreement` is `"escalated"`: Wait for the human to resolve. After the human provides direction, produce the `DiagnosisOutput` incorporating the human's input, set `agreement: "resolved"`, and proceed with the recommended next step.
## Constraints
- Agent cap. Max 2 subagents per run.
- No reuse. Never reuse agents across retries. Spin up fresh agents each time.
- Human in the loop. At disagreements, ambiguity, or escalation — use `AskUserQuestion` with structured options and a recommended choice. Never silently proceed on assumptions.
- Retry limit. Max 1 retry of Step 1 if both agents fail. After that, escalate.
## Handoff
This skill is composable. Its structured output feeds directly into:
- fix-loop — to fix confirmed issues. The `affected_files` and `fix_direction` fields give fix-loop enough context to act.
- two-pass-review — to review a proposed fix. The `root_cause` and `confidence` fields inform what the reviewer should validate.
The `DiagnosisOutput` preserves both raw agent reports for transparency, so downstream skills or the human can trace reasoning back to source.
## Output Schema

### DiagnosisOutput

```
DiagnosisOutput {
  schema_version: string        // always "v1"
  status: "diagnosed" | "not_a_bug" | "inconclusive"
  root_cause: string            // unified root cause (1-3 sentences)
  confidence: "high" | "medium" | "low"
  affected_files: string[]      // file paths involved in the root cause
  fix_direction: string         // what should be fixed and how, in prose — NOT code
  agreement: "converged" | "resolved" | "escalated"
  agent_reports: AgentReport[]
}
```
### AgentReport

```
AgentReport {
  agent_id: number              // 1 | 2
  root_cause: string            // this agent's root cause assessment
  affected_files: string[]      // file paths this agent identified
  fix_direction: string         // this agent's proposed fix direction
  confidence_notes: string      // any caveats or uncertainties this agent noted
}
```
## Field notes
- `status` — `"diagnosed"`: normal case, root cause found and actionable. `"not_a_bug"`: both agents concluded the reported behavior is not a bug — terminal, no downstream skills needed. `"inconclusive"`: couldn't determine root cause, escalated to human for further investigation.
- `confidence` — `"high"`: both agents agreed on root cause. `"medium"`: orchestrator resolved a disagreement. `"low"`: significant uncertainty remains (e.g., one agent failed, or orchestrator resolution was shaky).
- `agreement` — `"converged"`: agents independently reached the same conclusion. `"resolved"`: agents disagreed but the orchestrator reconciled. `"escalated"`: human input was needed to resolve.
- `root_cause` — the unified assessment, not a copy of either agent's report. When agents disagreed and the orchestrator resolved, this reflects the orchestrator's judgment.
- `fix_direction` — prose description of what to change and where. Enough detail for fix-loop or a human to act on, but no code.
- `affected_files` — union of files from both agents, filtered to those relevant to the unified root cause.
- `agent_reports` — both raw reports preserved for transparency. Downstream consumers can trace reasoning back to source.
- `confidence_notes` — per-agent field. Agents should be honest about what they're unsure of. Empty string if no caveats.
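Downstream consumers (fix-loop, two-pass-review, or a script) can model and sanity-check the structure in a few lines. The field names below follow the schema above; the `validate` helper itself is an illustrative sketch, not part of the skill:

```python
from typing import List, TypedDict

class AgentReport(TypedDict):
    agent_id: int               # 1 | 2
    root_cause: str             # this agent's root cause assessment
    affected_files: List[str]   # file paths this agent identified
    fix_direction: str          # this agent's proposed fix direction
    confidence_notes: str       # empty string if no caveats

class DiagnosisOutput(TypedDict):
    schema_version: str         # always "v1"
    status: str                 # "diagnosed" | "not_a_bug" | "inconclusive"
    root_cause: str             # unified root cause (1-3 sentences)
    confidence: str             # "high" | "medium" | "low"
    affected_files: List[str]   # files involved in the unified root cause
    fix_direction: str          # prose, not code
    agreement: str              # "converged" | "resolved" | "escalated"
    agent_reports: List[AgentReport]

def validate(out: DiagnosisOutput) -> bool:
    # Minimal structural check before acting on a diagnosis.
    return (
        out["schema_version"] == "v1"
        and out["status"] in {"diagnosed", "not_a_bug", "inconclusive"}
        and out["confidence"] in {"high", "medium", "low"}
        and out["agreement"] in {"converged", "resolved", "escalated"}
        and len(out["agent_reports"]) <= 2  # agent cap: max 2 subagents per run
    )
```

Note that `TypedDict` gives static-analysis hints only; the runtime check is what `validate` does.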