# Reasoning Orchestrator
CRITICAL: This skill does NOT perform any reasoning itself. It plans and delegates. All reasoning is done by reading and applying individual skill files, or by spawning subagents. Do not shortcut by reasoning inline.
## How to Execute This Skill
Follow these steps exactly, in order. Do not skip steps. Do not collapse steps.
### STEP 1 — Triage (do this yourself, no skill file needed)
Classify the situation across these dimensions and write the classification out loud before proceeding:
Temporal direction: Forward-looking / Present-state / Backward-looking
Problem maturity:
- Unframed — the problem itself may not be correctly defined
- Framed, unsolved — problem understood, solution not found
- Solved, needs validation — solution exists, needs stress-testing
- Decided, needs learning — something happened, extract signal
Primary obstacle:
- Don't understand the system → diagnostic track
- Understand it, stuck on solutions → generative track
- Have solutions, need to choose → convergence track
- Something went wrong → retrospective track
- About to commit to something → adversarial track
- Don't know what we don't know → epistemic track
- Incentives are misaligned / need to design rules → strategic track
Domain complexity (if unclear, assign unknown and include cynefin-framework in Step 2):
- Clear / Complicated / Complex / Chaotic / Unknown
Write the triage output in this format before moving to Step 2:
TRIAGE
Temporal: [forward / present / backward]
Maturity: [label]
Obstacle: [label] → [track]
Domain: [label]
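For illustration, the triage output can be held as a small data structure. This sketch is not part of the skill itself; the `OBSTACLE_TRACKS` keys are shortened paraphrases of the obstacle list above, and the class and field names are assumptions.

```python
from dataclasses import dataclass

# Hypothetical obstacle label -> track mapping, paraphrasing the
# "Primary obstacle" list in Step 1.
OBSTACLE_TRACKS = {
    "don't understand the system": "diagnostic",
    "stuck on solutions": "generative",
    "need to choose": "convergence",
    "something went wrong": "retrospective",
    "about to commit": "adversarial",
    "don't know what we don't know": "epistemic",
    "incentives misaligned": "strategic",
}

@dataclass
class Triage:
    temporal: str  # forward / present / backward
    maturity: str  # e.g. "framed, unsolved"
    obstacle: str  # a key of OBSTACLE_TRACKS
    domain: str    # clear / complicated / complex / chaotic / unknown

    def render(self) -> str:
        # Produce the TRIAGE block in the Step 1 output format.
        return "\n".join([
            "TRIAGE",
            f"Temporal: {self.temporal}",
            f"Maturity: {self.maturity}",
            f"Obstacle: {self.obstacle} → {OBSTACLE_TRACKS[self.obstacle]}",
            f"Domain: {self.domain}",
        ])
```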
### STEP 2 — Build the Execution Plan (do this yourself, no skill file needed)
Using the triage output and the routing table below, produce a numbered execution plan. Label each step as SEQUENTIAL or PARALLEL. Do not execute anything yet.
Routing table — entry points by obstacle:
| Obstacle | Execution plan |
|---|---|
| Don't know what we're dealing with | PARALLEL: epistemic-mapping ∥ cynefin-framework ∥ cognitive-bias-detection → re-triage |
| Don't understand the system / why it's failing | PARALLEL: systems-thinking ∥ theory-of-constraints ∥ causal-inference → SEQUENTIAL: five-whys-root-cause → decision-synthesis |
| Don't understand the system / strategic agents involved | PARALLEL: systems-thinking ∥ game-theoretic-analysis ∥ causal-inference → SEQUENTIAL: five-whys-root-cause → decision-synthesis |
| Plan / design needs validation | PARALLEL: inversion-premortem ∥ red-teaming ∥ second-order-thinking → SEQUENTIAL: cognitive-bias-detection → decision-synthesis |
| Stuck, all solutions feel the same | SEQUENTIAL: epistemic-mapping → PARALLEL: lateral-thinking ∥ analogical-thinking ∥ first-principles-thinking → SEQUENTIAL: inversion-premortem → decision-synthesis |
| Need to decide between options | PARALLEL: scenario-planning ∥ probabilistic-thinking ∥ fermi-estimation → decision-synthesis |
| Something went wrong / post-mortem | SEQUENTIAL: retrospective-counterfactual → PARALLEL: five-whys-root-cause ∥ causal-inference → SEQUENTIAL: cognitive-bias-detection |
| Long-term strategic commitment | PARALLEL: epistemic-mapping ∥ cognitive-bias-detection ∥ cynefin-framework → PARALLEL: scenario-planning ∥ probabilistic-thinking ∥ fermi-estimation → PARALLEL: inversion-premortem ∥ red-teaming ∥ second-order-thinking → PARALLEL: stakeholder-power-mapping ∥ game-theoretic-analysis → decision-synthesis |
| Blocked / people won't adopt | PARALLEL: stakeholder-power-mapping ∥ game-theoretic-analysis ∥ second-order-thinking ∥ causal-inference → decision-synthesis |
| Incentives misaligned / need to design rules | SEQUENTIAL: game-theoretic-analysis → PARALLEL: second-order-thinking ∥ red-teaming → SEQUENTIAL: stakeholder-power-mapping → decision-synthesis |
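The routing table lends itself to a data-driven sketch. Two rows are transcribed below; the shortened obstacle keys and the `render_plan` helper are illustrative assumptions, not part of the skill.

```python
# A few routing-table rows as data: obstacle key -> ordered steps,
# each step a (mode, skills) pair. Keys are shortened for illustration.
ROUTING = {
    "validate-plan": [
        ("PARALLEL", ["inversion-premortem", "red-teaming", "second-order-thinking"]),
        ("SEQUENTIAL", ["cognitive-bias-detection"]),
        ("SEQUENTIAL", ["decision-synthesis"]),
    ],
    "system-failing": [
        ("PARALLEL", ["systems-thinking", "theory-of-constraints", "causal-inference"]),
        ("SEQUENTIAL", ["five-whys-root-cause"]),
        ("SEQUENTIAL", ["decision-synthesis"]),
    ],
}

def render_plan(obstacle: str) -> str:
    # Emit the EXECUTION PLAN block in the Step 2 output format.
    lines = ["EXECUTION PLAN"]
    for n, (mode, skills) in enumerate(ROUTING[obstacle], start=1):
        lines.append(f"Step {n} — {mode}: " + " ∥ ".join(skills))
    return "\n".join(lines)
```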
Write the plan in this format:
EXECUTION PLAN
Step 1 — PARALLEL: [skill-a] ∥ [skill-b] ∥ [skill-c]
Step 2 — SEQUENTIAL: [skill-d]
Step 3 — PARALLEL: [skill-e] ∥ [skill-f]
Step 4 — SEQUENTIAL: [skill-g]
### STEP 3 — Execute the Plan (step by step, never all at once)
Work through the execution plan one step at a time. For each step:
For a SEQUENTIAL step:
- Read the skill file: view: skills/[skill-name]/SKILL.md
- Apply the skill to the current problem context, following the methodology in that file exactly
- Write the output in the skill's output format
- Write the routing decision:

STEP [N] COMPLETE — SEQUENTIAL: [skill-name]
Key finding: [1–2 sentences]
Routing to Step [N+1]: [reason based on finding]

- Proceed to the next step
For a PARALLEL step:
- Spawn one subagent per skill using the Task tool. Each subagent receives:
  - The full problem description
  - Any findings from prior sequential steps
  - This instruction: "Read skills/[skill-name]/SKILL.md then apply that skill's full methodology to the problem. Output your findings in that skill's output format. Do not perform other reasoning."
- Wait for all subagents to complete
- Synthesize their outputs:

SYNTHESIS — Step [N] PARALLEL: [skill-a] ∥ [skill-b] ∥ [skill-c]
Convergent findings (2+ skills agree — higher confidence):
- [finding]
Divergent findings (1 skill only — worth noting):
- [finding] (from [skill])
Contradictions (skills disagree — resolve before proceeding):
- [skill-a] says [X], [skill-b] says [Y] → resolution: [...]
Key inputs for Step [N+1]: [what the next step needs]

- Proceed to the next step
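The sequential/parallel execution loop can be sketched in Python, with a thread pool standing in for Task-tool subagents. `apply_skill` is a hypothetical placeholder for reading a SKILL.md file and applying its methodology.

```python
from concurrent.futures import ThreadPoolExecutor

def apply_skill(skill: str, problem: str, prior: list) -> str:
    # Hypothetical stand-in for reading skills/<skill>/SKILL.md and
    # applying its methodology; a real orchestrator would spawn a subagent.
    return f"{skill}: finding for {problem!r}"

def execute_plan(plan, problem: str) -> list:
    findings = []
    for mode, skills in plan:
        if mode == "SEQUENTIAL":
            # One skill at a time; each finding feeds later steps.
            for skill in skills:
                findings.append(apply_skill(skill, problem, findings))
        else:  # PARALLEL: fan out one worker per skill, wait, then merge.
            with ThreadPoolExecutor(max_workers=len(skills)) as pool:
                findings.extend(pool.map(
                    lambda s: apply_skill(s, problem, findings), skills))
    return findings
```

Each PARALLEL step blocks until every worker returns, matching the "wait for all subagents, then synthesize" rule above.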
### STEP 4 — Terminate
Stop when any of these conditions are met:
- decision-synthesis has completed and produced a decision with acceptable confidence
- The problem is understood well enough to act without further analysis
- 4+ steps have run without converging: stop, flag the problem as likely unframed, and restart from Step 1 with epistemic-mapping as the only entry point
Write the termination output:
CHAIN COMPLETE
Steps executed: [list with SEQUENTIAL/PARALLEL labels]
Key finding: [1–2 sentences]
Recommended action: [what to do now]
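The stop conditions reduce to a small predicate. This is a hedged sketch; the argument names and reason strings are illustrative, not part of the skill.

```python
from typing import Optional

def should_terminate(steps_run: int, synthesis_done: bool,
                     confident: bool, understood: bool) -> Optional[str]:
    # Mirrors the Step 4 stop conditions; returns a reason, or None to continue.
    if synthesis_done and confident:
        return "decision-synthesis produced a decision with acceptable confidence"
    if understood:
        return "problem understood well enough to act"
    if steps_run >= 4:
        return "not converging: likely unframed, restart with epistemic-mapping"
    return None
```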
## Parallelization Reference
These six clusters are the canonical parallel groups. Skills within a cluster apply independent lenses and never depend on each other's output.
| Pattern | Skills | Use when |
|---|---|---|
| P1 — Adversarial | inversion-premortem ∥ red-teaming ∥ second-order-thinking | Validating a plan before commitment |
| P2 — Generative | lateral-thinking ∥ analogical-thinking ∥ first-principles-thinking | Stuck, need new options |
| P3 — Diagnostic | systems-thinking ∥ theory-of-constraints ∥ causal-inference | System is failing, need to understand why |
| P4 — Uncertainty | scenario-planning ∥ probabilistic-thinking ∥ fermi-estimation | Decision needs quantification |
| P5 — Meta-cognitive | epistemic-mapping ∥ cognitive-bias-detection ∥ cynefin-framework | Clean the reasoning environment first |
| P6 — Strategic | game-theoretic-analysis ∥ stakeholder-power-mapping ∥ second-order-thinking | Multiple agents with competing incentives |
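The clusters can be held as plain data, which also makes it easy to look up which panel(s) a given skill belongs to. The cluster keys below are shortened assumptions.

```python
# The six canonical clusters as data. Cluster keys are shortened assumptions.
PARALLEL_CLUSTERS = {
    "P1-adversarial": ["inversion-premortem", "red-teaming", "second-order-thinking"],
    "P2-generative": ["lateral-thinking", "analogical-thinking", "first-principles-thinking"],
    "P3-diagnostic": ["systems-thinking", "theory-of-constraints", "causal-inference"],
    "P4-uncertainty": ["scenario-planning", "probabilistic-thinking", "fermi-estimation"],
    "P5-metacognitive": ["epistemic-mapping", "cognitive-bias-detection", "cynefin-framework"],
    "P6-strategic": ["game-theoretic-analysis", "stakeholder-power-mapping", "second-order-thinking"],
}

def clusters_for(skill: str) -> list:
    # A skill may sit in more than one cluster (second-order-thinking is in two).
    return [name for name, skills in PARALLEL_CLUSTERS.items() if skill in skills]
```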
## Post-Step Routing Reference
After completing any step, use this table to adjust the plan if findings warrant it:
| Completed skill | Finding | Adjust plan to include |
|---|---|---|
| epistemic-mapping | Dangerous assumptions found | first-principles-thinking (sequential, next) |
| cynefin-framework | Domain = Complex | lateral-thinking ∥ scenario-planning (parallel) instead of structured analysis |
| cynefin-framework | Domain = Chaotic | Act immediately; retrospective-counterfactual after stabilization |
| stakeholder-power-mapping | Strategic agents with conflicting incentives | game-theoretic-analysis (sequential) |
| game-theoretic-analysis | Nash Equilibrium ≠ Pareto Optimum | Mechanism design needed — decision-synthesis with design constraints |
| game-theoretic-analysis | Information asymmetry identified | red-teaming on exploitability (sequential) |
| game-theoretic-analysis | Repeated game dynamics | second-order-thinking on reputation effects (sequential) |
| systems-thinking | Bottleneck identified | theory-of-constraints (sequential) |
| five-whys-root-cause | Root cause is causal claim | causal-inference (sequential) |
| five-whys-root-cause | Multiple root causes | decision-synthesis (sequential, to prioritize) |
| adversarial panel (P1) | High-severity risks | cognitive-bias-detection on the risk analysis (sequential) |
| generative panel (P2) | All options weak | epistemic-mapping — frame may be wrong (sequential, restart) |
| uncertainty panel (P4) | High uncertainty persists | inversion-premortem on worst-case scenario (sequential) |
| decision-synthesis | Key assumption too uncertain | epistemic-mapping → validate before committing |
| retrospective-counterfactual | Systemic cause found | systems-thinking ∥ five-whys-root-cause (parallel) |
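A few rows of the adjustment table, transcribed as a lookup keyed by (completed skill, finding). The finding keys are shortened assumptions; a real orchestrator would judge findings from prose, not exact strings.

```python
# A few adjustment rules as a lookup: (completed skill, finding key) ->
# (mode, skills to splice into the plan). Finding keys are shortened assumptions.
POST_STEP_ROUTING = {
    ("cynefin-framework", "complex"): ("PARALLEL", ["lateral-thinking", "scenario-planning"]),
    ("systems-thinking", "bottleneck"): ("SEQUENTIAL", ["theory-of-constraints"]),
    ("five-whys-root-cause", "multiple-root-causes"): ("SEQUENTIAL", ["decision-synthesis"]),
}

def adjust_plan(plan: list, completed: str, finding: str) -> list:
    # Append the adjustment step if a rule matches; otherwise leave the plan alone.
    extra = POST_STEP_ROUTING.get((completed, finding))
    return plan + [extra] if extra else plan
```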
## Skill Registry
| Skill | File path | Parallelizes with |
|---|---|---|
| epistemic-mapping | skills/epistemic-mapping/SKILL.md | cynefin-framework, cognitive-bias-detection |
| cynefin-framework | skills/cynefin-framework/SKILL.md | epistemic-mapping, cognitive-bias-detection |
| systems-thinking | skills/systems-thinking/SKILL.md | theory-of-constraints, causal-inference |
| theory-of-constraints | skills/theory-of-constraints/SKILL.md | systems-thinking, causal-inference |
| five-whys-root-cause | skills/five-whys-root-cause/SKILL.md | causal-inference |
| causal-inference | skills/causal-inference/SKILL.md | systems-thinking, five-whys-root-cause |
| cognitive-bias-detection | skills/cognitive-bias-detection/SKILL.md | epistemic-mapping, cynefin-framework |
| inversion-premortem | skills/inversion-premortem/SKILL.md | red-teaming, second-order-thinking |
| red-teaming | skills/red-teaming/SKILL.md | inversion-premortem, second-order-thinking |
| second-order-thinking | skills/second-order-thinking/SKILL.md | inversion-premortem, red-teaming |
| probabilistic-thinking | skills/probabilistic-thinking/SKILL.md | scenario-planning, fermi-estimation |
| fermi-estimation | skills/fermi-estimation/SKILL.md | probabilistic-thinking, scenario-planning |
| scenario-planning | skills/scenario-planning/SKILL.md | probabilistic-thinking, fermi-estimation |
| stakeholder-power-mapping | skills/stakeholder-power-mapping/SKILL.md | second-order-thinking, causal-inference, game-theoretic-analysis |
| game-theoretic-analysis | skills/game-theoretic-analysis/SKILL.md | stakeholder-power-mapping, second-order-thinking, causal-inference |
| lateral-thinking | skills/lateral-thinking/SKILL.md | analogical-thinking, first-principles-thinking |
| analogical-thinking | skills/analogical-thinking/SKILL.md | lateral-thinking, first-principles-thinking |
| first-principles-thinking | skills/first-principles-thinking/SKILL.md | lateral-thinking, analogical-thinking |
| decision-synthesis | skills/decision-synthesis/SKILL.md | runs after all others |
| retrospective-counterfactual | skills/retrospective-counterfactual/SKILL.md | five-whys-root-cause, causal-inference |
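The "Parallelizes with" column should be symmetric: if skill A lists B as a peer, B should list A back. A sketch of that consistency check, over a transcribed subset of the registry:

```python
# "Parallelizes with" transcribed for a subset of the registry above.
# decision-synthesis runs after all others, so it has no parallel peers.
PARALLELIZES_WITH = {
    "epistemic-mapping": {"cynefin-framework", "cognitive-bias-detection"},
    "cynefin-framework": {"epistemic-mapping", "cognitive-bias-detection"},
    "cognitive-bias-detection": {"epistemic-mapping", "cynefin-framework"},
    "inversion-premortem": {"red-teaming", "second-order-thinking"},
    "red-teaming": {"inversion-premortem", "second-order-thinking"},
    "second-order-thinking": {"inversion-premortem", "red-teaming"},
    "decision-synthesis": set(),
}

def symmetry_violations(table: dict) -> list:
    # If skill a lists b as a parallel peer, b must list a back.
    return [(a, b) for a, peers in table.items() for b in peers
            if a not in table.get(b, set())]
```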