decision-synthesis
Decision Synthesis
Core principle: Analysis produces options and criteria. Synthesis produces a decision. Most frameworks are good at divergence — generating possibilities, mapping complexity. This skill is about convergence: taking what you know and making a defensible, traceable choice.
A good decision process doesn't guarantee the right outcome. It maximizes the quality of reasoning given available information, and it's transparent enough to learn from when outcomes arrive.
When to Use This Skill
Use after other reasoning frameworks have done their work:
- Systems Thinking mapped the structure
- 5 Whys found the root causes
- Scenario Planning produced multiple futures
- Red Teaming attacked the options
- Stakeholder Mapping identified who needs to align
Now you have a rich picture and multiple options. Decision Synthesis is how you land.
The Core Process
Step 1: Clarify What's Actually Being Decided
Before evaluating options, make the decision crisp:
- What is the exact choice being made?
- What's the decision horizon? (reversible in 3 months? irreversible?)
- Who has final authority?
- What's the cost of delaying the decision?
Many decision processes fail because people are evaluating different questions without realizing it.
Step 2: Surface All Options
List every viable option explicitly — including:
- The status quo (doing nothing is always an option)
- Hybrid approaches
- Sequenced approaches (do A now, revisit B in 6 months)
- Options that were dismissed early but deserve a formal look
Step 3: Define Criteria
What matters in this decision? List the criteria explicitly:
- Must-haves (binary — options failing these are eliminated)
- Want-to-haves (graded — options are compared on these)
Good criteria are:
- Specific enough to score (not "good quality" but "error rate < 1%")
- Independent of each other (avoid double-counting)
- Tied to the actual goal, not proxies
Step 4: Weight the Criteria
Not all criteria are equally important. Assign weights explicitly — this forces clarity about what actually matters most and exposes hidden disagreements between stakeholders.
Simple approach: distribute 100 points across criteria. The allocation is the conversation.
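The 100-point allocation can be made mechanical so disagreements surface early. A minimal sketch in Python — the criterion names and point values here are hypothetical, not part of this skill:

```python
# Hypothetical weight allocation: 100 points distributed across criteria.
weights = {
    "time_to_market": 35,
    "operating_cost": 25,
    "team_familiarity": 20,
    "vendor_lock_in_risk": 20,
}

total = sum(weights.values())
assert total == 100, f"weights must sum to 100, got {total}"

# Print the ranking explicitly — the allocation is the conversation.
for criterion, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{criterion:22s} {w:3d}")
```

Forcing the sum to exactly 100 means every point given to one criterion is visibly taken from another, which is what exposes hidden disagreements.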
Step 5: Score the Options
For each option against each criterion, score 1–5 (or 1–10). Be explicit about the reasoning for each score — scores without reasoning can't be challenged or improved.
Step 6: Compute and Challenge
Weighted scores give a quantitative signal — not a verdict. Use the result to:
- Check: does the top scorer match intuition? If not, why not?
- Interrogate: which criteria are driving the result? Are they the right ones?
- Stress-test: if the top two criteria swap weights, does the answer change?
- Sanity-check: would you be comfortable explaining this choice to a critic?
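Steps 2–6 can be sketched as a small script: a must-have gate, weighted totals, and the weight-swap stress test from above. All option names, criteria, and scores below are hypothetical placeholders:

```python
# Hypothetical weighted decision matrix with a must-have gate
# and a stress test that swaps the two heaviest weights.
WEIGHTS = {"scalability": 40, "time_to_market": 35, "operating_cost": 25}

options = {
    # option -> (passes all must-haves?, {criterion: 1-5 score})
    "Build in-house": (True,  {"scalability": 5, "time_to_market": 1, "operating_cost": 3}),
    "Buy vendor X":   (True,  {"scalability": 1, "time_to_market": 5, "operating_cost": 3}),
    "Status quo":     (False, {"scalability": 1, "time_to_market": 5, "operating_cost": 5}),
}

def weighted_total(scores, weights):
    return sum(scores[c] * w for c, w in weights.items())

def rank(options, weights):
    # Must-have gate: options failing any must-have are eliminated outright.
    viable = {name: scores for name, (ok, scores) in options.items() if ok}
    return sorted(viable.items(), key=lambda kv: weighted_total(kv[1], weights), reverse=True)

winner = rank(options, WEIGHTS)[0][0]

# Stress test: swap the two heaviest weights and see if the winner changes.
(c1, w1), (c2, w2) = sorted(WEIGHTS.items(), key=lambda kv: -kv[1])[:2]
swapped = {**WEIGHTS, c1: w2, c2: w1}
swapped_winner = rank(options, swapped)[0][0]

print(f"winner: {winner}")
if swapped_winner != winner:
    print(f"sensitive: flips to {swapped_winner} when {c1} and {c2} swap weights")
```

In this toy data the swap flips the answer, which is exactly the signal that the weighting between the top two criteria deserves a real conversation before committing.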
Output Format
🎯 Decision Statement
- Decision: [The exact choice being made]
- Horizon: [Reversible / Partially reversible / Irreversible]
- Decider: [Who has authority]
- Deadline: [When this must be resolved]
📋 Options
| # | Option | Brief description |
|---|---|---|
| 1 | [Name] | [One line] |
| 2 | ... | ... |
⚖️ Criteria & Weights
| Criterion | Type | Weight | Rationale |
|---|---|---|---|
| [Criterion 1] | Must-have | — | [Why it's binary] |
| [Criterion 2] | Want-to-have | 35 | [Why this weight] |
| [Criterion 3] | Want-to-have | 25 | |
| ... | ... | ... | |
| Total | — | 100 | |
📊 Scoring Matrix
| Option | Criterion 1 | Criterion 2 (×35) | Criterion 3 (×25) | ... | Weighted Total |
|---|---|---|---|---|---|
| Option A | ✅ Pass | 4 → 140 | 3 → 75 | X | |
| Option B | ✅ Pass | 2 → 70 | 5 → 125 | Y | |
| Option C | ❌ Fail | — | — | — | Eliminated |
🏆 Recommendation
- Recommended option: [Name]
- Primary reason: [The 1–2 criteria that drove the result]
- Main trade-off: [What this option sacrifices]
- Confidence: [High / Medium / Low — based on quality of information, not strength of preference]
⚠️ Sensitivity Check
- If [top-weighted criterion] changes in importance, does the answer change?
- What assumption, if wrong, most undermines this recommendation?
- What new information would cause a re-evaluation?
🔁 Reversibility & Regret
- Can this be undone? At what cost?
- Regret minimization: Which choice produces the least regret if the situation changes significantly?
- If confidence is low and the decision is irreversible — flag this explicitly before committing.
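Regret minimization can also be computed, not just intuited. A minimal minimax-regret sketch — the scenarios and payoffs are invented for illustration:

```python
# Hypothetical minimax regret: payoffs[option][scenario], higher is better.
payoffs = {
    "Build in-house": {"demand_grows": 9, "demand_flat": 3},
    "Buy vendor X":   {"demand_grows": 6, "demand_flat": 6},
    "Do nothing":     {"demand_grows": 1, "demand_flat": 7},
}

scenarios = ["demand_grows", "demand_flat"]
best = {s: max(row[s] for row in payoffs.values()) for s in scenarios}

# Regret = how far each option falls short of the best choice in each scenario.
worst_regret = {
    opt: max(best[s] - row[s] for s in scenarios)
    for opt, row in payoffs.items()
}
pick = min(worst_regret, key=worst_regret.get)
print(f"minimax-regret pick: {pick} (worst regret {worst_regret[pick]})")
```

Note that the regret minimizer often lands on the hedge, not on the option with the best expected payoff — a useful contrast to hold against the weighted-matrix recommendation.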
Decision Traps to Avoid
False consensus: Everyone nods but the criteria weights were never made explicit — different people were solving for different things.
Analysis paralysis: More analysis rarely resolves genuine value disagreements. Name the disagreement and make the call.
Criteria inflation: Adding more criteria to feel thorough — but irrelevant criteria add noise, not signal. Keep the list short and honest.
Anchoring on the first option: The option framed first gets disproportionate attention. Evaluate all options in parallel, not sequentially.
Score laundering: Working backwards from a preferred conclusion to assign scores that justify it. The matrix is a thinking tool, not a legitimacy machine.
Thinking Triggers
- "If I had to make this decision alone, with no politics involved, what would I choose?"
- "Which criteria are we weighting based on what actually matters vs. what's easy to measure?"
- "Is there a hybrid option we haven't named?"
- "What would a regret minimizer choose? A risk minimizer? A maximizer?"
- "Are we delaying because we need more information, or because we don't want to own the decision?"