inversion-premortem
Inversion & Pre-mortem
Core principle: instead of asking "how do we make this succeed?", ask "what would guarantee this fails?" — then work backwards. This surfaces hidden assumptions, fragile dependencies, and blind spots that forward analysis misses.
Two Complementary Techniques
Technique 1: Inversion
Flip the problem. If you want to understand how a system succeeds, first rigorously define how it fails.
Process:
- State the goal clearly: "We want X to succeed."
- Invert it: "What would guarantee X fails completely?"
- List all failure conditions exhaustively — be adversarial, not optimistic
- For each failure condition: is it currently present, partially present, or guarded against?
- The unguarded ones become your risk register
Key question to ask: "What assumptions must be true for this to work — and what happens if they're false?"
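The inversion steps above can be sketched as a small data structure. This is an illustrative sketch, not part of the skill itself — the class name, field names, and example conditions are all assumptions chosen for the demo:

```python
from dataclasses import dataclass

# Guard status for a failure condition: how well it is defended today.
GUARDED, PARTIAL, UNGUARDED = "yes", "partially", "no"

@dataclass
class FailureCondition:
    description: str        # what must go wrong for this to fail
    likelihood: str         # "low" / "medium" / "high"
    guard_status: str       # GUARDED, PARTIAL, or UNGUARDED
    guard_notes: str = ""   # what guards it, or what's missing

def risk_register(conditions):
    """The conditions that are not fully guarded become the risk register."""
    return [c for c in conditions if c.guard_status != GUARDED]

# Example: inverting "we want the migration to succeed".
conditions = [
    FailureCondition("Data loss during cutover", "medium", GUARDED,
                     "verified, restorable backups"),
    FailureCondition("Rollback path untested", "high", UNGUARDED,
                     "no rollback rehearsal has been done"),
]
for risk in risk_register(conditions):
    print(f"RISK: {risk.description} (likelihood: {risk.likelihood})")
```

The point of the sketch is the filter at the end: only unguarded or partially guarded conditions carry forward; fully guarded ones drop out of the register.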
Technique 2: Pre-mortem
Imagine you're 12 months in the future. The project/system/decision has failed badly. Now explain why.
Process:
- Vividly imagine the failure: "It's [date]. This has completely fallen apart."
- Write the failure story in past tense — what happened?
- Identify the 3–5 root causes that led to failure
- For each cause: what early warning signal would have been visible?
- Map signals back to today: which of those signals exist right now?
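The last two pre-mortem steps — attach an early warning signal to each root cause, then check which signals are visible today — can be sketched as a simple mapping. The causes and signals below are invented examples, not outputs of the skill:

```python
# Map each pre-mortem root cause to the early warning signal that would
# have preceded it, then check which signals are already visible today.
root_causes = {
    "Key dependency abandoned upstream": "no upstream release in 6 months",
    "Team burned out on maintenance": "on-call pages rising month over month",
    "Adoption never materialized": "flat weekly active usage after launch",
}

# Signals we can actually observe right now (illustrative).
signals_visible_today = {"no upstream release in 6 months"}

for cause, signal in root_causes.items():
    status = "PRESENT NOW" if signal in signals_visible_today else "watch"
    print(f"{cause}: signal = '{signal}' [{status}]")
```

Anything flagged `PRESENT NOW` is a pre-mortem root cause whose leading indicator already exists — those move straight to the mitigation list.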
Output Format
💀 Failure Modes (Inversion)
For each identified failure mode:
- Condition: What must go wrong for this to fail?
- Likelihood: Low / Medium / High
- Currently guarded?: Yes / Partially / No
- What guards it (or what's missing)
🪦 The Failure Story (Pre-mortem)
A short narrative: "It's [future date]. Here's what happened..."
- Name the specific sequence of events
- Call out the moment where it became unrecoverable
- Identify what looked fine at the start but was actually fragile
⚠️ Hidden Assumptions
List the beliefs the plan depends on that haven't been validated:
- Technical assumptions
- Human/team behavior assumptions
- Market or user assumptions
- Dependencies on external systems or actors
🛡️ Mitigations
For each high-likelihood, unguarded failure mode:
- Concrete action to reduce risk
- Early warning metric to monitor
- Reversibility assessment: Can we undo this if it fails?
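Continuing the sketch above, a mitigation record can capture the three fields this section asks for. Again, the record layout and example values are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Mitigation:
    failure_mode: str     # the high-likelihood, unguarded failure mode
    action: str           # concrete action to reduce risk
    warning_metric: str   # early warning metric to monitor
    reversible: bool      # can we undo this if it fails?

plan = Mitigation(
    failure_mode="Rollback path untested",
    action="Run a full rollback rehearsal in staging this sprint",
    warning_metric="time-to-restore measured in each rehearsal",
    reversible=True,
)
print(f"{plan.failure_mode} -> {plan.action} (reversible: {plan.reversible})")
```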
Thinking Triggers
Use these prompts to deepen the analysis:
- "What is the single most likely way this fails?"
- "Who is most likely to be frustrated by this in 6 months, and why?"
- "What do we believe that might be wrong?"
- "If we had to bet against this succeeding, where would we put our money?"
- "What's the optimistic assumption hiding in plain sight?"
Example Applications
- Evaluating a new feature: What user behaviors are we assuming? What if adoption is 10x lower than expected?
- Architecture decision: What if the third-party API we depend on changes its contract? What if latency doubles?
- Hiring or team change: What does this look like if the new hire isn't a fit after 3 months?
- Growth strategy: Imagine we executed perfectly and it still failed — why?