inversion-premortem
Inversion & Pre-mortem Skill
Core principle: instead of asking "how do we make this succeed?", ask "how would this definitely fail?" and work backwards. This surfaces hidden assumptions, fragile dependencies, and blind spots that forward-looking analysis misses.
Two Complementary Techniques
Technique 1: Inversion
Flip the problem. If you want to understand how a system succeeds, first rigorously define how it fails.
Process:
- State the goal clearly: "We want X to succeed."
- Invert it: "What would guarantee X fails completely?"
- List all failure conditions exhaustively — be adversarial, not optimistic
- For each failure condition: is it currently present, partially present, or guarded against?
- The unguarded ones become your risk register
Key question to ask: "What assumptions must be true for this to work — and what happens if they're false?"
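The inversion steps above can be sketched as a tiny risk-register structure. This is a minimal illustration, not part of the skill itself; the class names, guard labels, and example failure conditions are all invented for the sketch:

```python
from dataclasses import dataclass
from enum import Enum

class Guard(Enum):
    YES = "yes"          # failure condition is guarded against
    PARTIAL = "partial"  # some protection, not complete
    NO = "no"            # nothing currently guards it

@dataclass
class FailureCondition:
    description: str
    guard: Guard

def risk_register(conditions):
    """Anything not fully guarded becomes the risk register."""
    return [c for c in conditions if c.guard is not Guard.YES]

# Illustrative failure conditions for "we want the launch to succeed":
conditions = [
    FailureCondition("Third-party API changes its contract", Guard.NO),
    FailureCondition("Key engineer leaves mid-project", Guard.PARTIAL),
    FailureCondition("Data loss on deploy", Guard.YES),
]
register = risk_register(conditions)
```

Here `register` keeps the first two conditions: the fully guarded one drops out, and the unguarded remainder is what the skill calls your risk register.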
Technique 2: Pre-mortem
Imagine you're 12 months in the future. The project/system/decision has failed badly. Now explain why.
Process:
- Vividly imagine the failure: "It's [date]. This has completely fallen apart."
- Write the failure story in past tense — what happened?
- Identify the 3–5 root causes that led to failure
- For each cause: what early warning signal would have been visible?
- Map signals back to today: which of those signals exist right now?
Output Format
💀 Failure Modes (Inversion)
For each identified failure mode:
- Condition: What must go wrong for this to fail?
- Likelihood: Low / Medium / High
- Currently guarded?: Yes / Partially / No
- Guard: what protects against it (or what's missing)
🪦 The Failure Story (Pre-mortem)
A short narrative: "It's [future date]. Here's what happened..."
- Name the specific sequence of events
- Call out the moment where it became unrecoverable
- Identify what looked fine at the start but was actually fragile
⚠️ Hidden Assumptions
List the beliefs the plan depends on that haven't been validated:
- Technical assumptions
- Human/team behavior assumptions
- Market or user assumptions
- Dependencies on external systems or actors
🛡️ Mitigations
For each high-likelihood, unguarded failure mode:
- Concrete action to reduce risk
- Early warning metric to monitor
- Reversibility assessment: Can we undo this if it fails?
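The triage rule above ("for each high-likelihood, unguarded failure mode") can be sketched as a simple filter. A minimal sketch with invented field names and example data, assuming failure modes are recorded with the Likelihood and Currently guarded? labels from the output format:

```python
def needs_mitigation(mode):
    """High-likelihood, unguarded failure modes get mitigations first."""
    return mode["likelihood"] == "high" and mode["guarded"] == "no"

# Illustrative failure modes from an inversion pass:
modes = [
    {"name": "API contract change", "likelihood": "high", "guarded": "no"},
    {"name": "Latency doubles", "likelihood": "medium", "guarded": "partially"},
    {"name": "Data loss on deploy", "likelihood": "high", "guarded": "yes"},
]
priority = [m for m in modes if needs_mitigation(m)]
```

Only the first mode survives the filter; each surviving entry would then get a concrete action, an early warning metric, and a reversibility assessment.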
Thinking Triggers
Use these prompts to deepen the analysis:
- "What is the single most likely way this fails?"
- "Who is most likely to be frustrated by this in 6 months, and why?"
- "What do we believe that might be wrong?"
- "If we had to bet against this succeeding, where would we put our money?"
- "What's the optimistic assumption hiding in plain sight?"
Example Applications
- Evaluating a new feature: What user behaviors are we assuming? What if adoption is 10x lower than expected?
- Architecture decision: What if the third-party API we depend on changes its contract? What if latency doubles?
- Hiring or team change: What does this look like if the new hire isn't a fit after 3 months?
- Growth strategy: Imagine we executed perfectly and it still failed — why?