# First Principles Thinking

**Core principle:** Strip away assumptions, conventions, and analogies. Reduce everything to what you actually know to be true, then rebuild from there. Most thinking is by analogy — "we do it this way because that's how it's done." First principles asks: why is it done that way at all?
## The Core Process
### Step 1: Identify the Current Belief or Solution
State clearly what is currently assumed, accepted, or proposed:
- What is the existing approach?
- What problem is it trying to solve?
- What does everyone in this space assume to be true?
### Step 2: Challenge Every Assumption
For each element of the current approach, ask:
- "Is this actually true, or do we believe it because we've always believed it?"
- "Is this a constraint of reality, or a constraint of convention?"
- "What would have to be true for this assumption to be wrong?"
Distinguish between:
- **Physical constraints:** Laws of nature, math, physics — these are real
- **Resource constraints:** Time, money, people — real but changeable
- **Conventional constraints:** "You can't do X" usually means "nobody has done X yet"
- **Inherited assumptions:** Decisions made for past conditions that no longer apply
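This taxonomy can be sketched as a small checklist structure. The class and field names below are illustrative choices for this sketch, not part of the skill itself:

```python
from dataclasses import dataclass
from enum import Enum

class ConstraintType(Enum):
    PHYSICAL = "physical"          # laws of nature, math -- immovable
    RESOURCE = "resource"          # time, money, people -- real but changeable
    CONVENTIONAL = "conventional"  # "nobody has done X yet"
    INHERITED = "inherited"        # decided under past conditions

@dataclass
class Assumption:
    statement: str
    constraint_type: ConstraintType
    evidence: str

    def is_negotiable(self) -> bool:
        # Only physical constraints are truly immovable;
        # every other type can, in principle, be challenged.
        return self.constraint_type is not ConstraintType.PHYSICAL

# A conventional constraint survives only until someone challenges it.
a = Assumption("We need a dedicated QA team", ConstraintType.CONVENTIONAL,
               "testing was historically slow and manual")
assert a.is_negotiable()
```

Tagging each assumption with its type makes the negotiable ones visible at a glance, which is the point of Step 2.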
### Step 3: Identify the Fundamental Truths
What do you actually know, stripped of convention?
- What is the core need being served?
- What are the irreducible requirements?
- What would this look like if you designed it from zero, knowing only what's physically true?
### Step 4: Rebuild From the Ground Up
Starting only from fundamental truths, reconstruct the solution:
- What's the simplest approach that satisfies the real requirements?
- What would this look like if invented today, with today's capabilities?
- What existing constraints can be eliminated now that you're not inheriting them?
## Output Format
### 🏛️ Current Belief / Approach
State what's being questioned:
- The existing design, strategy, or assumption
- Why it exists (historical or conventional reason)
- What problem it was meant to solve
### 🔬 Assumption Deconstruction
For each major assumption:
| Assumption | Type | Actually true? | Evidence |
|---|---|---|---|
| "We need X to do Y" | Conventional | Maybe not | Reason |
| "This requires Z" | Physical | Yes | Because... |
| "Users expect A" | Inherited | Unvalidated | Never tested |
### 🧱 Fundamental Truths Identified
What do we actually know, independent of convention?
- **Core need:** [The real underlying need being served]
- **Hard constraints:** [What is genuinely immovable]
- **Validated facts:** [What has been empirically confirmed]
### 🔨 Rebuilt Solution
Starting from fundamentals:
- What does the solution look like without inherited assumptions?
- What changes dramatically?
- What stays the same (and why — what fundamental truth supports it)?
- What's now possible that wasn't in the old frame?
### ⚠️ Assumption Risks
Which surviving assumptions are highest-risk?
- If any single assumption proves wrong, what breaks?
- Which assumptions should be validated before committing?
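One illustrative way to rank surviving assumptions is to score each by how likely it is to be wrong and how much breaks if it is. The scoring scheme and the numbers below are assumptions of this sketch, not part of the skill:

```python
# Rank surviving assumptions by expected damage:
# (probability the assumption is wrong) x (impact if it is wrong).
# Probabilities and the 1-5 impact scale here are illustrative.

assumptions = [
    # (statement, p_wrong: 0.0-1.0, impact_if_wrong: 1-5)
    ("Users expect A",    0.6, 4),  # inherited, never tested
    ("This requires Z",   0.1, 5),  # physical, well evidenced
    ("We need X to do Y", 0.4, 2),  # conventional
]

def risk_score(p_wrong: float, impact: int) -> float:
    return p_wrong * impact

ranked = sorted(assumptions, key=lambda a: risk_score(a[1], a[2]), reverse=True)

# Validate the top of this list before committing.
for statement, p, impact in ranked:
    print(f"{risk_score(p, impact):.1f}  {statement}")
```

The untested inherited assumption outranks the well-evidenced physical one even though its impact is lower, which matches the question above: validate first whatever is both shaky and load-bearing.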
## Thinking Triggers
- "What is this actually trying to accomplish at the most basic level?"
- "If we were building this today with no legacy, what would we do?"
- "Is this a law of nature or a law of habit?"
- "Who decided this was the right way, and what were their constraints?"
- "What would a brilliant outsider — who doesn't know our conventions — suggest?"
- "Are we solving the problem, or are we solving our version of the problem?"
## Analogy vs. First Principles
Most thinking operates by analogy:
"We do it like company X does it" / "The industry standard is Y" / "That's how it's always been done"
Analogical thinking is fast and usually adequate, but it inherits the constraints and mistakes of the original. When something is fundamentally broken, or when you need a step-change improvement rather than an incremental one, analogy will not get you there.
First principles is slower but the only path to genuinely novel solutions.
## Example Applications
- "Should our agent pipeline be sequential?" → Why sequential? What's the fundamental constraint? Is it ordering of dependencies, or just convention borrowed from waterfall?
- "We need a dedicated QA team" → Is QA a separate function by necessity, or because testing was historically slow and manual?
- "Our API needs versioning" → What's the actual need? Backward compatibility. What's the minimum mechanism that provides it, built from scratch?
- "We need standups every day" → What's the fundamental need? Coordination. What are all the ways to achieve that, unconstrained by "meeting" as a format?