# Second-Order Thinking
First-order asks "what happens next?" Second-order asks "and then what?" Most consequential effects — intended and unintended — live in the second and third order.
## The Core Loop
For any action or decision, trace the consequence chain:
```
Action
  → 1st-order effect (immediate, obvious)
  → 2nd-order effect (who responds? what changes?)
  → 3rd-order effect (what does that change produce?)
  → ... (stop when effects become negligible or too uncertain)
```
At each level ask:
- Who is affected? (not just the intended target)
- How do they respond? (assume rational actors adapting to the new reality)
- What new equilibrium does that create?
- Does this feed back into the original action?
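The tracing loop above can be sketched in code. This is a minimal, hypothetical illustration — the `Effect` structure and the example chain are made up for this sketch (the chain reuses the induced-demand pattern from the table below), not a prescribed implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Effect:
    """One node in a consequence chain."""
    description: str
    responses: list["Effect"] = field(default_factory=list)  # how actors adapt

def trace(effect: Effect, order: int = 1, max_order: int = 3) -> list[str]:
    """Walk the chain, labeling each effect with its order.
    Stops when the chain runs out or effects become too deep/uncertain."""
    if order > max_order:
        return []
    lines = [f"order {order}: {effect.description}"]
    for response in effect.responses:
        lines.extend(trace(response, order + 1, max_order))
    return lines

# Hypothetical chain: adding highway capacity.
chain = Effect(
    "add highway lane (more capacity)",
    responses=[Effect(
        "drivers shift to the highway",
        responses=[Effect("congestion returns to prior level (induced demand)")],
    )],
)
for line in trace(chain):
    print(line)
```

The `max_order` cutoff encodes the "stop when effects become negligible or too uncertain" rule: deeper orders are possible but rarely worth modeling explicitly.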
## Key Mental Models
### Unintended Consequences
Classic patterns:
| Intervention | 1st Order | 2nd Order (unintended) |
|---|---|---|
| Rent control | Rents don't rise | Landlords convert to condos, housing supply shrinks |
| Add more lanes to highway | More capacity | More drivers, same congestion (induced demand) |
| Reward engineers for lines of code | More code written | Code quality drops, complexity explodes |
| Add more process to prevent mistakes | Fewer mistakes | Slower delivery, process-worship, talent leaves |
| Fix every bug immediately | Cleaner code | Team never works on features, roadmap stalls |
### Goodhart's Law
"When a measure becomes a target, it ceases to be a good measure." Optimizing for a metric shifts behavior to game it — the underlying goal is lost.
Ask: What happens to this metric if people optimize for it directly?
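A toy simulation makes the mechanism concrete. The candidates and numbers below are hypothetical, reusing the lines-of-code example from the table above — the point is only that the option maximizing the proxy metric is not the one maximizing the real goal:

```python
# Hypothetical Goodhart's Law illustration: "lines of code" starts as a
# rough proxy for output, but once it becomes the target, the option that
# maximizes the metric diverges from the one that serves the real goal.
candidates = [
    # (description, lines_of_code, real_value_delivered)
    ("concise, well-tested fix",      40, 10),
    ("reasonable implementation",    120,  8),
    ("copy-pasted, padded solution", 900,  2),
]

by_metric = max(candidates, key=lambda c: c[1])  # optimize the proxy
by_goal   = max(candidates, key=lambda c: c[2])  # optimize the real goal

print(by_metric[0])  # the padded solution wins on the metric
print(by_goal[0])    # the concise fix wins on the goal
```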
### Cobra Effect
Incentives designed to solve a problem can make it worse. (British bounties for dead cobras in India → people bred cobras for bounties.)
Ask: Could the incentive structure be gamed to produce more of what we're trying to eliminate?
### Equilibrium Shifts
Systems seek equilibrium. When disturbed, they rebalance — often canceling out your intervention.
Ask: What new equilibrium does this create? Better or worse than the current one?
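The induced-demand row from the table above can be run as a toy equilibrium model. All functions and numbers here are hypothetical — the sketch only shows the rebalancing dynamic, not real traffic engineering:

```python
# Toy equilibrium model (all numbers hypothetical): drivers keep switching
# to a road while it beats the alternative route, so extra capacity is
# absorbed until travel time settles back at the same equilibrium.
def travel_time(drivers: float, capacity: float) -> float:
    return 10.0 + 20.0 * (drivers / capacity)  # minutes; congestion term

ALT_TIME = 30.0  # travel time on the alternative route (equilibrium anchor)

def equilibrate(capacity: float, drivers: float = 1000.0) -> tuple[float, float]:
    for _ in range(1000):
        if travel_time(drivers, capacity) < ALT_TIME:
            drivers += 1.0  # road is faster -> one more driver switches
        else:
            break
    return drivers, travel_time(drivers, capacity)

d1, t1 = equilibrate(capacity=1000.0)
d2, t2 = equilibrate(capacity=1500.0)  # add lanes: 50% more capacity
# More drivers absorb the new capacity; travel time returns to ALT_TIME.
```

The intervention changes *who* is on the road (`d2 > d1`) but not the travel time the system settles at — exactly the "new equilibrium cancels the intervention" pattern.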
## Output Format
### 🎯 First-Order Effects
The immediate, obvious results of the action:
- What changes directly?
- Who benefits immediately?
- What problem does this solve?
### 🔄 Second-Order Effects
Who adapts? What do they do in response?
- Actors affected: Who is impacted and how might they respond?
- Behavioral shifts: How does behavior change in response to the new reality?
- New dynamics: What new relationships, incentives, or tensions emerge?
### 🌊 Third-Order Effects (if significant)
What does the second-order response produce?
- Does this reinforce or undermine the original intent?
- Does this create a new problem to solve?
- Does this change the system's equilibrium permanently?
### ⚠️ Unintended Consequences to Watch
List the most likely negative second- and third-order effects, rating each on:
- Probability: Low / Medium / High
- Severity: Minor / Significant / Critical
- Reversibility: Easily reversible / Hard to undo / Irreversible
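The three ratings can be combined into a rough triage score. The multiplicative weighting and the 1–3 scales below are hypothetical conventions for this sketch, not a calibrated model:

```python
# Hypothetical triage scoring for unintended consequences: map each rating
# to 1-3 and multiply, so a single extreme dimension dominates the score.
PROBABILITY   = {"low": 1, "medium": 2, "high": 3}
SEVERITY      = {"minor": 1, "significant": 2, "critical": 3}
REVERSIBILITY = {"easily reversible": 1, "hard to undo": 2, "irreversible": 3}

def risk_score(probability: str, severity: str, reversibility: str) -> int:
    """Score from 1 (safe to ignore) to 27 (redesign before acting)."""
    return (PROBABILITY[probability]
            * SEVERITY[severity]
            * REVERSIBILITY[reversibility])

# A likely, critical, irreversible effect dominates the watch list.
print(risk_score("high", "critical", "irreversible"))   # -> 27
print(risk_score("low", "minor", "easily reversible"))  # -> 1
```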
### 🛡️ Design Adjustments
Modify the action to preserve first-order benefits while mitigating second-order risks:
- Add constraints/safeguards
- Phase the change (test before full rollout)
- Monitor early signals of bad second-order effects
- Build in a reversal mechanism
## Thinking Triggers
Use these to deepen the analysis:
- "Who benefits from the current state and will resist this change?"
- "If this works exactly as intended, what new problem does it create?"
- "What gets optimized for, and what happens when people optimize for it directly?"
- "10 minutes / 10 months / 10 years from now — how does this look different?"
- "What's the equilibrium this produces? Do we want to live there?"
- "Who isn't in the room that this affects?"
## Time Horizons
Apply three horizons:
| Horizon | Question | Typical blind spot |
|---|---|---|
| 10 minutes | What happens immediately? | Usually well-understood |
| 10 months | Who has adapted and how? | Often overlooked |
| 10 years | Long-term equilibrium? | Almost always ignored |
The 10-month window is where most unintended consequences first become visible.
## Example Applications
- "Let's add a KPI for deployment frequency" → Engineers start splitting PRs artificially, code quality drops
- "We should make the AI agent more autonomous" → Fewer interruptions (good) → harder to catch drift → errors compound silently
- "Let's reduce meeting time" → More focus time (good) → alignment gaps emerge → decisions made in silos → rework increases
- "We should charge for the API to reduce abuse" → Less abuse (good) → legitimate experimenters leave → ecosystem shrinks → competitors gain ground