Lateral Thinking
Core principle: Most thinking is vertical — digging deeper in the same direction, refining what already exists. Lateral thinking moves sideways — deliberately escaping the dominant pattern to find a different place to dig. The goal is not a better version of the current solution. It's a different solution entirely.
"You cannot dig a hole in a different place by digging the same hole deeper." — Edward de Bono
When Vertical Thinking Fails
Vertical thinking is efficient when the problem is well-defined and the solution space is known. It fails when:
- The problem keeps recurring despite fixes (the frame is wrong)
- All solutions feel like variations of the same idea
- The best available option is "least bad"
- Everyone in the room agrees on the approach (groupthink + vertical thinking)
- The problem is genuinely novel
When any of these are true, stop digging. Move laterally.
Core Techniques
1. Random Entry
Introduce a completely unrelated stimulus — a random word, image, or object — and force a connection to the problem.
Process:
- Pick a random word (or use one: bridge / fog / anchor / seed / mirror / friction)
- List properties or associations of that word
- Force-connect each property to the problem
- Don't filter — connections that seem absurd often lead somewhere real
Why it works: The random stimulus activates parts of the solution space the brain wouldn't reach through logical progression.
Example:
- Problem: how do we reduce agent pipeline errors?
- Random word: filter
- Associations: removes impurities, selective, layered, passive
- → Idea: add a passive validation layer between agents that only flags, never blocks — removes errors without slowing the flow
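The process above can be sketched mechanically. This is a minimal illustration, not a prescribed tool — the stimulus words and their associations are placeholder content you would replace with your own:

```python
import random

# Illustrative stimulus list; any random-word source works.
STIMULI = {
    "filter": ["removes impurities", "selective", "layered", "passive"],
    "bridge": ["connects two sides", "carries load", "is crossed in both directions"],
    "anchor": ["holds position", "is heavy", "drops and lifts on demand"],
}

def random_entry(problem, rng=None):
    """Pick a random stimulus and emit one forced-connection prompt per property."""
    rng = rng or random.Random()
    word = rng.choice(sorted(STIMULI))
    return [
        f"Problem: {problem}. Stimulus '{word}' {prop} — "
        f"what would a solution with that property look like?"
        for prop in STIMULI[word]
    ]

prompts = random_entry("reduce agent pipeline errors", random.Random(0))
for p in prompts:
    print(p)
```

The point of the sketch is the shape of the move: the stimulus is chosen blind, and every property gets a prompt — none are filtered out before the connection is attempted.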
2. Provocation (Po)
Make a deliberately absurd, impossible, or reversed statement about the problem — then extract useful ideas from it.
Provocations use the operator "Po" (signals it's a provocation, not a claim):
- Po: the agent has no memory at all
- Po: users pay us to make mistakes
- Po: the bottleneck is the solution
- Po: we do the opposite of what we're doing now
Process:
- State the provocation (make it extreme — mild provocations produce mild ideas)
- Ask: "What would have to be true for this to work?"
- Ask: "What intermediate ideas does this generate?"
- Extract the usable concepts, even if the provocation itself is impossible
Example:
- Problem: onboarding takes too long
- Po: users onboard themselves before they meet us
- → Idea: pre-onboarding flow that users complete asynchronously, so the first live interaction starts mid-process, not at the beginning
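Provocations can be generated from a few standard operators — reversal, removal, exaggeration, role swap. A minimal sketch, with illustrative phrasing templates:

```python
def provocations(statement):
    """Turn a plain problem statement into deliberately absurd 'Po' statements."""
    return [
        f"Po: the opposite — we deliberately cause '{statement}'",
        f"Po: removal — the thing behind '{statement}' does not exist at all",
        f"Po: exaggeration — '{statement}' is 100x worse and nobody minds",
        f"Po: role swap — the user solves '{statement}' for us",
    ]

for p in provocations("onboarding takes too long"):
    print(p)
```

Each output is a starting point for the "what would have to be true?" question, not an idea in itself.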
3. Challenge
Question every assumption about why things are done the way they are — not to criticize, but to open alternatives.
Challenge questions:
- "Why is it done this way?"
- "Does it have to be done at all?"
- "Does it have to be done in this order?"
- "Does it have to be done by this person/system?"
- "Does it have to be done at this point in the process?"
Every "because that's how it's done" is a candidate for lateral escape.
Distinguish:
- Necessary constraints: Remove them and the goal disappears (challenge won't help here)
- Arbitrary constraints: Historical, habitual, or inherited — fertile ground for lateral thinking
4. Alternatives (Fixed Point)
Fix the goal but change everything else. List as many alternative ways to achieve the same outcome as possible — without evaluating any of them until the list is complete.
Process:
- State the fixed point: "The goal is [outcome]"
- Generate 10+ routes to that outcome — including the obvious, the strange, and the impractical
- Only evaluate after the full list exists (evaluation during generation kills lateral thinking)
- Look for hybrids between non-obvious options
Quota thinking: Set a number before you start ("we need 15 alternatives") — it forces you past the first 3–4 obvious answers into genuinely new territory.
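The quota rule can be made structural: evaluation is simply unavailable until the quota is met. A small sketch (class and names are illustrative):

```python
class IdeaQuota:
    """Collects alternatives for a fixed goal; refuses evaluation below quota."""

    def __init__(self, goal, quota=15):
        self.goal = goal
        self.quota = quota
        self.ideas = []

    def add(self, idea):
        # No judgment at add time — everything goes on the list.
        self.ideas.append(idea)

    def evaluate(self):
        # Evaluation during generation kills lateral thinking, so block it.
        if len(self.ideas) < self.quota:
            raise RuntimeError(
                f"{len(self.ideas)}/{self.quota} ideas — keep generating"
            )
        return self.ideas

session = IdeaQuota("shared awareness of state", quota=5)
for idea in ["sync meeting", "status page", "ambient dashboard",
             "daily digest", "visual kanban"]:
    session.add(idea)
print(session.evaluate())
```

The design choice worth copying is that `add` never filters and `evaluate` is a separate, gated phase.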
5. Concept Extraction
Abstract the current solution to its core concept, then find other ways to deliver the same concept.
Process:
- Describe the current solution in one sentence
- Extract the underlying concept: "The concept here is [X]"
- List other implementations of that same concept
- Pick the most promising and develop it
Example:
- Current solution: weekly sync meeting to align the team
- Core concept: shared awareness of state
- Alternative implementations: async status page, ambient dashboard, daily digest message, visual kanban, automated diff reports
- → Some of these are faster, quieter, and more persistent than a meeting
6. Reversal
Reverse the problem statement entirely. Then ask what that reversed world looks like, and work backwards to generate ideas.
Process:
- State the problem: "How do we get more users to complete onboarding?"
- Reverse it: "How do we get users to abandon onboarding?"
- List everything that would cause the reversed outcome
- Invert those back into ideas for the original problem
Why it works: The reversed problem is often easier to answer — and inverting the answers surfaces ideas that pure forward thinking wouldn't reach.
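The invert-back step is almost mechanical: each cause of the reversed outcome becomes a candidate to eliminate. A toy sketch, with illustrative content for the onboarding example above:

```python
def invert(causes):
    """Turn answers to the reversed problem into ideas for the original one."""
    return [f"Eliminate: {cause}" for cause in causes]

# Reversed problem: "How do we get users to abandon onboarding?"
abandon_causes = [
    "ask for every detail up front",
    "show no progress indicator",
    "delay the first moment of value",
]
for idea in invert(abandon_causes):
    print(idea)
```

The generated ideas still need development — "eliminate X" is a kernel, not a plan — but the reversal makes the kernels easy to surface.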
Output Format
🔀 Dominant Pattern Identification
Before generating alternatives, name the dominant pattern being broken:
- "The current thinking is: [frame/assumption/direction]"
- "The rut is: [what keeps pulling solutions back to the same place]"
💡 Generated Alternatives
Present ideas grouped by technique used. For each:
- Idea: One-sentence description
- Origin: Which technique generated it (signals it's deliberate, not random)
- Kernel: The useful concept inside, even if the idea as stated is impractical
Don't filter during generation. Quantity first, quality second.
🌱 Most Promising Concepts
After generation, select the 2–4 ideas with the most potential:
- What makes this worth developing?
- What would need to be true for it to work?
- What's the next concrete step to test it?
⚠️ Dominant Pattern Traps to Watch
Note any ideas that are actually vertical (refined versions of the existing approach disguised as new ideas). Flag and set aside — they belong in a different conversation.
Rules for Lateral Thinking Sessions
- Suspend judgment during generation — evaluation kills divergence
- Welcome the absurd — impractical ideas often contain useful kernels
- Quantity before quality — the goal is to move past the first 3 obvious answers
- Build, don't reject — "yes, and..." before "yes, but..."
- Name the dominant pattern first — you can't escape a rut you haven't identified
Thinking Triggers
- "What would this look like if we had no history with the problem?"
- "Who solves a completely different problem in a way that could apply here?"
- "What's the most counterintuitive thing we could do?"
- "If we couldn't use the current approach at all, what would we do?"
- "What would a 10-year-old suggest? What about someone from a completely different industry?"
- "What are we not allowed to question — and why?"
Example Applications
- "We've tried everything to reduce churn" → Challenge every assumption about what "reducing churn" means, then use Reversal to find what causes people to want to leave — inverted back into retention ideas
- "Our pipeline is slow and we don't know how to speed it up" → Random Entry + Provocation to break out of "optimize each step" thinking
- "We need a new feature but everything feels incremental" → Concept Extraction to abstract what users are really hiring current features to do, then explore radically different implementations
- "The team keeps proposing the same solutions in retros" → Fixed Point alternatives with a quota of 15 — forces past the obvious into genuinely new territory