theory-of-constraints
Theory of Constraints (TOC)
Core principle: Every system has exactly one constraint limiting its throughput at any moment. Improving anything that is not the constraint is waste. Find the constraint, exploit it, subordinate everything else to it, elevate it if necessary, then repeat.
The 5 Focusing Steps
Step 1: Identify the Constraint
Find the single bottleneck — the resource, process, or step with the least capacity relative to demand.
How to find it:
- Where does work pile up? (queue buildup = upstream of the constraint)
- Where is there idle time? (downstream of the constraint — starved for input)
- What does everyone wait on? (the constraint is often "that one person" or "that one step")
- What's the longest step in the value stream?
Signal: The constraint is where WIP accumulates and where delays originate.
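Identification can be sketched in a few lines. This is a minimal illustration with made-up numbers: a hypothetical four-step delivery pipeline where each step has a daily capacity, and the constraint is simply the step with the least capacity relative to demand.

```python
# Hypothetical pipeline: units of work each step can finish per day.
capacities = {
    "design": 12,
    "build": 9,
    "review": 4,   # least capacity -> candidate constraint
    "deploy": 15,
}
demand = 10  # units per day arriving at the pipeline

# The constraint is the step with the least capacity.
constraint = min(capacities, key=capacities.get)
print(constraint)  # review

# If the constraint's capacity is below demand, WIP piles up in front of it.
print(capacities[constraint] < demand)  # True
```

In real systems capacities are rarely known this cleanly; the queue-buildup and starvation signals above are how you estimate them.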
Step 2: Exploit the Constraint
Get maximum throughput from the constraint before spending money or making major changes.
Tactics:
- Eliminate waste at the constraint (don't let it sit idle, don't let it process low-value work)
- Protect it with a buffer upstream (so it's never starved)
- Remove it from non-constraint work (free it up to do only what only it can do)
- Reduce defects feeding into it (rework at the constraint is doubly expensive)
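The offloading tactic can be made concrete. In this sketch (all tasks and hours are hypothetical), the constraint has a fixed 8-hour day; exploiting it means spending those hours only on work that genuinely requires the constraint, rather than working the queue first-in-first-out.

```python
CONSTRAINT_HOURS = 8.0  # the constraint's fixed daily capacity

# Illustrative queue; needs_constraint marks work only the constraint can do.
queue = [
    {"task": "approve release", "needs_constraint": True,  "hours": 2.0},
    {"task": "triage tickets",  "needs_constraint": False, "hours": 3.0},
    {"task": "review design",   "needs_constraint": True,  "hours": 2.0},
    {"task": "status report",   "needs_constraint": False, "hours": 2.0},
    {"task": "unblock team B",  "needs_constraint": True,  "hours": 3.0},
]

def constraint_value_hours(tasks):
    """Hours of the constraint's day spent on work only it can do."""
    spent_total, spent_value = 0.0, 0.0
    for t in tasks:
        if spent_total + t["hours"] > CONSTRAINT_HOURS:
            break
        spent_total += t["hours"]
        if t["needs_constraint"]:
            spent_value += t["hours"]
    return spent_value

# Before: the constraint works the queue FIFO, low-value work included.
print(constraint_value_hours(queue))  # 4.0

# After: everything that doesn't need the constraint is offloaded.
print(constraint_value_hours([t for t in queue if t["needs_constraint"]]))  # 7.0
```

Same person, same 8 hours, nearly double the constraint-only output, and no money spent.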
Step 3: Subordinate Everything Else
Make all non-constraint steps serve the constraint, not optimize themselves.
This is counterintuitive: a non-constraint step running at 100% utilization is harmful if it floods the constraint with WIP.
Key shift: Stop measuring local efficiency. Measure constraint throughput as the system metric.
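A toy simulation (rates are illustrative) shows why subordination matters. Upstream can produce 10 items/day and the constraint can absorb 4. Running upstream at full speed maximizes its local efficiency but only grows the queue; pacing releases to the constraint's rate plus a small protective buffer, which TOC calls drum-buffer-rope, bounds WIP without losing any system throughput.

```python
upstream_rate, constraint_rate, days = 10, 4, 20
buffer_target = 8  # ~2 days of constraint work held as protection

# Unpaced: upstream runs at 100%, queue grows 6 items every day.
wip_unpaced = 0
for _ in range(days):
    wip_unpaced += upstream_rate - constraint_rate
print(wip_unpaced)  # 120 items queued after 20 days

# Paced: release only what the constraint can absorb, topping up the buffer.
wip_paced = 0
for _ in range(days):
    release = min(upstream_rate,
                  constraint_rate + max(0, buffer_target - wip_paced))
    wip_paced += release - constraint_rate
print(wip_paced)  # 8 items: WIP holds steady at the buffer target
```

Both runs ship exactly `constraint_rate * days` items; the only difference is 120 items of queued inventory versus 8.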
Step 4: Elevate the Constraint
If steps 2–3 aren't enough, now invest: add capacity, add people, add tooling — but only at the constraint.
Step 5: Repeat
Once the constraint is resolved, it moves. Find the new one. Never let inertia become the constraint.
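The movement of the constraint is easy to demonstrate. In this sketch (capacities and the `find_constraint` helper are hypothetical), elevating the review step, say by adding a second reviewer and tooling, does not remove the system's constraint; it relocates it.

```python
def find_constraint(capacities):
    """The constraint is the step with the least capacity (units/day)."""
    return min(capacities, key=capacities.get)

capacities = {"design": 12, "build": 9, "review": 4, "deploy": 15}
print(find_constraint(capacities))  # review

# Elevate: add a second reviewer and better review tooling.
capacities["review"] = 14
print(find_constraint(capacities))  # build -- the constraint moved
```

After elevation, further investment in review capacity would be waste; the focusing steps start over at "build".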
Output Format
🔍 Constraint Identification
- Identified constraint: [Name the specific step, role, resource, or decision]
- Evidence: What signals point to this being the constraint?
- WIP accumulation point: Where does work pile up?
- Downstream starvation: What's blocked waiting for the constraint?
📊 Throughput Analysis
- Current flow: [Input → Step A → Step B → ... → Output]
- Constraint location in the flow
- Estimated throughput loss due to constraint
⚡ Exploit Actions (no-cost first)
- What waste can be removed at the constraint immediately?
- What low-value work can be offloaded from the constraint?
- What buffers should be added upstream?
🔄 Subordination Changes
- Which non-constraint steps are currently "optimizing locally" and harming system throughput?
- What should they slow down or stop doing?
📈 Elevation Options (if needed)
- Investment options to increase constraint capacity
- Cost/benefit relative to throughput gain
⚠️ False Bottlenecks to Avoid
- Which steps look slow but are actually downstream of the real constraint?
- What "improvements" would be wasted effort?
Common Constraint Types
| Type | Example | Signal |
|---|---|---|
| Capacity constraint | One engineer reviews all PRs | PRs queue up, reviewer is always busy |
| Knowledge constraint | Only one person knows the system | Everything waits on that person |
| Decision constraint | Approvals bottleneck execution | Teams idle waiting for sign-off |
| Handoff constraint | Work crosses team boundaries | Long wait times at team interfaces |
| Policy constraint | Rules prevent fast action | Work is done but can't ship |
| Market constraint | Demand is the limit | System has capacity but no demand |
Key Mental Shifts
- Don't: Optimize every step. Do: Protect and exploit the constraint.
- Don't: Measure local efficiency. Do: Measure global throughput.
- Don't: Start with investment. Do: Exploit first, then elevate.
- Don't: Assume the constraint is fixed. Do: Expect it to move after you fix it.