SystemsThinking

Customization

Before executing, check for user customizations at: ~/.claude/PAI/USER/SKILLCUSTOMIZATIONS/SystemsThinking/

If this directory exists, load and apply any PREFERENCES.md, configurations, or resources found there. These override default behavior. If the directory does not exist, proceed with skill defaults.

MANDATORY: Voice Notification (REQUIRED BEFORE ANY ACTION)

You MUST send this notification BEFORE doing anything else when this skill is invoked.

  1. Send voice notification:

    curl -s -X POST http://localhost:31337/notify \
      -H "Content-Type: application/json" \
      -d '{"message": "Running the WORKFLOWNAME workflow in the SystemsThinking skill to ACTION"}' \
      > /dev/null 2>&1 &
    
  2. Output text notification:

    Running the **WorkflowName** workflow in the **SystemsThinking** skill to ACTION...
    

This is not optional. Execute this curl command immediately upon skill invocation.


SystemsThinking Skill

Structured analysis of complex systems — the tools that reveal why the same problem keeps coming back and where a small change produces a large result. Grounded in Donella Meadows, Peter Senge, Jay Forrester, Russell Ackoff, and the Santa Fe Institute tradition.

Systems thinking is the difference between treating symptoms (patch the bug) and fixing structure (change the feedback loop that keeps producing the bug). Most "debug it harder" attempts fail because they operate at the event layer; the real cause lives 3-4 layers below, in the structure that generates events.

Core Concept

A system is a set of elements interconnected in a way that produces a characteristic behavior over time. Change the elements, and often nothing happens. Change the interconnections or the purpose, and behavior shifts dramatically.

Five axioms this skill operates on:

  1. Behavior is generated by structure. If the same outcome keeps happening, the cause is structural, not a series of unrelated incidents.
  2. Events are visible; structure is not. Most analysis stops at events. Systems thinking walks down.
  3. Feedback loops are the basic unit. Every persistent pattern is one of a small number of loop archetypes.
  4. High-leverage interventions are usually counterintuitive. The obvious fix often makes the problem worse (policy resistance, shifting the burden, fixes that fail).
  5. You can't optimize a part of a system — you can only improve the system. Local optimization often degrades global performance.
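
Axiom 3 can be made concrete with a minimal simulation of the two basic loop types. This is a sketch, not a model — the growth rate, goal, and starting values are arbitrary, chosen only to show the characteristic shapes:

```python
# Minimal sketch of the two feedback loop types (axiom 3).
# All constants are arbitrary assumptions, picked to show the shapes.

def reinforcing(state: float, rate: float = 0.2) -> float:
    """R loop: change is proportional to the state itself -> exponential growth."""
    return state + rate * state

def balancing(state: float, goal: float = 100.0, rate: float = 0.2) -> float:
    """B loop: change is proportional to the gap from a goal -> convergence."""
    return state + rate * (goal - state)

r, b = 10.0, 10.0
for _ in range(30):
    r, b = reinforcing(r), balancing(b)

print(f"reinforcing after 30 steps: {r:.1f}")  # grows without bound
print(f"balancing  after 30 steps: {b:.1f}")   # settles near the goal (100)
```

Every persistent pattern the skill's workflows identify is some composition of these two primitives, usually with delays added.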

Use / Win

When to use:

  • Recurring problems — the same kind of bug, incident, deadline slip, or conflict keeps appearing. Event-level fixes are not working.
  • Unintended consequences — a "fix" produced a new problem, or made the original worse.
  • System design — before committing to an architecture, product strategy, organization structure, or policy.
  • Debugging systemic issues — distributed-system flakiness, performance cliffs, reliability decay, tech-debt accretion.
  • Strategy — understanding where competition, demand, adoption, or resistance actually comes from.
  • Policy, incentives, organization design — any environment where human behavior is an input.
  • Before a large intervention — run the causal loop first; intended effects are rarely the only effects.

What you win:

  • Structural causes instead of blame-the-nearest-event. The real lever is almost never where the symptom appeared.
  • Archetype recognition — most organizational and technical pathologies match one of ~10 patterns. Naming the pattern unlocks the canonical intervention.
  • Leverage-point identification — Meadows' 12 leverage points, ordered. Parameters are low leverage; paradigms are highest. Knowing where to push is the whole game.
  • Unintended-consequence preview — causal loops let you simulate second- and third-order effects before shipping the change.
  • Durable fixes — structural changes don't regress the way symptom patches do.

Default mental model: At Extended+ effort on anything with recurring behavior, organizational dynamics, or cross-component coupling, systems thinking is not optional enrichment — it's how you find the fix that sticks.

Workflow Routing

Route to the appropriate workflow based on the request.

| Workflow | Trigger | File |
| --- | --- | --- |
| Iceberg | "iceberg model", "structural cause", "why does this keep happening", walk from symptom down to structure | Workflows/Iceberg.md |
| CausalLoop | "causal loop", "feedback loop", "connection circle", "map relationships", build a CLD | Workflows/CausalLoop.md |
| FindArchetype | "systems archetype", "recognize this pattern", "fixes that fail", "shifting the burden", "tragedy of the commons" | Workflows/FindArchetype.md |
| FindLeverage | "leverage point", "where to intervene", "highest-leverage change", Meadows' 12 | Workflows/FindLeverage.md |
| ConceptMap | "concept map", "map the entities", "relationship map", Novak-style mapping | Workflows/ConceptMap.md |

Quick Reference

  • 5 workflows — Iceberg, CausalLoop, FindArchetype, FindLeverage, ConceptMap
  • Iceberg layers (top to bottom): Events → Patterns → Structures → Mental Models
  • Feedback loop types: Reinforcing (R) — amplifying / exponential; Balancing (B) — goal-seeking / stabilizing
  • Archetype count: ~10 canonical patterns (Senge, Braun)
  • Leverage points: 12 levels, from parameters (weakest) to paradigm transcendence (strongest) — Meadows

Context files (loaded on demand):

  • Foundation.md — Meadows, Senge, Forrester, Ackoff, Capra; canonical definitions
  • Archetypes.md — the 10 systems archetypes with structure, recognition signs, canonical intervention
  • LeveragePoints.md — Meadows' 12 leverage points with worked examples

Integration

Depends on: nothing — standalone analytical skill.

Works well with:

  • RootCauseAnalysis — RCA is event-layer and pattern-layer; SystemsThinking continues down to structure and mental models. Pair them for deep incident analysis.
  • FirstPrinciples — decompose to axioms, then use SystemsThinking to see how axioms interconnect.
  • IterativeDepth — rotates lenses; SystemsThinking is the structural lens.
  • BeCreative / Ideate — generate intervention candidates after identifying the leverage point.
  • Art — render causal loop diagrams, iceberg diagrams, concept maps.

Examples

Example 1: Recurring incidents

User: "we keep getting paged for the same class of timeout"
→ Iceberg workflow
→ Events: 6 pages in 3 weeks
→ Patterns: all during deploy windows, all touching payments service
→ Structure: auto-scaler cold-start latency > health-check timeout during deploys
→ Mental model: "deploys are safe if tests pass" — but health checks aren't in the test path
→ Fix is structural, not another retry

Example 2: Strategy

User: "why does adding engineers slow us down past team size 12?"
→ FindArchetype workflow
→ Match: "Limits to Growth" archetype
→ Reinforcing loop: more engineers → more output → more hiring
→ Balancing loop: team size → coordination cost → per-engineer output ↓
→ Canonical intervention: attack the balancing loop (coordination mechanism), not the reinforcing one (stop hiring)
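
The two loops in Example 2 can be sketched numerically. The quadratic coordination-cost term (pairwise communication links) and the 0.08 constant are illustrative assumptions, picked so the curve peaks near the team size in the user's question — this is not a measured model:

```python
# Sketch of the "Limits to Growth" structure from Example 2.
# The pairwise-links overhead term and all constants are
# illustrative assumptions, not measured values.

def team_output(n: int, per_engineer: float = 1.0,
                coord_cost: float = 0.08) -> float:
    """Reinforcing side: raw output grows linearly with headcount.
    Balancing side: overhead grows with pairwise links n*(n-1)/2."""
    raw = n * per_engineer
    overhead = coord_cost * n * (n - 1) / 2
    return raw - overhead

# Total output peaks, then declines as the balancing loop dominates.
best_size = max(range(1, 40), key=team_output)
print(best_size, team_output(12), team_output(24))
```

The canonical intervention changes the shape of the overhead term (smaller interfaces, fewer mandatory links), which moves the peak — stopping hiring merely parks the system at it.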

Example 3: Unintended consequences preview

User: "we're about to add a rate limit to stop abuse"
→ CausalLoop workflow
→ Build CLD of users, abusers, support load, legitimate traffic
→ Surface: balancing loop (rate limit ↓ abuse), reinforcing loop (rate limit → legit users retry → total load ↑)
→ Recommend: rate-limit per-identity with reputation scoring, not per-IP
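
The same preview can be done arithmetically before drawing the full CLD. All rates and constants below are hypothetical, chosen only to show how the reinforcing retry loop can swamp the balancing abuse-suppression loop under a blunt per-IP limit:

```python
# Sketch of the two loops surfaced in Example 3. Every constant
# here is a hypothetical assumption for illustration.

def total_load(strictness: float,
               abuse: float = 100.0, legit: float = 1000.0,
               retry_factor: float = 2.0) -> float:
    """strictness in [0, 1]: fraction of traffic blocked per-IP."""
    abuse_after = abuse * (1 - strictness)      # B loop: abuse drops
    blocked_legit = legit * strictness * 0.5    # collateral blocking
    retries = blocked_legit * retry_factor      # R loop: retries add load
    return abuse_after + legit + retries

print(total_load(0.0))  # no limit
print(total_load(0.8))  # strict per-IP limit: total load goes UP
```

Under these (assumed) numbers the "fix" raises total load — which is exactly why the recommendation shifts to per-identity limiting, where collateral blocking (and hence the retry loop) stays small.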

Best Practices

  1. Always walk the iceberg before intervening. Even if you end up fixing at the event layer, knowing the structural cause tells you whether your fix is durable.
  2. Draw the loops. Causal loops are almost always clearer on paper than in prose. Use the Art skill for rendering.
  3. Name the archetype. If the behavior matches a known archetype, the canonical intervention is documented — don't reinvent it.
  4. Leverage-point order matters. Parameters (taxes, quotas, settings) are low leverage; structures and rules are middle; paradigms are highest. Prefer higher where the cost allows.
  5. Second-order effects are not optional. Any non-trivial intervention should be simulated through at least one round of its own feedback loop before shipping.

Gotchas

  • Systems thinking is descriptive, not prescriptive. It reveals structure; it does not tell you what to build. Use it with BeCreative or FirstPrinciples to generate interventions.
  • Don't mistake a list for a system. A system has feedback. If you can't draw at least one loop, you have a list of components, not a system.
  • Blaming the model is the mistake. When a loop says something uncomfortable ("incentives are the cause"), the reaction is often to reject the model. Sit with it.
  • Delay is underrated. Many systemic failures come from delays (between action and feedback). Capture delays explicitly on your diagram.
  • Soft variables count. "Trust," "morale," "perceived safety" are as real as latency numbers in systems work. Don't drop them because they're hard to measure.
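
The delay gotcha is easy to demonstrate: the same balancing loop that converges smoothly when feedback is prompt begins to overshoot and oscillate when it acts on stale information. Gain, goal, and delay length below are arbitrary:

```python
# Sketch of the "delay is underrated" gotcha: a balancing loop
# acting on delayed feedback overshoots and oscillates.
# Gain, goal, and delay length are arbitrary assumptions.

def simulate(delay_steps: int, gain: float = 0.6,
             goal: float = 100.0, steps: int = 40) -> list[float]:
    history = [10.0] * (delay_steps + 1)
    for _ in range(steps):
        observed = history[-1 - delay_steps]  # acting on stale information
        history.append(history[-1] + gain * (goal - observed))
    return history

prompt_feedback = simulate(delay_steps=0)  # converges monotonically to 100
late_feedback = simulate(delay_steps=3)    # overshoots far past the goal
```

This is why the gotcha says to capture delays explicitly on the diagram: the loop polarity alone does not predict the behavior once a delay is in the path.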

Attribution: Frameworks drawn from Donella Meadows (Thinking in Systems, 2008; "Places to Intervene in a System," 1999), Peter Senge (The Fifth Discipline, 1990), Jay Forrester (Industrial Dynamics, 1961), Russell Ackoff (Systems Thinking for Curious Managers), Fritjof Capra (The Web of Life), and the System Dynamics Society tradition.

Execution Log

After completing any workflow, append a single JSONL entry:

    echo '{"ts":"'$(date -u +%Y-%m-%dT%H:%M:%SZ)'","skill":"SystemsThinking","workflow":"WORKFLOW_USED","input":"8_WORD_SUMMARY","status":"ok|error","duration_s":SECONDS}' \
      >> ~/.claude/PAI/MEMORY/SKILLS/execution.jsonl