# SystemsThinking

## Customization
Before executing, check for user customizations at:
~/.claude/PAI/USER/SKILLCUSTOMIZATIONS/SystemsThinking/
If this directory exists, load and apply any PREFERENCES.md, configurations, or resources found there. These override default behavior. If the directory does not exist, proceed with skill defaults.
## MANDATORY: Voice Notification (REQUIRED BEFORE ANY ACTION)
You MUST send this notification BEFORE doing anything else when this skill is invoked.
Send voice notification:

```bash
curl -s -X POST http://localhost:31337/notify \
  -H "Content-Type: application/json" \
  -d '{"message": "Running the WORKFLOWNAME workflow in the SystemsThinking skill to ACTION"}' \
  > /dev/null 2>&1 &
```

Output text notification:

Running the **WorkflowName** workflow in the **SystemsThinking** skill to ACTION...
This is not optional. Execute this curl command immediately upon skill invocation.
## SystemsThinking Skill
Structured analysis of complex systems — the tools that reveal why the same problem keeps coming back and where a small change produces a large result. Grounded in Donella Meadows, Peter Senge, Jay Forrester, Russell Ackoff, and the Santa Fe Institute tradition.
Systems thinking is the difference between treating symptoms (patch the bug) and fixing structure (change the feedback loop that keeps producing the bug). Most "debug it harder" attempts fail because they operate at the event layer; the real cause lives 3-4 layers below, in the structure that generates events.
## Core Concept
A system is a set of elements interconnected in a way that produces a characteristic behavior over time. Change the elements, and often nothing happens. Change the interconnections or the purpose, and behavior shifts dramatically.
Five axioms this skill operates on:
- Behavior is generated by structure. If the same outcome keeps happening, the cause is structural, not a series of unrelated incidents.
- Events are visible; structure is not. Most analysis stops at events. Systems thinking walks down.
- Feedback loops are the basic unit. Every persistent pattern is one of a small number of loop archetypes.
- High-leverage interventions are usually counterintuitive. The obvious fix often makes the problem worse (policy resistance, shifting the burden, fixes that fail).
- You can't optimize a part of a system — you can only improve the system. Local optimization often degrades global performance.
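The first axiom can be made concrete with a minimal sketch (the numbers are illustrative, not from the skill): the same element, a single stock, produces opposite behaviors depending only on the sign of the link feeding back onto it. Structure, not the element, determines the trajectory.

```python
# Same element, opposite structures: flipping one link sign turns
# exponential growth into goal-seeking decay.

def simulate(gain, stock=1.0, steps=10):
    """Iterate stock += gain * stock; the link sign is the structure."""
    history = [stock]
    for _ in range(steps):
        stock += gain * stock
        history.append(stock)
    return history

growth = simulate(gain=+0.2)  # reinforcing link: amplifying behavior
decay = simulate(gain=-0.2)   # balancing link: stabilizing behavior

print(growth[-1] > growth[0])  # True -- same element, exponential growth
print(decay[-1] < decay[0])    # True -- one sign flipped, decay toward zero
```

The `gain` values are hypothetical; the point is that no event-level fact about the stock explains the behavior change, only the interconnection does.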
## Use / Win
When to use:
- Recurring problems — the same kind of bug, incident, deadline slip, or conflict keeps appearing. Event-level fixes are not working.
- Unintended consequences — a "fix" produced a new problem, or made the original worse.
- System design — before committing to an architecture, product strategy, organization structure, or policy.
- Debugging systemic issues — distributed-system flakiness, performance cliffs, reliability decay, tech-debt accretion.
- Strategy — understanding where competition, demand, adoption, or resistance actually comes from.
- Policy, incentives, organization design — any environment where human behavior is an input.
- Before a large intervention — run the causal loop first; intended effects are rarely the only effects.
What you win:
- Structural causes instead of blame-the-nearest-event. The real lever is almost never where the symptom appeared.
- Archetype recognition — most organizational and technical pathologies match one of ~10 patterns. Naming the pattern unlocks the canonical intervention.
- Leverage-point identification — Meadows' 12 leverage points, ordered. Parameters are low leverage; paradigms are highest. Knowing where to push is the whole game.
- Unintended-consequence preview — causal loops let you simulate second- and third-order effects before shipping the change.
- Durable fixes — structural changes don't regress the way symptom patches do.
Default mental model: At Extended+ effort on anything with recurring behavior, organizational dynamics, or cross-component coupling, systems thinking is not optional enrichment — it's how you find the fix that sticks.
## Workflow Routing
Route to the appropriate workflow based on the request.
| Workflow | Trigger | File |
|---|---|---|
| Iceberg | "iceberg model", "structural cause", "why does this keep happening", walk from symptom down to structure | Workflows/Iceberg.md |
| CausalLoop | "causal loop", "feedback loop", "connection circle", "map relationships", build a CLD | Workflows/CausalLoop.md |
| FindArchetype | "systems archetype", "recognize this pattern", "fixes that fail", "shifting the burden", "tragedy of the commons" | Workflows/FindArchetype.md |
| FindLeverage | "leverage point", "where to intervene", "highest-leverage change", Meadows' 12 | Workflows/FindLeverage.md |
| ConceptMap | "concept map", "map the entities", "relationship map", Novak-style mapping | Workflows/ConceptMap.md |
## Quick Reference
- 5 workflows — Iceberg, CausalLoop, FindArchetype, FindLeverage, ConceptMap
- Iceberg layers (top to bottom): Events → Patterns → Structures → Mental Models
- Feedback loop types: Reinforcing (R) — amplifying / exponential; Balancing (B) — goal-seeking / stabilizing
- Archetype count: ~10 canonical patterns (Senge, Braun)
- Leverage points: 12 levels, from parameters (weakest) to paradigm transcendence (strongest) — Meadows
Context files (loaded on demand):
- `Foundation.md` — Meadows, Senge, Forrester, Ackoff, Capra; canonical definitions
- `Archetypes.md` — the 10 systems archetypes with structure, recognition signs, canonical intervention
- `LeveragePoints.md` — Meadows' 12 leverage points with worked examples
## Integration
Depends on: nothing — standalone analytical skill.
Works well with:
- RootCauseAnalysis — RCA is event-layer and pattern-layer; SystemsThinking continues down to structure and mental models. Pair them for deep incident analysis.
- FirstPrinciples — decompose to axioms, then use SystemsThinking to see how axioms interconnect.
- IterativeDepth — rotates lenses; SystemsThinking is the structural lens.
- BeCreative / Ideate — generate intervention candidates after identifying the leverage point.
- Art — render causal loop diagrams, iceberg diagrams, concept maps.
## Examples

### Example 1: Recurring incidents
User: "we keep getting paged for the same class of timeout"
→ Iceberg workflow
→ Events: 6 pages in 3 weeks
→ Patterns: all during deploy windows, all touching payments service
→ Structure: auto-scaler cold-start latency > health-check timeout during deploys
→ Mental model: "deploys are safe if tests pass" — but health checks aren't in the test path
→ Fix is structural, not another retry
### Example 2: Strategy
User: "why does adding engineers slow us down past team size 12?"
→ FindArchetype workflow
→ Match: "Limits to Growth" archetype
→ Reinforcing loop: more engineers → more output → more hiring
→ Balancing loop: team size → coordination cost → per-engineer output ↓
→ Canonical intervention: attack the balancing loop (coordination mechanism), not the reinforcing one (stop hiring)
### Example 3: Unintended consequences preview
User: "we're about to add a rate limit to stop abuse"
→ CausalLoop workflow
→ Build CLD of users, abusers, support load, legitimate traffic
→ Surface: balancing loop (rate limit ↓ abuse), reinforcing loop (rate limit → legit users retry → total load ↑)
→ Recommend: rate-limit per-identity with reputation scoring, not per-IP
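A loop's polarity can be checked mechanically: the product of the link signs around a cycle is positive for a reinforcing loop and negative for a balancing one. A sketch using hypothetical polarities in the spirit of Example 3 (the variable names and signs are assumptions for illustration, not the workflow's output):

```python
from math import prod

# Signed causal links: +1 = variables move in the same direction,
# -1 = opposite direction. Polarities here are illustrative assumptions.
links = {
    ("rate_limit", "abuse"): -1,          # tighter limit suppresses abuse
    ("abuse", "rate_limit"): +1,          # more abuse invites a tighter limit
    ("rate_limit", "legit_retries"): +1,  # throttled legit users retry
    ("legit_retries", "total_load"): +1,  # retries add load
    ("total_load", "rate_limit"): +1,     # load pressure tightens the limit
}

def classify(loop):
    """Product of link polarities: positive -> 'R', negative -> 'B'."""
    return "R" if prod(links[edge] for edge in loop) > 0 else "B"

abuse_loop = [("rate_limit", "abuse"), ("abuse", "rate_limit")]
retry_loop = [("rate_limit", "legit_retries"),
              ("legit_retries", "total_load"),
              ("total_load", "rate_limit")]

print(classify(abuse_loop))  # 'B' -- one negative link: goal-seeking
print(classify(retry_loop))  # 'R' -- all positive: load amplifies itself
```

An odd number of negative links makes a loop balancing; even (including zero) makes it reinforcing. This is why the retry loop is the dangerous one: it has no negative link to stabilize it.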
## Best Practices
- Always walk the iceberg before intervening. Even if you end up fixing at the event layer, knowing the structural cause tells you whether your fix is durable.
- Draw the loops. Causal loops are almost always clearer on paper than in prose. Use the Art skill for rendering.
- Name the archetype. If the behavior matches a known archetype, the canonical intervention is documented — don't reinvent it.
- Leverage-point order matters. Parameters (taxes, quotas, settings) are low leverage; structures and rules are middle; paradigms are highest. Prefer higher where the cost allows.
- Second-order effects are not optional. Any non-trivial intervention should be simulated through at least one round of its own feedback loop before shipping.
## Gotchas
- Systems thinking is descriptive, not prescriptive. It reveals structure; it does not tell you what to build. Use it with BeCreative or FirstPrinciples to generate interventions.
- Don't mistake a list for a system. A system has feedback. If you can't draw at least one loop, you have a list of components, not a system.
- Blaming the model is the mistake. When a loop says something uncomfortable ("incentives are the cause"), the reaction is often to reject the model. Sit with it.
- Delay is underrated. Many systemic failures come from delays (between action and feedback). Capture delays explicitly on your diagram.
- Soft variables count. "Trust," "morale," "perceived safety" are as real as latency numbers in systems work. Don't drop them because they're hard to measure.
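The delay gotcha above can be sketched in a few lines (gain, target, and delay values are illustrative): a balancing loop that converges smoothly with immediate feedback overshoots its target when the feedback signal arrives a few steps late.

```python
# Same goal-seeking structure, two feedback delays: the delayed version
# keeps correcting against stale information and overshoots.

def correct(delay, gain=0.6, target=100.0, steps=30):
    history = [0.0]
    for t in range(steps):
        observed = history[max(0, t - delay)]  # feedback lags by `delay` steps
        history.append(history[-1] + gain * (target - observed))
    return history

prompt = correct(delay=0)  # acts on the current state
late = correct(delay=3)    # acts on a 3-step-old state

print(max(prompt) <= 100.0)  # True -- converges toward the target from below
print(max(late) > 100.0)     # True -- same structure, now it overshoots
```

Nothing about the goal or the gain changed between the two runs; the delay alone flips the behavior, which is why delays deserve their own mark on the diagram.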
Attribution: Frameworks drawn from Donella Meadows (Thinking in Systems, 2008; "Places to Intervene in a System," 1999), Peter Senge (The Fifth Discipline, 1990), Jay Forrester (Industrial Dynamics, 1961), Russell Ackoff (Systems Thinking for Curious Managers), Fritjof Capra (The Web of Life), and the System Dynamics Society tradition.
## Execution Log
After completing any workflow, append a single JSONL entry:
```bash
echo '{"ts":"'$(date -u +%Y-%m-%dT%H:%M:%SZ)'","skill":"SystemsThinking","workflow":"WORKFLOW_USED","input":"8_WORD_SUMMARY","status":"ok|error","duration_s":SECONDS}' >> ~/.claude/PAI/MEMORY/SKILLS/execution.jsonl
```