# THEALGORITHM

## Customization
Before executing, check for user customizations at:
~/.claude/skills/CORE/USER/SKILLCUSTOMIZATIONS/THEALGORITHM/
If this directory exists, load and apply any PREFERENCES.md, configurations, or resources found there. These override default behavior. If the directory does not exist, proceed with skill defaults.
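The check above can be sketched as follows. The path comes from the text; the function name and the null-return convention (null meaning "use skill defaults") are illustrative assumptions, not part of the shipped skill:

```typescript
import { existsSync } from "fs";
import { join } from "path";

// Returns the customization directory if it exists, or null to signal
// that skill defaults should be used. `base` is the user's home directory.
function customizationDir(base: string): string | null {
  const dir = join(base, ".claude/skills/CORE/USER/SKILLCUSTOMIZATIONS/THEALGORITHM");
  return existsSync(dir) ? dir : null;
}
```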
## THE ALGORITHM - Universal Execution Engine

**PURPOSE:** Produce euphoric, highly surprising, exceptional results that solve the problem better than expected.

**PHILOSOPHY:** Move from current state to ideal state using the scientific method. The ISC (Ideal State Criteria) captures what "ideal" looks like, which we execute against, verify against, and iterate against until achieved.

**CORE PRINCIPLE:** Effort classification determines which capabilities are available. Higher effort unlocks more powerful tools.
## Core Mission
The goal of The Algorithm—and indeed of the PAI system overall—is to create Euphoric Surprise at its ability to perform every task in service of helping its principal become the best version of themselves.
This is the north star. Every capability, every phase, every ISC row exists to serve this mission. When the system works correctly, the principal should be genuinely surprised and delighted by how thoroughly, thoughtfully, and effectively their request was fulfilled—not just "done" but done in a way that exceeds expectations and moves them closer to their ideal self.
## Visual Display & Voice Notifications
Use the LCARS-style AlgorithmDisplay for visual feedback and voice announcements:
```bash
# Start algorithm with effort level (shows banner + announces via voice)
bun run ~/.claude/skills/THEALGORITHM/Tools/AlgorithmDisplay.ts start THOROUGH -r "your request"

# Transition phases (updates display + voice announcement)
bun run ~/.claude/skills/THEALGORITHM/Tools/AlgorithmDisplay.ts phase THINK
bun run ~/.claude/skills/THEALGORITHM/Tools/AlgorithmDisplay.ts phase EXECUTE

# Show current status anytime
bun run ~/.claude/skills/THEALGORITHM/Tools/AlgorithmDisplay.ts show

# Show just the effort banner
bun run ~/.claude/skills/THEALGORITHM/Tools/AlgorithmDisplay.ts effort DETERMINED
```
The display shows:
- 🎯 Current effort level (TRIVIAL → DETERMINED) with color-coded banner
- 📊 Phase progression bar (7 phases with completion status)
- 📝 ISC summary (rows pending/active/done)
- 🔊 Voice announcements when transitioning phases
Phase Icons: 👁️ OBSERVE → 🧠 THINK → 📋 PLAN → 🔨 BUILD → ⚡ EXECUTE → ✅ VERIFY → 📚 LEARN
## Quick Start
```bash
# 1. START WITH VISUAL DISPLAY (shows banner + voice announcement)
bun run ~/.claude/skills/THEALGORITHM/Tools/AlgorithmDisplay.ts start STANDARD -r "your request"

# 2. CLASSIFY EFFORT (if not using display start)
bun run ~/.claude/skills/THEALGORITHM/Tools/EffortClassifier.ts --request "your request"
# Or with override: --override DETERMINED
# Or inline: "algorithm effort THOROUGH: your request"

# 3. LOAD CAPABILITIES for effort level
bun run ~/.claude/skills/THEALGORITHM/Tools/CapabilityLoader.ts --effort STANDARD

# 4. CREATE ISC
bun run ~/.claude/skills/THEALGORITHM/Tools/ISCManager.ts create --request "your request"

# 5. TRANSITION PHASES (voice + visual update)
bun run ~/.claude/skills/THEALGORITHM/Tools/AlgorithmDisplay.ts phase THINK
bun run ~/.claude/skills/THEALGORITHM/Tools/AlgorithmDisplay.ts phase PLAN
# ... continues through EXECUTE, VERIFY, LEARN

# 6. MANAGE ISC during EXECUTE
bun run ~/.claude/skills/THEALGORITHM/Tools/ISCManager.ts capability --row 1 -c research.perplexity
bun run ~/.claude/skills/THEALGORITHM/Tools/ISCManager.ts update --row 1 --status DONE
bun run ~/.claude/skills/THEALGORITHM/Tools/ISCManager.ts show
```
## Effort → Capability Matrix

`Capabilities.yaml` is the source of truth. See `Data/Capabilities.yaml`.
| Effort | Models | Thinking | Debate | Research | Agents | Parallel |
|---|---|---|---|---|---|---|
| TRIVIAL | — | — | — | — | — | 0 |
| QUICK | haiku | — | — | — | Intern | 1 |
| STANDARD | haiku, sonnet | deep thinking, FirstPrinciples | — | 1 agent | Engineer, QA, Designer | 1-3 |
| THOROUGH | haiku, sonnet | All | Council | parallel | All + Architect, Pentester | 3-5 |
| DETERMINED | all + opus | All | Council + RedTeam | all | unlimited | 10 |
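The gating rule implied by the matrix (a capability tagged STANDARD+ is available at STANDARD effort and above) can be sketched like this. The function and constant names are illustrative, not taken from `Capabilities.yaml`:

```typescript
// Effort levels in ascending order of capability.
const EFFORT_ORDER = ["TRIVIAL", "QUICK", "STANDARD", "THOROUGH", "DETERMINED"] as const;
type Effort = (typeof EFFORT_ORDER)[number];

// A capability with minimum effort `minEffort` is available when the
// current effort level is at least that high in the ordering above.
function isAvailable(minEffort: Effort, effort: Effort): boolean {
  return EFFORT_ORDER.indexOf(effort) >= EFFORT_ORDER.indexOf(minEffort);
}
```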
## ISC Scale (CRITICAL)
The ISC is NOT limited to a small number of rows. The ISC captures what "ideal" looks like, and for complex problems this can be massive:
| Scale | When | Examples |
|---|---|---|
| 5-10 rows | Quick fixes, simple features | "Fix typo", "Add button" |
| 20-50 rows | Standard development work | "Add dark mode", "Create API endpoint" |
| 50-200 rows | Complex features, refactors | "Redesign auth system", "Add new major feature" |
| 200-1000+ rows | Major projects, DETERMINED effort | "Build entire system", "Security audit" |
ISC rows come from:
- EXPLICIT requirements - What user literally asked for
- INFERRED requirements - From context (TECHSTACKPREFERENCES, past work, etc.)
- IMPLICIT requirements - Security, quality, testing standards
- Research findings - Best practices discovered via research agents
- Anti-patterns - Things to AVOID discovered via research
- Edge cases - Discovered through thinking/analysis
- Verification criteria - How to confirm each requirement is met
The algorithm's capabilities EXPAND the ISC:
- Research agents find best practices → add ISC rows
- Research agents find anti-patterns → add "avoid X" ISC rows
- Council debate surfaces edge cases → add ISC rows
- FirstPrinciples analysis reveals assumptions → add ISC rows
- Verification requirements → add ISC rows for each testable criterion
Higher effort = larger, higher quality ISC. The DETERMINED level can have thousands of ISC rows because we use ALL capabilities to discover everything that "ideal" looks like.
## The 7 Phases
Execute IN ORDER. Each phase mutates the ISC:
| Phase | Action | ISC Mutation | Gate Question |
|---|---|---|---|
| OBSERVE | Understand request + user context | CREATE rows | Do I have 2+ rows? Used context to infer? |
| THINK | Ensure nothing missing | COMPLETE rows | All rows clear, testable? |
| PLAN | Sequence + assign capabilities | ORDER rows + ASSIGN capabilities | Dependencies mapped? Capabilities assigned? |
| BUILD | Make rows testable | REFINE rows | Each row specific enough to verify? |
| EXECUTE | Do the work (spawn agents per capability) | ADVANCE status | Every row has final status? |
| VERIFY | Test each DONE row (skeptical agent) | CONFIRM status | Tested/confirmed each completion? |
| LEARN | Output for user to rate | OUTPUT results | User rates for memory system |
CRITICAL: The LEARN phase does NOT self-rate. User rates outputs for the memory system.
## The ISC Table (FRONT AND CENTER)
Every non-trivial task has an ISC. Display this prominently throughout execution:
```markdown
## 🎯 IDEAL STATE CRITERIA

**Request:** Add dark mode to the settings page
**Effort:** STANDARD | **Phase:** EXECUTE | **Iteration:** 1

| # | What Ideal Looks Like | Source | Capability | Status |
|---|----------------------|--------|------------|--------|
| 1 | Research good patterns | INFERRED | 🔬 perplexity | ⏳ PENDING |
| 2 | Toggle component works | EXPLICIT | 🤖 engineer | 🔄 ACTIVE |
| 3 | Theme state persists | EXPLICIT | 🤖 engineer× | ⏳ PENDING |
| 4 | Uses TypeScript | INFERRED | — | ✅ DONE |
| 5 | Tests pass | IMPLICIT | ✅ qa_tester | ⏳ PENDING |
| 6 | Browser-verified | IMPLICIT | ✅ browser | ⏳ PENDING |

**Legend:** 🔬 Research | 💡 Thinking | 🗣️ Debate | 🔍 Analysis | 🤖 Execution | ✅ Verify | × Parallel
```
**Source types:**
- `EXPLICIT` - User literally said this
- `INFERRED` - Derived from user context (TECHSTACKPREFERENCES, etc.)
- `IMPLICIT` - Universal standards (security, quality)

**Status progression:**
- `PENDING` → `ACTIVE` → `DONE`
- `ADJUSTED` - Modified with reason
- `BLOCKED` - Cannot achieve, triggers loop-back
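Taken together, the source types and statuses above suggest a row shape like the following sketch. The field names are assumptions for illustration; the real `ISCManager.ts` schema may differ:

```typescript
type Source = "EXPLICIT" | "INFERRED" | "IMPLICIT";
type Status = "PENDING" | "ACTIVE" | "DONE" | "ADJUSTED" | "BLOCKED";

interface ISCRow {
  id: number;
  description: string; // what "ideal" looks like for this row
  source: Source;
  capability?: string; // e.g. "research.perplexity"
  status: Status;
}

// Normal forward progression; ADJUSTED and BLOCKED are set
// explicitly with a reason, never advanced into.
function advance(status: Status): Status {
  if (status === "PENDING") return "ACTIVE";
  if (status === "ACTIVE") return "DONE";
  return status;
}
```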
## Execution Flow

### Step 1: EFFORT CLASSIFICATION (REQUIRED FIRST)
```bash
bun run ~/.claude/skills/THEALGORITHM/Tools/EffortClassifier.ts --request "your request"
```

Override methods:
- CLI flag: `--override THOROUGH`
- Inline pattern: `"algorithm effort DETERMINED: build this feature"`
The effort level determines ALL available capabilities.
### Step 2: CAPABILITY LOADING

```bash
bun run ~/.claude/skills/THEALGORITHM/Tools/CapabilityLoader.ts --effort STANDARD
```
Returns available: models, thinking modes, debate systems, research agents, execution agents, verification.
### Step 3: ISC CREATION + CAPABILITY ASSIGNMENT
For each ISC row, select appropriate capability:
```bash
bun run ~/.claude/skills/THEALGORITHM/Tools/CapabilitySelector.ts --row "Research best practices" --effort STANDARD
# Returns: research.perplexity as primary, with icon 🔬

bun run ~/.claude/skills/THEALGORITHM/Tools/ISCManager.ts capability --row 1 -c research.perplexity
```
### Step 4: ORCHESTRATED EXECUTION
Execute in phases based on capability assignments:
```text
PHASE A: RESEARCH (parallel for independent queries)
├─ Row with 🔬 research.perplexity → Spawn PerplexityResearcher
├─ Row with 🔬 research.gemini → Spawn GeminiResearcher
└─ Row with 🔬 research.grok → Spawn GrokResearcher

PHASE B: THINKING (for creative/analysis needs)
├─ Row with 💡 thinking.deep thinking → Invoke BeCreative skill
├─ Row with 🔍 analysis.first_principles → Invoke FirstPrinciples skill
└─ Row with 🗣️ debate.council → Invoke Council skill

PHASE C: EXECUTION (parallel agents)
├─ Row with 🤖 execution.engineer → Spawn Engineer agent
├─ Row with 🤖 execution.architect → Spawn Architect agent (THOROUGH+)
└─ Rows marked × → Run in parallel

PHASE D: VERIFICATION (skeptical, different from executor)
├─ Row with ✅ verification.browser → Browser skill validation
└─ All rows → Skeptical verifier agent (skeptical, meticulous, adversarial traits)
```
**For TRIVIAL (skip algorithm):** Direct response, no ISC, no capability loading. Just answer.
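The routing in Phases A-D above can be sketched as a dispatch on the capability prefix. This helper is illustrative, not part of the shipped tools:

```typescript
type ExecPhase = "RESEARCH" | "THINKING" | "EXECUTION" | "VERIFICATION";

// Maps an assigned capability (e.g. "research.perplexity") to the
// phase that handles it, mirroring the A-D breakdown above.
function phaseFor(capability: string): ExecPhase {
  if (capability.startsWith("research.")) return "RESEARCH";
  if (
    capability.startsWith("thinking.") ||
    capability.startsWith("analysis.") ||
    capability.startsWith("debate.")
  ) {
    return "THINKING";
  }
  if (capability.startsWith("execution.")) return "EXECUTION";
  return "VERIFICATION";
}
```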
## Capability Categories

### Models (compute resources)

- `models.haiku` - Fast, cheap (QUICK+)
- `models.sonnet` - Balanced reasoning (STANDARD+)
- `models.opus` - Maximum intelligence (DETERMINED)
### Thinking Modes

- `thinking.deep thinking` - BeCreative skill for creative solutions (STANDARD+)
- `thinking.tree_of_thought` - Branching exploration (THOROUGH+)
- `thinking.plan_mode` - EnterPlanMode for complex implementations (THOROUGH+)
### Debate Systems

- `debate.council` - 4 agents, collaborative analysis (THOROUGH+)
- `debate.redteam` - 32 agents, adversarial stress-testing (DETERMINED)
### Analysis Modes

- `analysis.first_principles` - Challenge assumptions (STANDARD+)
- `analysis.science` - Hypothesis-driven exploration (STANDARD+)
### Research Agents

- `research.perplexity` - Web research, current events (STANDARD+)
- `research.gemini` - Multi-perspective research (STANDARD+)
- `research.grok` - Contrarian fact-checking (STANDARD+)
- `research.claude` - Academic/scholarly sources (STANDARD+)
- `research.codex` - Technical code pattern research (STANDARD+)
### Execution Agents

- `execution.intern` - Parallel grunt work (QUICK+)
- `execution.engineer` - Implementation/coding (STANDARD+)
- `execution.qa_tester` - Testing/validation (STANDARD+)
- `execution.designer` - UX/UI design (STANDARD+)
- `execution.architect` - System design (THOROUGH+)
- `execution.pentester` - Security testing (THOROUGH+)
### Verification

- `verification.browser` - Web application validation (STANDARD+)
- `verification.skeptical_verifier` - Different agent than executor (STANDARD+)
## Iteration Loop
When VERIFY finds issues:
```text
BLOCKED row
│
├─ Unclear what ideal looks like? → Loop to THINK
├─ Wrong approach? → Loop to PLAN
└─ Execution error? → Loop to EXECUTE
```
Iteration count is bounded by effort level:
- QUICK: 1 iteration max
- STANDARD: 2 iterations
- THOROUGH: 3-5 iterations
- DETERMINED: Unlimited until success
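The caps above can be expressed as a small lookup. Modeling "unlimited" as `Infinity` and taking the upper bound of THOROUGH's 3-5 range are illustrative choices:

```typescript
type Effort = "TRIVIAL" | "QUICK" | "STANDARD" | "THOROUGH" | "DETERMINED";

// Iteration caps from the list above. TRIVIAL skips the
// algorithm entirely, so it gets zero iterations.
function maxIterations(effort: Effort): number {
  switch (effort) {
    case "QUICK": return 1;
    case "STANDARD": return 2;
    case "THOROUGH": return 5; // "3-5"; upper bound shown
    case "DETERMINED": return Infinity; // unlimited until success
    default: return 0; // TRIVIAL
  }
}
```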
## Integration

### Uses
- Agents Skill - AgentFactory for dynamic agent composition
- CORE Skill - User context for ISC inference
- Browser Skill - Web verification in VERIFY phase
- BeCreative Skill - deep thinking for THINK phase
- Council Skill - Multi-perspective debate (THOROUGH+)
- RedTeam Skill - Adversarial analysis (DETERMINED)
- FirstPrinciples Skill - Assumption challenging
- Research Skill - Multi-source research agents
### Memory

- ISC artifacts: `MEMORY/Work/{session}/ISC.md`
- Learnings: `MEMORY/Learning/ALGORITHM/`
- Patterns: `MEMORY/Signals/algorithm-patterns.jsonl`
## Workflow Routing
| Trigger | Action |
|---|---|
| "run the algorithm" | Full execution |
| "use the algorithm" | Full execution |
| "algorithm effort LEVEL" | Force effort level + full execution |
| Complex multi-step request | Auto-invoke if appropriate |
## Enhanced ISC Features

### Verification Paired at Creation
CRITICAL: Every ISC row should have a verification method defined at creation, not after.
```bash
# Add row WITH verification (recommended)
bun run ISCManager.ts add -d "Toggle component works" -s EXPLICIT \
  --verify-method browser --verify-criteria "Toggle visible in settings"
```

Verification methods: `browser`, `test`, `grep`, `api`, `lint`, `manual`, `agent`, `inferred`

See `Data/VerificationMethods.yaml` for selection guidance.
### Agent Claim System
Prevents race conditions when multiple agents work on ISC items.
```bash
# Claim an item
bun run ISCManager.ts claim --row 1 --agent Engineer

# Release when done
bun run ISCManager.ts release --row 1

# See what's available
bun run ISCManager.ts available
```
Claims expire after 30 minutes (stale detection).
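The 30-minute expiry can be sketched as a pure staleness check. The names here are illustrative rather than `ISCManager.ts` internals:

```typescript
const CLAIM_TTL_MS = 30 * 60 * 1000; // claims go stale after 30 minutes

interface Claim {
  row: number;
  agent: string;
  claimedAt: number; // epoch milliseconds
}

// A stale claim can be reclaimed by another agent.
function isStale(claim: Claim, now: number = Date.now()): boolean {
  return now - claim.claimedAt > CLAIM_TTL_MS;
}
```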
### Research Override System
Research findings can BLOCK user assumptions. User must acknowledge.
```bash
# Research agent finds issue
bun run ISCManager.ts research-block --row 2 \
  --reason "Best practice is X, not Y" --source research.perplexity

# User acknowledges
bun run ISCManager.ts acknowledge --row 2 --action OVERRIDE  # or ACCEPT
```
### Nested Algorithm

Complex items spawn child ISCs.
```bash
# Mark item as nested
bun run ISCManager.ts nest --row 5

# Update child status
bun run ISCManager.ts child-status --row 5 --child-status COMPLETE
```
### Interview Protocol
When ideal state is unclear, use structured questions.
```bash
bun run ISCManager.ts interview -r "vague request"
```

The 5 Questions:
1. What does success look like when this is done?
2. Who will use this and what will they do with it?
3. What would make you show this to your friends?
4. What existing thing is this most similar to?
5. What should this definitely NOT do?
## Files

| File | Purpose |
|---|---|
| `Data/Capabilities.yaml` | Source of truth for all capabilities |
| `Data/VerificationMethods.yaml` | Verification method registry |
| `Tools/AlgorithmDisplay.ts` | LCARS visual display + voice announcements |
| `Tools/EffortClassifier.ts` | Classify TRIVIAL→DETERMINED |
| `Tools/CapabilityLoader.ts` | Load + filter capabilities by effort |
| `Tools/CapabilitySelector.ts` | Select capabilities for ISC rows |
| `Tools/ISCManager.ts` | Enhanced: ISC with verification, claims, research, nesting |
| `Tools/TraitModifiers.ts` | Effort → trait mappings |
| `Phases/*.md` | Detailed phase documentation (Observe.md has interview protocol) |
| `Reference/CapabilityMatrix.md` | Effort → capability documentation |
## The Purpose
Produce euphoric, highly surprising, exceptional results that solve the user's problem better than expected.
The ISC captures what "ideal" looks like. Effort determines available capabilities. Execute against it. Verify against it. Iterate until achieved.
This is not documentation. This is a command. Execute the algorithm.