# Rob Pike's 5 Rules of Programming
## The Rules
1. You can't tell where a program is going to spend its time. Bottlenecks occur in surprising places. Don't guess — prove it.
2. Measure. Don't tune for speed until you've measured. Even then, don't unless one part of the code overwhelms the rest.
3. Fancy algorithms are slow when n is small, and n is usually small. Big-O doesn't matter when constants dominate. Use Rule 2 first.
4. Fancy algorithms are buggier than simple ones. Use simple algorithms and simple data structures.
5. Data dominates. Choose the right data structures and the algorithms become self-evident. "Write stupid code that uses smart objects."
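Rule 5 is the most actionable of the five. A minimal sketch (Python, chosen here purely for illustration; `word_counts` is a made-up example, not from the source): once the right structure is picked, the counting "algorithm" disappears.

```python
from collections import Counter

# Rule 5 in miniature: choose the right data structure and the
# algorithm becomes self-evident. Counting frequencies needs no
# clever algorithm once a Counter (a dict of counts) is chosen.
def word_counts(words):
    return Counter(words)

counts = word_counts(["go", "rob", "go"])
print(counts["go"])  # go appears twice
```

The "smart object" (`Counter`) carries the logic; the calling code stays simple.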
## How to Apply
### Before Any Optimization
#### Step 0: Check for Existing Instrumentation
Before asking "have you measured?", determine whether measurement is even possible right now.
Scan the codebase for signs of existing instrumentation:
- Logging: look for logger imports, log calls, structured logging libraries
- Profiling: look for profiler imports, benchmark files, tracing setup
- Timing: look for duration measurements, stopwatch patterns, timing decorators
- APM/Observability: look for metrics exports, spans, trace contexts
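The scan above can be automated with a crude heuristic. This is a hedged sketch in Python; the `SIGNALS` patterns and the `scan_for_instrumentation` name are illustrative inventions, not a prescribed tool, and a real project needs ecosystem-specific patterns.

```python
import re
from pathlib import Path

# Illustrative regexes for the four instrumentation categories.
# These are heuristics, not an exhaustive or authoritative list.
SIGNALS = {
    "logging":   re.compile(r"import logging|getLogger|structlog|zap\."),
    "profiling": re.compile(r"cProfile|pprof|py-spy|Benchmark"),
    "timing":    re.compile(r"perf_counter|Stopwatch|@timed"),
    "apm":       re.compile(r"opentelemetry|start_span|prometheus|statsd"),
}

def scan_for_instrumentation(root):
    """Return {category: [files]} for files matching any signal."""
    found = {}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for kind, pattern in SIGNALS.items():
            if pattern.search(text):
                found.setdefault(kind, []).append(str(path))
    return found
```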
Then ask the user:
- If instrumentation exists: "I found logging/profiling in [locations]. Are there specific areas you suspect are slow, or should we look at what the existing measurements tell us?"
- If instrumentation is missing or sparse: "There's no measurement in place to prove where time is being spent. Before optimizing anything — where do you suspect the bottleneck is? Let's add measurement there first, then let the data decide."
The goal is NOT to prescribe a specific tool — Claude already knows the right profiling approach for the language. The goal is to make sure measurement exists before any optimization conversation continues. If there is nothing to measure with, the first action is adding instrumentation, not changing code.
#### Step 1: Ask the Measurement Questions
Stop and ask these questions in order:
1. "Have I measured?" — If no, measure first. Any optimization without measurement data is premature. Use whatever profiling tool is natural for the project's language and ecosystem.
2. "Does one part overwhelm the rest?" — If no single area dominates, there is nothing worth optimizing. Small improvements spread across many areas rarely matter.
3. "What's n?" — If n is small (and it usually is), the simple O(n²) approach likely beats the clever O(n log n) one due to constants, cache behavior, and implementation complexity.
4. "Is this a data structure problem?" — Before changing the algorithm, consider whether a different data structure makes the problem trivial. The right structure often eliminates the need for a clever algorithm entirely.
5. "Is the added complexity worth it?" — Simple code that is 10% slower is almost always preferable to clever code that is fragile and hard to maintain.
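The "What's n?" question can be answered with a ten-line experiment instead of intuition. A hedged sketch in Python; the duplicate-check functions are stand-ins, and exact timings vary by machine.

```python
import timeit

# Stand-in pair: a naive O(n^2) duplicate check vs. a set-based O(n) one.
def has_dup_quadratic(xs):
    return any(x == y for i, x in enumerate(xs) for y in xs[:i])

def has_dup_set(xs):
    return len(set(xs)) != len(xs)

xs = list(range(100))  # n = 100: small, as Rule 3 predicts it usually is
t_quad = timeit.timeit(lambda: has_dup_quadratic(xs), number=200)
t_set = timeit.timeit(lambda: has_dup_set(xs), number=200)
print(f"quadratic: {t_quad:.4f}s  set-based: {t_set:.4f}s over 200 calls")
# Whichever wins on a given machine, both finish in microseconds per
# call at this n, so neither justifies extra code complexity.
```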
### Anti-Patterns to Block
When you catch yourself or the user doing any of these, STOP and redirect:
| Impulse | Rule violated | Response |
|---|---|---|
| "This loop looks slow, let me optimize it" | Rule 1 | Have you profiled? The bottleneck may be elsewhere entirely. |
| "Let me add a cache here" | Rule 2 | Measure first. Does this path actually dominate runtime? |
| "Let me use a B-tree / trie / skip list" | Rule 3 | What's n? If small, a sorted slice + binary search wins. |
| "Let me implement a custom allocator" | Rule 4 | Start simple. Measure. Only get fancy if data forces you. |
| "The algorithm is O(n²), needs fixing" | Rule 3 | What's n? O(n²) with n=100 is 10μs. Measure first. |
| "Let me parallelize this" | Rule 2 | Is this actually CPU-bound? Measure. Often it's I/O. |
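Every response in the table reduces to "profile first." A minimal sketch using Python's standard `cProfile`/`pstats` modules (the `workload` function is a stand-in for whatever the impulse targeted):

```python
import cProfile
import io
import pstats

def workload():
    # Stand-in for the code you are tempted to optimize.
    return sum(i * i for i in range(50_000))

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())  # the top rows show where time actually went
```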
### When Optimization IS Justified
Proceed with optimization only when ALL of these are true:
- You have measurement data showing a specific bottleneck
- That bottleneck dominates overall runtime (not just 5-10% of it)
- The proposed fix is the simplest change that addresses the measured problem
- You will re-measure after the change to confirm improvement
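The last two criteria can be enforced mechanically: keep the old and new paths side by side, assert identical behavior, then time both. A hedged Python sketch with made-up `dedupe_before`/`dedupe_after` stand-ins:

```python
import timeit

def dedupe_before(xs):
    out = []
    for x in xs:
        if x not in out:  # measured bottleneck: O(n^2) list membership
            out.append(x)
    return out

def dedupe_after(xs):
    return list(dict.fromkeys(xs))  # simplest fix: dict preserves order

xs = list(range(1000)) * 2
assert dedupe_before(xs) == dedupe_after(xs)  # same behavior first
t_before = timeit.timeit(lambda: dedupe_before(xs), number=20)
t_after = timeit.timeit(lambda: dedupe_after(xs), number=20)
print(f"before: {t_before:.3f}s  after: {t_after:.3f}s")  # re-measure to confirm
```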