# Performance Optimization

## Overview
Measure first, optimize second. Never optimize without data. The bottleneck is almost never where you think it is.
## When to Use
- Code is measurably slow or resource-heavy
- Profiling has revealed specific bottlenecks
- Designing a system with known performance requirements
- Reviewing code for performance before production
## When NOT to Use
- You "feel like" code might be slow but haven't measured
- Premature optimization during initial implementation
- Micro-optimizations that don't move the needle
## The Process
1. **Measure baseline.** Before touching anything, establish a benchmark.
- Record current performance numbers (time, memory, CPU)
- Document the test conditions (data size, concurrency, hardware)
- Save results as your baseline
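The steps above can be sketched as a minimal Python baseline harness. `measure_baseline` and the `sum` workload are hypothetical stand-ins for your own measurement code, not a library API:

```python
# Minimal baseline harness: wall-clock time plus peak traced memory.
import time
import tracemalloc

def measure_baseline(workload, *args):
    tracemalloc.start()
    start = time.perf_counter()
    result = workload(*args)
    elapsed = time.perf_counter() - start
    _current, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed, peak

# Document the test conditions alongside the numbers: here, data size = 100_000.
data = list(range(100_000))
total, seconds, peak_bytes = measure_baseline(sum, data)
```

Save `seconds` and `peak_bytes` together with the run conditions; they become the baseline you compare against in step 5.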
2. **Profile to find the bottleneck.** Use profiling tools appropriate to your stack:
   - Python: `cProfile`, `py-spy`, `memory_profiler`
   - Node.js: `--prof`, `clinic.js`, Chrome DevTools
   - Go: `pprof`
   - Generic: timing instrumentation, APM tools
The bottleneck is the one place where optimization actually matters.
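For the Python row, a small sketch using the stdlib `cProfile` and `pstats`; `slow_part` and `fast_part` are contrived examples, not real workloads:

```python
# Profile a workload and print functions sorted by cumulative time.
import cProfile
import io
import pstats

def slow_part(n):          # contrived hot spot
    return sum(i * i for i in range(n))

def fast_part(n):          # contrived cheap call
    return n + 1

def work():
    slow_part(200_000)
    fast_part(200_000)

profiler = cProfile.Profile()
profiler.enable()
work()
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
report = stream.getvalue()
print(report)  # slow_part should dominate the cumulative column
```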
3. **Form hypothesis.** State explicitly: "I believe X is slow because Y." Don't optimize something you can't explain.
4. **Apply targeted fix.** Change ONE thing at a time. Common high-impact areas:
- Database: N+1 queries, missing indexes, over-fetching
- Network: unnecessary round trips, large payloads, no caching
- Memory: leaks, excessive allocation, large objects in hot paths
- Algorithms: O(n²) where O(n log n) is possible
- I/O: synchronous blocking, missing batching
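The algorithmic case can be illustrated with a before/after sketch: membership tests against a list make the loop quadratic, and one targeted change to a set lookup makes it roughly linear. Function names here are illustrative:

```python
# Before: O(len(a) * len(b)), because `x in b` scans the list every time.
def common_items_quadratic(a, b):
    return [x for x in a if x in b]

# After: one targeted change, a set for O(1) membership tests.
def common_items_linear(a, b):
    lookup = set(b)
    return [x for x in a if x in lookup]
```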
5. **Measure again.** Compare to baseline. Did it improve? If not, revert and try something else.
6. **Document the change.** Record what you changed, why, and the before/after numbers.
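Steps 5 and 6 can be captured in one record. This `change_log_entry` helper and its example values are hypothetical, not a standard format:

```python
# Record what changed, why, and the before/after numbers (step 6).
def change_log_entry(change, reason, before_s, after_s):
    speedup = before_s / after_s
    return (
        f"Change: {change}\n"
        f"Why: {reason}\n"
        f"Before: {before_s:.3f}s  After: {after_s:.3f}s  Speedup: {speedup:.1f}x"
    )

entry = change_log_entry(
    "Added index on orders.customer_id",  # hypothetical change
    "Profiler showed a full table scan on every request",
    before_s=1.200,
    after_s=0.150,
)
```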
## Common High-Impact Wins
| Area | Look For |
|---|---|
| Database | N+1 queries, full table scans, missing indexes |
| Caching | Repeated expensive computations with same inputs |
| Network | Chatty APIs, large payloads, synchronous chains |
| Algorithms | Nested loops over large collections |
| Memory | Objects created in tight loops, large in-memory datasets |
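The caching row can be sketched with the stdlib `functools.lru_cache`; `expensive` is a stand-in for a costly pure function, and the `CALLS` counter only exists to show the cache working:

```python
from functools import lru_cache

CALLS = 0

@lru_cache(maxsize=None)
def expensive(x):
    global CALLS
    CALLS += 1          # counts real computations, not cache hits
    return x * x        # imagine an expensive pure computation here

for _ in range(3):
    expensive(7)        # computed once; the next two calls hit the cache
```

This only pays off when inputs repeat and the function is pure; caching a function with side effects or an unbounded input space introduces a new bug rather than a win.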
## Red Flags
| Thought | Reality |
|---|---|
| "This looks slow" | Measure it. Looks are deceiving. |
| "I'll optimize as I go" | Premature optimization obscures intent. Measure first. |
| "I fixed the bottleneck" | Did you measure? Fix without measurement isn't a fix. |
| "This is the obvious bottleneck" | Profile anyway. You're probably wrong. |