# code-optimizer

## Code Optimization

Analyze code for performance issues, following this priority order:

### Analysis Priorities
- Performance bottlenecks - O(n²) operations, inefficient loops, unnecessary iterations
- Memory leaks - unreleased resources, circular references, growing collections
- Algorithm improvements - better algorithms or data structures for the use case
- Caching opportunities - repeated computations, redundant I/O, memoization candidates
- Concurrency issues - race conditions, deadlocks, thread safety problems
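As a minimal illustration of the first priority, here is a sketch (with hypothetical data and function names) of an O(n²) duplicate scan replaced by an O(n) `Set`-based version:

```javascript
// O(n^2): for each element, indexOf rescans the array from the start.
function hasDuplicatesSlow(ids) {
  return ids.some((id, i) => ids.indexOf(id) !== i);
}

// O(n): a Set gives constant-time membership checks.
function hasDuplicatesFast(ids) {
  const seen = new Set();
  for (const id of ids) {
    if (seen.has(id)) return true;
    seen.add(id);
  }
  return false;
}

const sample = [1, 2, 3, 2];
console.log(hasDuplicatesSlow(sample), hasDuplicatesFast(sample)); // true true
```

The behavior is identical; only the data structure changes, which is the kind of finding the "Algorithm improvements" category should surface.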
## Repo Sync Before Edits (mandatory)

Before creating, updating, or deleting files in an existing repository, sync the current branch with the remote:

```bash
branch="$(git rev-parse --abbrev-ref HEAD)"
git fetch origin
git pull --rebase origin "$branch"
```

If the working tree is not clean, stash first, sync, then restore:

```bash
git stash push -u -m "pre-sync"
branch="$(git rev-parse --abbrev-ref HEAD)"
git fetch origin && git pull --rebase origin "$branch"
git stash pop
```

If origin is missing, pull is unavailable, or rebase/stash conflicts occur, stop and ask the user before continuing.
## Workflow

### Prerequisites

Before making any changes:

- Check the current branch - if already on a feature branch for this task, skip branch creation
- Check the repo for branch naming conventions (e.g., `feat/`, `feature/`, etc.)
- Create and switch to a new branch following the repo's convention, or fall back to `feat/optimize-<target>` (example: `feat/optimize-api-handlers`)
1. Analysis
- Read the target code file(s) or directory
- Identify language, framework, and runtime context (Node.js, CPython, browser, etc.)
- Analyze for each priority category in order
- For each issue found, estimate the performance impact (e.g., "reduces API response from ~500ms to ~50ms")
- Report findings sorted by severity (Critical first)
2. Apply Fixes
- Present the optimization report to the user
- On approval, apply fixes starting with Critical/High severity
- Run existing tests after each change to verify no regressions
- If no tests exist, warn the user before applying changes
## Response Format

For each issue found:

```markdown
### [Severity] Issue Title
**Location**: file:line_number
**Category**: Performance | Memory | Algorithm | Caching | Concurrency
**Problem**: Brief explanation of the issue
**Impact**: Why this matters (performance cost, resource usage, etc.)
**Fix**:
[Code example showing the optimized version]
```
## Step Completion Reports

After completing each major step, output a status report in this format:

```text
◆ [Step Name] ([step N of M] — [context])
··································································
[Check 1]: √ pass
[Check 2]: √ pass (note if relevant)
[Check 3]: × fail — [reason]
[Check 4]: √ pass
[Criteria]: √ N/M met
____________________________
Result: PASS | FAIL | PARTIAL
```

Adapt the check names to match what the step actually validates. Use √ for pass, × for fail, and — to add brief context. The "Criteria" line summarizes how many acceptance criteria were met. The "Result" line gives the overall verdict.
### Skill-specific checks per phase

- Phase: Prerequisites — checks: Branch setup, Naming convention detected, Feature branch created
- Phase: Analysis — checks: Issue detection, Priority categories covered, Impact estimated, Findings sorted by severity
- Phase: Apply Fixes — checks: Fix application, User approval obtained, Existing tests run, No regressions introduced
- Phase: Verify — checks: Performance verified, Test suite passes, Critical issues resolved, Warnings documented
## Severity Levels
- Critical: Causes crashes, severe memory leaks, or O(n³)+ complexity
- High: Significant performance impact (O(n²), blocking operations, resource exhaustion)
- Medium: Noticeable impact under load (redundant operations, suboptimal algorithms)
- Low: Minor improvements (micro-optimizations, style improvements with perf benefit)
## Language-Specific Checks

### JavaScript/TypeScript
- Array methods inside loops (map/filter/find in forEach)
- Missing async/await causing blocking
- Event listener leaks
- Unbounded arrays/objects
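The first check above can be sketched as follows (data and field names are hypothetical): a linear `find` inside an iteration makes the whole pass O(n·m), and a precomputed `Map` brings it back to O(n + m).

```javascript
const users = [{ id: 1, name: "a" }, { id: 2, name: "b" }];
const orders = [{ userId: 1 }, { userId: 2 }, { userId: 1 }];

// Anti-pattern: O(n * m) — find() rescans users for every order.
const slow = orders.map(o => users.find(u => u.id === o.userId).name);

// Fix: build the lookup table once, then each access is O(1).
const byId = new Map(users.map(u => [u.id, u]));
const fast = orders.map(o => byId.get(o.userId).name);

console.log(fast); // → ["a", "b", "a"]
```

With small arrays the difference is negligible, which is why findings like this should be weighted by the expected input size before being reported as High severity.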
### Python
- List comprehensions vs generator expressions for large data
- Global interpreter lock considerations
- Context manager usage for resources
- N+1 query patterns
### Go

- Goroutine leaks (unbounded `go func()` without context cancellation)
- Unnecessary allocations in hot paths (use `sync.Pool`, pre-allocate slices)
- String concatenation in loops (use `strings.Builder`)
- Missing `defer` for resource cleanup
### Rust

- Unnecessary cloning (use references or `Cow<>` instead)
- Lock contention with `Mutex` when `RwLock` would suffice
- Unbounded `Vec` growth without `with_capacity`
- Blocking operations in async contexts
### Java

- Autoboxing in tight loops (use primitive types)
- String concatenation with `+` in loops (use `StringBuilder`)
- Synchronized blocks that are too broad
- Stream API misuse (unnecessary intermediate collections)
### General
- Premature optimization warnings (only flag if genuinely impactful)
- Database query patterns (N+1, missing indexes)
- I/O in hot paths
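For the caching checks, a minimal memoization sketch (the `expensive` function is a hypothetical stand-in for any deterministic computation or repeated I/O result):

```javascript
// Wrap a deterministic function so each argument is computed once.
function memoize(fn) {
  const cache = new Map();
  return (arg) => {
    if (!cache.has(arg)) cache.set(arg, fn(arg));
    return cache.get(arg);
  };
}

let calls = 0;
const expensive = (n) => { calls++; return n * n; }; // stand-in for real work
const fast = memoize(expensive);

fast(4); fast(4); fast(4);
console.log(calls); // 1 — the underlying computation ran once
```

Memoization is only safe when the function is pure and the argument space is bounded; an unbounded cache is itself a memory-leak candidate from the checklist above.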
## Error Handling

### No obvious performance issues found

Solution: Report that the code is already well-optimized. Suggest profiling with runtime tools (e.g., perf, Chrome DevTools, py-spy) to find runtime-specific bottlenecks.

### Target file is too large (>2000 lines)

Solution: Ask the user to specify which functions or sections to focus on. Analyze the most performance-critical paths first.

### Optimization breaks existing tests

Solution: Revert the change immediately. Re-examine the optimization and adjust the approach to preserve existing behavior.
## Acceptance Criteria

A run is acceptable only when all of the following are verifiable:

- Produces an optimization report grouped by severity (Critical, High, Medium, Low) — assert at least one severity bucket appears or the "no issues found" branch fires.
- Each reported issue includes `Location`, `Category`, `Problem`, `Impact`, and `Fix` — verify by checking the rendered template fields are non-empty.
- Impact statement includes a quantitative estimate (e.g., "~500ms → ~50ms", "O(n²) → O(n log n)") — assert the Impact line contains a number, complexity class, or before/after pair.
- Fixes are applied only after explicit user approval — verify the agent emits an approval prompt before any `Edit`/`Write` tool call.
- Existing tests run after each applied fix and the result is reported — verify a test command was executed and its pass/fail status is logged.
- A feature branch following the repo convention is checked out before edits — verify with `git rev-parse --abbrev-ref HEAD` matching `feat/*` or the repo equivalent.
- Each phase emits a Step Completion Report block with `Result: PASS | FAIL | PARTIAL` — assert the block is present in the transcript.
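The quantitative-impact criterion could be checked mechanically along these lines; the exact patterns are an assumption, not part of the spec:

```javascript
// Returns true if an Impact line carries a number, a complexity
// class, or a before/after pair, per the acceptance criteria.
function hasQuantitativeImpact(line) {
  const number = /\d/;            // matches "~500ms", "40x", "1000 users"
  const bigO = /O\([^)]+\)/;      // matches "O(n²)", "O(n log n)"
  const beforeAfter = /(→|->)/;   // matches "~500ms → ~50ms"
  return number.test(line) || bigO.test(line) || beforeAfter.test(line);
}

console.log(hasQuantitativeImpact("For 1000 users, ~2000ms → ~50ms")); // true
console.log(hasQuantitativeImpact("This is slow"));                    // false
```

A checker like this is deliberately permissive; it enforces that *some* quantitative signal is present, not that the estimate is accurate.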
## Expected Output

Given a Node.js file `src/api/handlers.js` with an N+1 query in `listUsers()`, the skill should emit:

````markdown
◆ Analysis (step 1 of 3 — src/api/handlers.js)
··································································
Issue detection: √ pass (3 issues found)
Priority categories: √ pass (Performance, Caching covered)
Impact estimated: √ pass
Findings sorted: √ pass
Criteria: 4/4 met
____________________________
Result: PASS

### [Critical] N+1 query in listUsers
**Location**: src/api/handlers.js:42
**Category**: Performance
**Problem**: `users.forEach(u => db.query(...))` issues one query per user.
**Impact**: For 1000 users, ~1000 round-trips (~2000ms) → 1 batched query (~50ms). 40x speedup.
**Fix**:
```js
const ids = users.map(u => u.id);
const rows = await db.query('SELECT * FROM orders WHERE user_id = ANY($1)', [ids]);
```
````

Expected result: a markdown report with one block per issue, sorted Critical → Low, followed by a phase completion report. See docs/README.md for a longer end-to-end example.
## Edge Cases

- No performance issues found: emit a "code is already well-optimized" note and recommend runtime profiling tools (`perf`, `py-spy`, Chrome DevTools) — do NOT invent low-severity findings to fill the report.
- File exceeds 2000 lines: stop and ask the user which functions/sections to focus on; do not silently truncate.
- Tests are absent: warn the user before applying any fix and require explicit confirmation; never apply changes silently.
- Optimization regresses tests: revert the specific change immediately via `git checkout -- <file>` after `git diff` confirms the scope and the user confirms the revert; never force-push, and back up the diff with `git stash` before discarding so work is recoverable.
- Repo lacks `origin` or rebase fails: stop and ask the user to confirm before any recovery; run `git status` and `git stash --dry-run`-style inspection first, take a backup branch (`git branch backup/pre-recovery`), and never run a destructive `reset --hard` or `rm` without explicit confirmation.
- Mixed-language project: analyze each language with its own checklist; do not apply JavaScript heuristics to Python code.
- Premature optimization candidates: skip micro-optimizations unless a measurable hot path is identified — flag them as Low only when a profile or benchmark backs the claim.