# Code Review
Analyze code changes for bugs, security issues, and correctness problems. Return structured findings.
## Step 1: Determine the Diff Target
Determine what to review based on context:
- Uncommitted changes: `--uncommitted`
- Against a base branch: `--base <branch>`
- Specific commit: `--commit <sha>`
Default to reviewing against the repository's default branch (detect via `gh repo view --json defaultBranchRef --jq '.defaultBranchRef.name'`). If the caller specifies a different target, use that.
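A minimal sketch of the default-branch detection, assuming `gh` and `git` are on PATH; the `origin/HEAD` fallback for when `gh` is unavailable is an assumption, not part of the skill itself:

```shell
# Prefer the GitHub CLI's view of the default branch;
# fall back to the local origin/HEAD symbolic ref if gh fails.
base=$(gh repo view --json defaultBranchRef --jq '.defaultBranchRef.name' 2>/dev/null) \
  || base=$(git symbolic-ref --short refs/remotes/origin/HEAD | sed 's|^origin/||')
echo "review target: $base"
```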
## Step 2: Review Changes
- Run the appropriate diff command to obtain the changes
- For each changed file, read enough surrounding context to understand the change
- Apply the bug determination criteria and return findings in the output format below
## Bug Determination Criteria
Flag an issue only when ALL of these hold:
- It meaningfully impacts the accuracy, performance, security, or maintainability of the code
- The bug is discrete and actionable (not a general codebase issue or combination of multiple issues)
- Fixing it does not demand rigor beyond what exists in the rest of the codebase
- The bug was introduced in the changeset (do not flag pre-existing bugs)
- The author would likely fix the issue if aware of it
- The bug does not rely on unstated assumptions about the codebase or author's intent
- Speculation is insufficient — identify the parts of the code that are provably affected
- The issue is clearly not an intentional change by the original author
## Comment Standards
- Be clear about why the issue is a bug
- Communicate severity accurately — do not overstate
- Keep the body to one paragraph maximum
- No code chunks longer than 3 lines. Use markdown inline code or code blocks
- Explicitly communicate the scenarios, environments, or inputs needed for the bug to arise
- Use a matter-of-fact tone, not accusatory or overly positive
- Write so the author can immediately grasp the idea without close reading
- No flattery ("Great job...", "Thanks for...")
## Priority Levels
- P0 — Drop everything. Blocking release or operations. Only for universal issues that do not depend on assumptions about inputs
- P1 — Urgent. Should be addressed in the next cycle
- P2 — Normal. To be fixed eventually
- P3 — Low. Nice to have
## What to Ignore
- Trivial style unless it obscures meaning or violates documented standards
- Pre-existing issues not introduced by this changeset
## Output Format
Return findings as a list. Format each finding as:
### [P<N>] <title (imperative, ≤80 chars)>
**File:** `<file path>` (lines <start>-<end>)
<one paragraph explaining why this is a bug, what scenarios trigger it, and the impact>
After all findings, add:
## Overall Verdict
**Correctness:** <correct | incorrect>
<1-3 sentence explanation>
If there are no qualifying findings, state that the code looks correct and explain briefly.
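For illustration, a single finding rendered in this format might look like the following; the file path, line range, and bug are entirely hypothetical:

```markdown
### [P1] Guard against nil response before reading the body

**File:** `api/client.go` (lines 42-48)

The new retry path reads `resp.Body` before checking the error returned by
`client.Do`. On any network failure (DNS error, timeout), `resp` is nil and
the read panics, crashing the caller. Checking the error before touching
`resp` avoids this.
```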