# Code Review
Structured code review with severity-labeled feedback. Discovers scope from git state, loads language-specific rules on demand.
## When to Use
- Reviewing uncommitted or committed code changes
- Reviewing a GitHub PR
- Merge request (MR) review or general quality check
- Establishing review standards
## Project Conventions
Before reviewing, check whether the project has a `docs/CodeStyle.md` file (or a similar convention document). If present, load it and treat project-specific rules as overrides to the general rules below.
## Review Process
### Phase 1: Scope Discovery
Determine what to review. The user specifies the mode, or the skill asks.
Mode 1 — Uncommitted changes:
- Run `git status` to collect changed, staged, and untracked files
- Present a compact file list grouped by status
- Ask the user to confirm or exclude files
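As a sketch, Mode 1's file collection can be exercised in a throwaway repository (file names here are illustrative); `git status --porcelain` emits a stable two-column `XY <path>` format that is easy to group by status:

```shell
# Build a disposable repo with one staged and one untracked file.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email reviewer@example.com
git config user.name reviewer
echo 'export {}' > staged.ts && git add staged.ts   # staged -> "A "
echo 'draft' > untracked.md                         # untracked -> "??"
# Porcelain output is stable across git versions and locales.
git status --porcelain
```

The two-character status prefix (`A `, ` M`, `??`, …) is what the compact grouped list is built from.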
Mode 2 — Committed changes:
- Ask the user if they are on the correct branch
- Ask for the diff target: `master`, a branch name, or a commit ID
- Run `git diff <target>...HEAD --name-status` to get the file list
- Present the list and ask the user to confirm or exclude files
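A minimal sketch of the Mode 2 listing in a throwaway repository (branch and file names assumed): the three-dot form diffs against the merge base, so commits that landed on the target after the branch point do not pollute the list.

```shell
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email reviewer@example.com
git config user.name reviewer
echo 'base' > app.ts && git add . && git commit -qm 'base'
target=$(git branch --show-current)   # default branch name varies (master/main)
git checkout -qb feature
echo 'change' >> app.ts               # modified file
echo 'new' > util.ts                  # added file
git add . && git commit -qm 'feature work'
git diff "$target"...HEAD --name-status
```

`--name-status` prefixes each path with `M`/`A`/`D`, which maps directly onto the confirm-or-exclude list shown to the user.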
Mode 3 — GitHub PR:
- Ask the user for the PR number
- Run `gh pr diff <number> --name-only` to get changed files
- Present the list and ask the user to confirm or exclude files
After confirmation, record the confirmed file list and the diff mode. Each file's diff will use the matching method:
- Uncommitted → working tree diff (`git diff` / `git diff --cached`)
- Committed → `git diff <target>...HEAD -- <file>`
- GitHub PR → `gh pr diff <number>`, filtered to the file (`gh pr diff` accepts no pathspec, so extract the file's hunks from the full diff)
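The mapping above can be captured in a small dispatcher; `diff_cmd`, its mode names, and its arguments are hypothetical illustrations, not part of the skill:

```shell
# Hypothetical dispatcher from the recorded review mode to a per-file diff command.
diff_cmd() {
  mode=$1; target=$2; file=$3
  case "$mode" in
    uncommitted) echo "git diff -- $file" ;;
    staged)      echo "git diff --cached -- $file" ;;
    committed)   echo "git diff $target...HEAD -- $file" ;;
    pr)          echo "gh pr diff $target" ;;  # no pathspec support; filter output
    *)           echo "unknown mode: $mode" >&2; return 1 ;;
  esac
}
diff_cmd committed master src/app.ts   # → git diff master...HEAD -- src/app.ts
```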
Also check for `docs/CodeStyle.md` and load it if present.
Note the change size. If the confirmed list exceeds ~400 changed lines, suggest splitting the review into smaller chunks.
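One way to implement the ~400-line check (a sketch in a throwaway repo; the threshold comes from the rule above): `git diff --numstat` prints per-file added/deleted counts that awk can sum, and binary files report `-`, which the `+ 0` coercion treats as zero.

```shell
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email reviewer@example.com
git config user.name reviewer
printf 'a\n' > f.txt && git add . && git commit -qm 'init'
printf 'b\nc\n' > f.txt                 # 2 lines added, 1 deleted vs HEAD
changed=$(git diff HEAD --numstat | awk '{s += $1 + $2} END {print s + 0}')
echo "changed lines: $changed"          # changed lines: 3
if [ "$changed" -gt 400 ]; then
  echo 'Suggest splitting the review into smaller chunks.'
fi
```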
### Phase 2: High-Level Review
- Apply general code quality rules (see below)
- Check file organization and architecture fit
- Group confirmed files by language:
  - `.vue` → Vue group
  - `.ts` / `.js` → TypeScript group
  - `.rs` → Rust group
- Determine the review order (one language at a time)
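The grouping step might look like this hypothetical helper (the bucket names mirror the Language References table below):

```shell
# Hypothetical helper: map a confirmed file to its language group.
lang_group() {
  case "$1" in
    *.vue)     echo 'vue' ;;
    *.ts|*.js) echo 'typescript' ;;
    *.rs)      echo 'rust' ;;
    *)         echo 'other' ;;   # reviewed with general rules only
  esac
}
for f in src/App.vue src/main.ts lib/core.rs README.md; do
  printf '%-10s %s\n' "$(lang_group "$f")" "$f"
done
```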
### Phase 3: Detailed Review
Process one language group at a time to avoid filling context with all references at once.
For each language group:
- Load that language's references (see Language References table below)
- For each file in the group:
  1. Get the diff using the recorded mode
  2. Read the full file for context where needed
  3. Apply language-specific rules from the loaded references
  4. Label each finding with a severity (see below)
  5. Use question-based feedback for non-blocking items
- When the group is done, move to the next language group
If security concerns are spotted in any file, load `references/security.md` at that point.
### Phase 4: Summary & Report
- Run through checklists (general + language-specific)
- Write the review report to `docs/reviews/<branch-name>.md` (create the folder if needed)
  - Get the branch name: `git branch --show-current`
  - If in uncommitted mode with no branch (detached HEAD), use the date: `review-YYYY-MM-DD.md`
- Ask the user to confirm or change the save path before writing
- Report format: see the Report Template below
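The save-path rule can be sketched as follows (throwaway repo; the branch name `feature/login` is assumed for illustration, and a slash in the branch name nests a folder under `docs/reviews/`):

```shell
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email reviewer@example.com
git config user.name reviewer
git commit -qm 'init' --allow-empty
git checkout -qb feature/login
branch=$(git branch --show-current)
if [ -n "$branch" ]; then
  report="docs/reviews/$branch.md"
else
  report="docs/reviews/review-$(date +%Y-%m-%d).md"   # detached HEAD fallback
fi
mkdir -p "$(dirname "$report")"
echo "$report"   # docs/reviews/feature/login.md
```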
## Report Template
# Code Review: <branch-name>
**Date:** YYYY-MM-DD
**Reviewer:** AI (code-review skill)
**Mode:** Uncommitted / Committed (diff target: `<target>`) / GitHub PR #N
**Files reviewed:** N
## Summary
<1-3 sentence overall assessment. State the verdict: looks good / has issues to address / needs significant rework.>
### Stats
| Severity | Count |
|----------|-------|
| blocking | N |
| important | N |
| nit | N |
| suggestion | N |
| praise | N |
## Findings
### <file-path>
- **[severity]** <finding title>
<description — what, why it matters, suggestion>
- **[severity]** <finding title>
<description>
### <file-path>
- **[severity]** ...
## Good Patterns
- <what was done well and why it's worth keeping>
## Checklist
<paste the filled general + language-specific checklists with [x] for passed items>
## Severity Labels
Use one label per finding. Every finding must have a label.
| Label | Meaning | Action required |
|---|---|---|
| `[blocking]` | Bug, security issue, data corruption risk | Must fix before merge |
| `[important]` | Test gap, unclear naming, moderate perf issue | Should fix; discuss if disagree |
| `[nit]` | Style, minor naming, readability | Nice to have, not blocking |
| `[suggestion]` | Alternative approach worth considering | No action needed |
| `[learning]` | Educational — explains why something matters | No action needed |
| `[praise]` | Good work, reinforcement of good patterns | No action needed |
## Feedback Approach
Ask questions instead of making statements:
- "What happens if `items` is an empty array?" instead of "This will fail on empty arrays"
- "How should this behave if the API call fails?" instead of "You need error handling"
Use collaborative language:
- "Consider..." / "Have you thought about..." / "Would it make sense to..."
- Not: "You must..." / "This is wrong" / "Why didn't you..."
Be specific and actionable:
- Include what the problem is, why it matters, and a concrete suggestion
- Reference the relevant rule or pattern when applicable
Balance criticism with praise:
- Call out good patterns with `[praise]`
- Acknowledge thoughtful decisions
## General Code Quality Rules
These apply to any language. Brief checks — detailed patterns live in language references.
- Single responsibility — each function/component does one thing
- Size limits — functions < 50 lines, files < 300 lines, function parameters < 4
- Naming — descriptive names, no single-letter variables (except loop indices), consistent casing
- Dead code — no commented-out code, no unused imports or variables
- Complexity — no nesting deeper than 3 levels, no long boolean chains (extract to named variables)
- Error handling — no swallowed errors, no empty catch blocks, no ignored promise rejections
- DRY — no copy-paste blocks longer than 5 lines (extract to shared function)
- Change scope — the change does one thing; < 400 changed lines preferred
## General Checklist
- [ ] Each function/component has a single clear responsibility
- [ ] No function exceeds ~50 lines
- [ ] No file exceeds ~300 lines
- [ ] Names are descriptive and consistent
- [ ] No commented-out code or unused imports
- [ ] No nesting deeper than 3 levels
- [ ] All errors are handled (no empty catch, no swallowed rejections)
- [ ] No duplicated logic blocks > 5 lines
- [ ] Change is focused — does one thing
## Language References
Load the matching references based on file extensions in the confirmed list.
| File extension | Reference | Load |
|---|---|---|
| `.vue` | `references/vue/` | All files in folder |
| `.ts`, `.js` | `references/typescript/` | All files in folder |
| `.rs` | `references/rust/` | All files in folder |
| Any (security concern) | `references/security.md` | When security issues are spotted |
Each reference folder contains topic files with ❌/✅ patterns and a `checklist.md` for a quick final pass.