# Review Implementation
Perform a thorough code review of an implementation done by another AI agent, using a review context document as the starting point.
## When to Use
- The user provides a review context document (generated by `generate-review-context` or written manually).
- The user asks you to review work done by another AI agent.
- The user wants a list of issues to send back to the implementing agent.
## Inputs
Required:
- A review context document with scope, files created, files modified, and architecture summary.
Also needed (read these yourself):
- The design doc or spec referenced in the context.
- Every file listed in the context (both created and modified).
If the review context is missing, ask the user to provide one or run `generate-review-context` first.
## Process
### Step 1: Read Everything
- Read the review context document completely.
- Read the design doc / spec referenced in the context.
- Read every file listed in Files Created and Files Modified.
- Read any files called out in the Review Notes section.
Do not skip files. Do not skim. Read every file fully before writing any feedback.
### Step 2: Check Architecture Alignment
Compare the implementation against the design doc:
- Does the data flow match what the spec describes?
- Are the abstractions (types, interfaces, components) shaped as specified?
- Are the rules from the spec followed (e.g., "blocks never run SQL", "URL is source of truth")?
- Are there deviations? If so, are they intentional improvements or drift?
Report alignment as a short section. Call out any deviations with specific file paths and line numbers.
### Step 3: Find Issues
Scan every file for these categories, in this priority order:
#### High Priority — Data Correctness
- Do computed values match what the spec says they should compute?
- Are SQL queries correct? Do JOINs, WHERE clauses, and aggregations produce the right result?
- Do debug/display SQL strings accurately mirror the live SQL?
- Are numeric types handled correctly (string-to-number coercion, null handling, division by zero)?
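A minimal sketch of the kind of numeric-handling bug to look for (all names and types are hypothetical, not from any particular codebase):

```typescript
// Hypothetical: database drivers often return NUMERIC columns as strings,
// and aggregates over empty result sets leave nothing to divide by.
interface RevenueRow {
  total: string | null; // e.g. "1234.50" straight from the driver
  orders: number;
}

// Buggy: `+` concatenates the string totals instead of adding numbers,
// and an empty result set or zero order count makes the division NaN or Infinity.
function averageOrderValueBuggy(rows: RevenueRow[]): number {
  const total = rows.reduce((sum, r) => sum + (r.total as unknown as number), 0);
  const orders = rows.reduce((sum, r) => sum + r.orders, 0);
  return total / orders;
}

// Safer: coerce explicitly, treat null as 0, and guard the denominator.
function averageOrderValue(rows: RevenueRow[]): number | null {
  const total = rows.reduce((sum, r) => sum + Number(r.total ?? 0), 0);
  const orders = rows.reduce((sum, r) => sum + r.orders, 0);
  return orders === 0 ? null : total / orders;
}
```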
#### High Priority — Security
- SQL injection vectors (especially `sql.unsafe()`, string interpolation in queries, unvalidated user input in query parameters).
- XSS vectors in rendered content.
- Sensitive data exposure (API keys, connection strings, internal IDs in client bundles).
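For example, the contrast below is the injection pattern to flag versus the parameterized form. This is a hedged sketch assuming a postgres.js-style client, where `sql.unsafe()` runs a raw query string and the tagged template binds parameters:

```typescript
import postgres from "postgres"; // assuming a postgres.js-style client

const sql = postgres(process.env.DATABASE_URL!);

// Injection vector: user input interpolated directly into a raw query string.
async function findUsersUnsafe(name: string) {
  return sql.unsafe(`SELECT * FROM users WHERE name = '${name}'`);
}

// Parameterized: the tagged template sends `name` as a bound parameter.
async function findUsers(name: string) {
  return sql`SELECT * FROM users WHERE name = ${name}`;
}
```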
#### High Priority — Runtime Failures
- Null/undefined access on data that could be empty (empty arrays, missing rows, optional fields).
- Async operations without error handling.
- Client code that assumes DOM APIs exist (breaks in SSR).
- Missing `"use client"` directives on components that use hooks or browser APIs.
#### Medium Priority — Spec Divergence
- Fields, types, or behaviors that exist in the implementation but not the spec (or vice versa).
- Naming mismatches between spec examples and implementation.
- Features the spec says are required but the implementation marks as "coming soon" or skips.
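A hypothetical illustration of the first two bullets, where a field was renamed and another added without the spec mentioning it:

```typescript
// What the spec (hypothetically) describes: a block has id, title, and query.
interface BlockPerSpec {
  id: string;
  title: string;
  query: string;
}

// What was implemented: `title` drifted to `label`, and `cacheTtl` appeared
// with no spec counterpart. Both divergences are worth reporting.
interface BlockAsImplemented {
  id: string;
  label: string;
  query: string;
  cacheTtl: number;
}
```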
#### Medium Priority — Dead Code & Duplicates
- Files that exist but are not imported anywhere. Verify with grep before reporting.
- Exported functions/types with zero callers. Verify with grep before reporting.
- Duplicate utility functions across files (same logic, different names or locations).
- Interfaces that extend another interface but add nothing.
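Hypothetical sketches of the last two bullets (file paths and names are illustrative):

```typescript
// Duplicate utilities: same logic under different names in different files.
// lib/format.ts
export const formatUsd = (amount: number) => `$${amount.toFixed(2)}`;
// components/price-utils.ts
export const toDollars = (amount: number) => `$${amount.toFixed(2)}`;

// Interface that extends another but adds nothing: pure indirection.
interface ChartProps {
  data: number[];
  height: number;
}
interface BarChartProps extends ChartProps {}
```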
#### Low Priority — Code Quality
- No-op expressions (e.g., conditional that evaluates to the same value on both branches).
- Developer-facing text in user-facing UI (architecture notes in footers, implementation comments shown to users).
- Unnecessary directives (e.g., `"use client"` on a module that's only imported by client components).
- Hardcoded values that should track a definition (e.g., a skeleton that assumes 6 cards to match the current block count).
### Step 4: Review Flagged Areas
Go through every item in the Review Notes section of the context document. For each one:
- Investigate the specific concern.
- Report whether it's a real issue or a false alarm.
- If it's a real issue, classify it by priority.
### Step 5: Write the Review
## Output Format
## [Feature Name] Review
### Architecture Alignment — [Good / Concerns / Major Drift]
[2-5 bullet points on how well the implementation matches the spec. Call out any deviations.]
### Issues to Fix
For each issue, use this format:
**[Number]. [Short title] — in [file path:line range]**
[1-3 sentences explaining the issue. Include what's wrong, why it matters, and what the fix should be.]
Group issues by priority:
- **High** — data bugs, security issues, runtime failures
- **Medium** — spec divergence, dead code, duplicates
- **Low** — code quality, cosmetic issues
### Summary of Action Items
| Priority | Item | Location |
|----------|------|----------|
| **High** | One-line description | `file:lines` |
| **Medium** | One-line description | `file:lines` |
| **Low** | One-line description | `file:lines` |
## Rules
- Verify before reporting. Don't report dead code without grepping for imports. Don't report a bug without reading the actual code path. False positives waste the implementing agent's time.
- Include file paths and line numbers. Every issue must be traceable. Use the `file.ts:42` format.
- Explain why, not just what. "This is wrong" is not useful. "This computes margin from `userRevenue`, but the spec says it should use `revenue.total` (payment revenue), which is a different number" is useful.
- Don't suggest rewrites. Report the issue and the expected behavior. Let the implementing agent decide the fix approach.
- Don't nitpick style. Formatting, naming conventions, and code organization preferences are not review issues unless they cause confusion or bugs.
- Prioritize honestly. Not everything is high priority. A no-op `cn()` call is low priority. A SQL injection vector is high priority. Don't inflate severity.
- Be direct. Write for an AI agent that will read this and fix the issues. No pleasantries, no hedging, no "you might want to consider".
- Trust the review context. If the context says certain files are out of scope, don't review them.