mr-code-reviewer
# MR Code Reviewer
Performs deep, context-aware code review of GitLab merge requests. Pulls together: the full diff, commit history, CI status, Jira acceptance criteria, ADRA handbook standards, and local branch inspection — then produces inline GitLab comments plus a concise in-chat summary.
## Inputs

The user will typically provide one or more of:

- MR IID (e.g. `248` or `!248`)
- Project path (e.g. `adra-network/backend/adra-backends`) — infer from the current repo if not provided
- Jira ticket key (e.g. `DEVT-1234`) — extract from the MR title/description if not provided
## Workflow
Run these phases in order. Don't skip phases — each one feeds the next.
### Phase 1: Gather MR Context (GitLab MCP)

Call all four tools in parallel (they're independent):

- `get_merge_request` → title, description, source/target branch, author, state
- `get_merge_request_diffs` → full diff (paginate: `page=1`, `per_page=20`; repeat until all pages are fetched)
- `get_merge_request_commits` → commit messages and authors
- `get_merge_request_pipelines` → CI status (passed/failed/pending)

Required arguments: `id` = project path string, `merge_request_iid` = integer IID.
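For example, a `get_merge_request_diffs` call using the sample values from Inputs would take arguments shaped roughly like this (the project path and IID are the illustrative ones from above, not real):

```json
{
  "id": "adra-network/backend/adra-backends",
  "merge_request_iid": 248,
  "page": 1,
  "per_page": 20
}
```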
**Extracting the Jira ticket:** Look in the MR title, description, and branch name for a pattern like `DEVT-\d+`, `ADRA-\d+`, or similar. If found, use it in Phase 2. If not found, note it and proceed without Jira context.
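As a sketch, the same pattern match can be reproduced with grep — the branch name below is a made-up example:

```shell
# Pull the first Jira-style key (PROJECT-123) out of a branch name.
branch="DEVT-1234-add-invoice-export"   # hypothetical branch name
ticket=$(echo "$branch" | grep -oE '[A-Z]+-[0-9]+' | head -n 1)
echo "$ticket"   # prints DEVT-1234
```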
**Paginating diffs:** The diffs endpoint returns one page at a time. After the first call, check whether there are more pages by comparing the returned entry count to `per_page`; keep fetching until you have all diff entries. For large diffs (>500 lines total), focus the review on changed logic and skip auto-generated or vendor files.
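The stop condition amounts to a simple comparison; sketched in shell with illustrative numbers (no real API call here):

```shell
# Stop check for diff pagination: a page returning fewer entries
# than per_page is the last one. Values below are illustrative.
page=1
per_page=20
count=17   # diff entries returned on the current page
if [ "$count" -lt "$per_page" ]; then
  echo "page $page was the last page"
else
  echo "fetch page $((page + 1))"
fi
```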
### Phase 2: Fetch Ticket Requirements (Jira MCP)

Use user-mcp-router → `getJiraIssue` with:

- `cloudId`: `adrasource.atlassian.net`
- `issueIdOrKey`: the ticket key extracted in Phase 1
- `responseContentFormat`: `markdown`
Extract and keep:
- Acceptance criteria (every line/checkbox in the description or AC field)
- Functional requirements (what the feature must do)
- Out of scope notes (so you don't flag things intentionally excluded)
- Linked tickets (use `searchJiraIssuesUsingJql` if you need related context)
If no ticket found or Jira call fails, continue without AC and note it in the summary.
### Phase 3: Fetch Coding Standards (Handbook MCP)

Use user-mcp-router to pull relevant standards. Run in parallel based on what the diff touches:

- `get-rules context="backend"` → PHP/Laravel conventions (if backend files changed)
- `get-rules context="frontend"` → Vue/Nuxt conventions (if frontend files changed)
- `get-rules context="general"` → always fetch this one
- `search-handbook query="..."` → search for specific patterns seen in the diff (e.g. "feature flags", "RBAC", "queued jobs")
Only load handbook pages that are relevant — don't bulk-load all pages.
### Phase 4: Local Branch Inspection (Shell)

For things the diff alone can't tell you — test coverage, file context, dependency changes — use the Shell tool:

```shell
git fetch origin
git checkout <source_branch>
```

After the review, do not commit or push anything.
Useful local checks depending on what's in the diff:

- Read full file context around changed methods (the diff truncates)
- Check for related tests: `find . -name "*Test*" -path "*/tests/*" | xargs grep -l "ClassName" 2>/dev/null`
- Check for config or migration files that should accompany model/schema changes
- Inspect `composer.json` or `package.json` if dependencies changed
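One such check, sketched over a hard-coded file list — in practice `changed` would come from `git diff --name-only origin/<target>...HEAD`, and the Laravel-style paths are assumptions, not confirmed repo layout:

```shell
# Flag model changes that arrive without an accompanying migration.
changed='app/Models/Invoice.php
app/Services/InvoiceService.php'
if echo "$changed" | grep -q '^app/Models/' && \
   ! echo "$changed" | grep -q '^database/migrations/'; then
  echo "model changed but no migration added"
fi
```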
### Phase 5: Analyze

With everything gathered — diff, commit history, CI status, Jira AC, coding standards, and local context — perform the analysis.
Go through every changed file. For each, consider:
**Requirements alignment**
- Does this implement every acceptance criterion from the Jira ticket?
- Are there ACs that aren't covered by any code change?
- Does anything in the diff contradict or exceed the ticket scope?
**Code quality**
- Are there N+1 queries, unbounded loops, or inefficient data access patterns?
- Is error handling present and appropriate (not swallowed, not overly broad)?
- Are method/variable names clear and consistent with surrounding code?
- Is there dead code, commented-out blocks, or debug statements left in?
- Are there hardcoded values that should be config/env/constants?
**Testing**
- Are there tests for the changed logic?
- Do tests cover the happy path AND edge cases (empty input, nulls, boundary values)?
- Are assertions meaningful (not just `assertTrue(true)`)?
- Are tests isolated (no database bleed, proper use of factories/fakes)?
**Security**
- Is user input validated and sanitized before use?
- Are authorization checks present where data is accessed or mutated?
- Are secrets, credentials, or PII ever logged or exposed?
- Are SQL queries parameterized / using the ORM correctly?
**Performance**
- Are database queries optimized? (eager loading, indexed columns used in WHERE)
- Are expensive operations (API calls, file I/O) done synchronously when they should be async?
- Is caching used where appropriate?
**Documentation & conventions**
- Are public methods/classes documented where non-obvious?
- Does the code follow ADRA handbook conventions for this stack?
- Are feature flags used correctly (see the `feature-flags` skill if applicable)?
- Are translations added for any new user-facing strings?
### Phase 6: Produce Findings

For each issue found, create a structured finding:

- `severity`: Critical | Medium | Low
- `file`: repo-relative path (as it appears in the diff)
- `line`: line number in the new file version
- `line_type`: `"new"` (added line) | `"old"` (removed line) | `"context"` (unchanged)
- `title`: short, specific description of the issue
- `detail`: explanation — what's wrong and why it matters
- `suggestion`: (optional) corrected code or GitLab suggestion block
- `reference`: (optional) Jira AC number, handbook page, or standard
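A filled-in finding might look like this — the file, line, and wording are invented for illustration, not taken from a real MR:

```json
{
  "severity": "Medium",
  "file": "app/Services/InvoiceService.php",
  "line": 42,
  "line_type": "new",
  "title": "N+1 query when loading invoice lines",
  "detail": "Each invoice triggers a separate query for its lines; eager loading collapses this to a constant number of queries.",
  "suggestion": "$invoices = Invoice::with('lines')->get();",
  "reference": "Handbook: backend query-performance guidelines"
}
```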
Severity guide:

- **Critical** — security vulnerability, data loss risk, broken AC, test suite not running
- **Medium** — logic bug, missing edge case, performance issue, missing test for changed logic
- **Low** — style, naming, missing doc, minor improvement
Aim for quality over quantity. Don't create Low findings for things that are clearly intentional or stylistically equivalent.
### Phase 7: Write `review-{IID}.json`

Read the gl-review-json skill at /Users/francoisauclair/.cursor/skills/gl-review-json/SKILL.md and follow its format exactly.

Write the file to the repo root as `review-{MR_IID}.json`. Every finding from Phase 6 becomes one entry. Use GitLab suggestion syntax (`suggestion:-N+M`) when you have a concrete one-click fix.
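For reference, a suggestion block inside a comment body looks like the following; `-N+M` extends the replacement N lines above and M lines below the commented line (the replacement code here is invented):

````markdown
```suggestion:-0+0
return $user->can('invoice.export');
```
````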
Post it:

```shell
bash scripts/gl-review.sh review-{MR_IID}.json
```
### Phase 8: In-Chat Summary Report

After writing the JSON, output a summary directly in chat:

```markdown
## MR !{IID} Review — {MR title}

**Branch:** {source} → {target}
**Author:** {author}
**CI:** {status}
**Jira:** {ticket key} — {ticket title or "not found"}

---

### Requirements Coverage

{For each AC: ✅ covered / ❌ missing / ⚠️ partially covered — one line each}

---

### Findings ({N} total: {X Critical, Y Medium, Z Low})

**Critical**
- [{file}:{line}] {title} — {one-line summary}

**Medium**
- [{file}:{line}] {title} — {one-line summary}

**Low**
- [{file}:{line}] {title} — {one-line summary}

---

### Overall Assessment

{2-4 sentences: Is this MR ready to merge? What are the blockers, if any? Any patterns worth calling out?}

**Verdict:** ✅ Approve | ⚠️ Approve with suggestions | 🚫 Request changes
```
Keep it skimmable. Don't repeat the full detail of each finding — that's in the JSON. The summary is the at-a-glance view.
## MCP Tool Reference

### GitLab (`user-GitLab`)

| Tool | Purpose |
|---|---|
| `get_merge_request` | MR metadata: title, description, author, branches, state |
| `get_merge_request_diffs` | Full diff (paginate with `page`/`per_page`) |
| `get_merge_request_commits` | Commit history |
| `get_merge_request_pipelines` | CI pipeline status |
| `get_workitem_notes` | Existing review comments (avoid duplicating) |
### Jira (`user-mcp-router`)

| Tool | Purpose |
|---|---|
| `getJiraIssue` | Full ticket with AC, description, status |
| `searchJiraIssuesUsingJql` | Find linked/related tickets |
### Handbook (`user-mcp-router`)

| Tool | Purpose |
|---|---|
| `get-rules` | Coding standards for backend, frontend, general |
| `search-handbook` | Find relevant documentation by keyword |
| `get-page` | Retrieve a specific handbook page |
## Common Pitfalls

- **Diff pagination:** Always check that you got all pages. A review based on a partial diff will miss issues.
- **Jira ticket not in MR:** Check the branch name too — branches are often named `DEVT-1234-some-description`.
- **Generated files:** Skip `_ide_helper*.php`, `composer.lock`, `package-lock.json`, and minified assets — flag only if versions look suspicious.
- **Line numbers in suggestions:** The `line` field must match the line in the new file version, not the old. Get it from the `+` lines in the diff.
- **Don't over-comment:** If the MR is clean and straightforward, say so. An empty or minimal review is valid.