code-review
# Code Review
Run CodeRabbit through the bundled script so the workflow stays deterministic.
Use CodeRabbit as the only review source for findings. Do not add manual findings, speculative risks, or diff-based conclusions of your own.
## Available Script
`scripts/coderabbit_review.py` exposes two subcommands:

- `review`: preflight checks, scope selection, optional base resolution, CodeRabbit plain-text execution, artifact generation
- `render`: convert the normalized JSON artifact into the required Markdown report
Resolve that path from the skill directory, not from the repository being reviewed.
Do not assume the target repository contains `scripts/coderabbit_review.py`.
`review` writes these artifacts into `--output-dir`:

- `progress.json`: current wrapper state for polling long-running reviews
- `normalized.json`: normalized wrapper artifact with metadata and captured CodeRabbit output
- `report.md`: rendered Markdown report when available
- `coderabbit.stdout.log`: raw CodeRabbit stdout stream
- `coderabbit.stderr.log`: raw CodeRabbit stderr stream
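Before rendering, it can help to sanity-check that the full artifact set is present in the output directory. A minimal sketch, using placeholder files in place of real wrapper output:

```shell
# Stand-in artifact directory with fake files; the real files are written
# by the review subcommand.
ARTIFACT_DIR="$(mktemp -d)"
for f in progress.json normalized.json report.md coderabbit.stdout.log coderabbit.stderr.log; do
  : > "$ARTIFACT_DIR/$f"
done

# Check every expected artifact exists before handing off to render.
MISSING=0
for f in progress.json normalized.json report.md coderabbit.stdout.log coderabbit.stderr.log; do
  [ -f "$ARTIFACT_DIR/$f" ] || { echo "missing: $f"; MISSING=1; }
done
[ "$MISSING" -eq 0 ] && echo "all artifacts present"
```

Note that `report.md` may legitimately be absent if rendering has not run yet, so treat a missing report as "render pending" rather than a failure.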
Run `python3 <skill-dir>/scripts/coderabbit_review.py --help` or `python3 <skill-dir>/scripts/coderabbit_review.py review --help` if you need the exact interface.
## Required Workflow
- Create a temporary artifact directory.
- Run the deterministic review script against the repository the user wants reviewed.
- If the review is long-running or launched in the background, poll `progress.json` in the artifact directory instead of rerunning the command.
- If `progress.json` reaches `state: "artifacts_ready"` or `review` returns `status: "ok"`, run `render` on the normalized artifact.
- If `progress.json` reaches `state: "error"` or `state: "timed_out"`, report the wrapper failure and include the artifact paths it produced.
- Return the rendered Markdown to the user when available.
Reference flow:

```shell
ARTIFACT_DIR="$(mktemp -d)"

python3 <skill-dir>/scripts/coderabbit_review.py review \
  --repo "$PWD" \
  --output-dir "$ARTIFACT_DIR"

python3 <skill-dir>/scripts/coderabbit_review.py render \
  --input "$ARTIFACT_DIR/normalized.json"
```
When polling:

- prefer `progress.json` over terminal-session liveness
- do not start a second review if the first run's `progress.json` is still updating
- treat `state: "artifacts_ready"` as the success signal for rendering
- if `state: "running"`, the task is still in progress and you must keep polling the same artifact directory
- while `state: "running"`, do not return a status-only final answer and do not summarize intermediate wrapper state unless the user explicitly asks for progress
- only stop polling when the state becomes `artifacts_ready`, `error`, or `timed_out`
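The polling rules above can be sketched as a bounded loop over `progress.json`. This is illustrative only: a background subshell here fakes the wrapper's state transition, and the `state` field is read with a small python3 one-liner:

```shell
# Stand-in for the wrapper: start "running", flip to "artifacts_ready"
# after one second. In real use the review subcommand writes this file.
ARTIFACT_DIR="$(mktemp -d)"
PROGRESS="$ARTIFACT_DIR/progress.json"
echo '{"state": "running"}' > "$PROGRESS"
( sleep 1; echo '{"state": "artifacts_ready"}' > "$PROGRESS" ) &

# Poll the same file until a terminal state appears, with an upper bound
# so a stuck wrapper cannot hang the loop forever.
STATE=""
for i in 1 2 3 4 5 6 7 8 9 10; do
  STATE="$(python3 -c 'import json,sys; print(json.load(open(sys.argv[1]))["state"])' "$PROGRESS" 2>/dev/null)"
  case "$STATE" in
    artifacts_ready|error|timed_out) break ;;
    *) sleep 1 ;;  # still "running" (or mid-write): keep polling
  esac
done
wait
echo "final state: $STATE"   # prints: final state: artifacts_ready
```

The `2>/dev/null` matters: if the poll lands mid-write, the JSON parse fails, `STATE` stays empty, and the loop simply retries on the next iteration.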
## Scope Rules
Prefer explicit scope from the user:

- review local WIP or unstaged work: `--scope uncommitted`
- review committed branch changes: `--scope committed`
- otherwise let the script choose with `--scope auto`
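The scope selection above can be sketched as a small mapping from the user's stated intent to a flag. The intent keywords here are illustrative assumptions, not part of the script's interface; only the three `--scope` values come from this document:

```shell
# Map a user-intent keyword (hypothetical vocabulary) to a --scope flag.
pick_scope() {
  case "$1" in
    uncommitted|wip|unstaged) printf '%s\n' "--scope uncommitted" ;;
    committed|branch)         printf '%s\n' "--scope committed" ;;
    *)                        printf '%s\n' "--scope auto" ;;
  esac
}

pick_scope wip        # prints: --scope uncommitted
pick_scope committed  # prints: --scope committed
pick_scope unclear    # prints: --scope auto
```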
Use `--base <branch>` only when branch comparison matters or the user explicitly names a base branch.
Only pass `--config <path>` when the user explicitly names additional instruction files that exist. Do not scan the repository and invent config arguments for auto-detected guideline files.
## Preconditions
The script already enforces the required checks:
- inside a Git repository
- `HEAD` resolves
- `coderabbit` exists on `PATH`
- `coderabbit auth status --agent` succeeds
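A rough shell equivalent of these checks, for illustration only (the script's actual implementation may differ):

```shell
# Sketch of the four preflight checks; each failure reports and stops.
preflight() {
  git rev-parse --is-inside-work-tree >/dev/null 2>&1 || { echo "not a git repo"; return 1; }
  git rev-parse HEAD >/dev/null 2>&1                  || { echo "HEAD does not resolve"; return 1; }
  command -v coderabbit >/dev/null 2>&1               || { echo "coderabbit not on PATH"; return 1; }
  coderabbit auth status --agent >/dev/null 2>&1      || { echo "coderabbit auth failed"; return 1; }
  echo "preflight ok"
}

# Demonstrate the first failure mode from an empty, non-repo directory.
cd "$(mktemp -d)"
RESULT="$(preflight)"
echo "$RESULT"
```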
If preflight fails, stop and report the script's error. Do not run manual fallback review logic.
## Output Contract
Return the rendered Markdown report produced by `render`.
The report contains:
- summary metadata
- the captured CodeRabbit plain-text review when available
- structured finding sections only if CodeRabbit emits machine-readable findings despite the plain-text request
If the normalized report contains zero structured findings but includes plain-text review output, return that result directly. Do not add extra concerns.
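The zero-findings rule above can be sketched against a toy normalized artifact. The field names `findings` and `plain_text_review` are assumptions about the wrapper's JSON, not a documented schema:

```shell
# Fake normalized artifact: no structured findings, but plain-text output.
NORMALIZED="$(mktemp)"
printf '%s\n' '{"findings": [], "plain_text_review": "LGTM with minor nits."}' > "$NORMALIZED"

# If findings are empty but plain text exists, return the plain text
# directly; do not add extra concerns of your own.
RESULT="$(python3 -c '
import json, sys
data = json.load(open(sys.argv[1]))
if not data.get("findings") and data.get("plain_text_review"):
    print(data["plain_text_review"])
' "$NORMALIZED")"
echo "$RESULT"   # prints: LGTM with minor nits.
```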
## Source Policy
- Use only CodeRabbit output for findings.
- Do not inspect the diff to add new findings.
- Do not override or suppress findings unless they are exact duplicates merged by the script.
- Keep any explanation faithful to the normalized CodeRabbit output.