Review
Independently audit existing code with concern-specific lenses and decide whether it is safe to ship. Review is the gate after verify — the builder proves the change works on the real surface, then review decides whether the change is good.
Principles
- Prefer parallel reviewer personas when the concerns are independent
- Evidence beats taste
- Load shared doctrine from the target repo's guidance files such as AGENTS.md, CLAUDE.md, or repo rules
- Keep the final verdict tied to concrete evidence, not reviewer instinct alone
- Keep findings risk-focused; do not drown the user in low-value nits
- Track reviewer personas internally; include them visibly only when asked or when the harness has compact metadata
- If runtime proof for your own completed change is the goal, hand off to `verify`
Handoffs
- Self-checking a change you just authored, before handing it off for review → use `verify`
- Review is blocked because the repo cannot be booted or exercised reliably → use `agent-readiness`
- Main problem is stale AGENTS.md, README, specs, or repo docs → use `docs`
Before You Start
- Define the scope: file, diff, branch, commit range, or PR
- Load the target repo's guidance files such as AGENTS.md, CLAUDE.md, or repo rules, when present
- Choose reviewer personas from references/reviewer-selection.md
- Decide which personas can run independently in parallel
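As a minimal sketch, the guidance-file load can be scripted; the filenames below are common conventions, and any given repo may use different ones:

```shell
# Print whichever common guidance files exist at the repo root.
# Filenames are conventional defaults, not guaranteed to be present.
for f in AGENTS.md CLAUDE.md; do
  if [ -f "$f" ]; then
    printf '== %s ==\n' "$f"
    cat "$f"
  fi
done
```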
Default personas and conditional add-ons are listed in references/reviewer-selection.md; add conditional personas only when they earn their keep.
Persona shortcuts:
- doc-only or comment-only diffs: use `general` plus `comments`; skip `tests`, `types`, `silent-failures`, and `cleanup` unless the diff actually justifies them
- type-shape or schema changes: add `types`
- dead files, deprecated paths, or obviously unused helpers: add `cleanup` and call out deletion explicitly when warranted
- mock-heavy or shallow tests around risky behavior: make that a finding rather than treating test presence as proof
Workflow
1. Scope nearby risk
Review the requested code, but inspect adjacent behavior when the risk leaks past the named diff.
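One hedged way to spot risk that leaks past the named diff is to grep for references to the changed files elsewhere in the tree. This sketch assumes a git checkout; the `REVIEW_BASE` variable name, the `origin/main` default, and the `*.ts` glob are illustrative choices, not repo facts:

```shell
# For each file changed since the base ref, list other files that mention its basename.
base="${REVIEW_BASE:-origin/main}"   # assumed default; set REVIEW_BASE to the real base ref
for f in $(git diff --name-only "$base"...HEAD); do
  name=$(basename "$f")
  name="${name%.*}"                  # strip the extension to get a searchable symbol-ish name
  printf '== references to %s outside %s ==\n' "$name" "$f"
  grep -rl --include='*.ts' "$name" . | grep -v "$f" || true
done
```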
2. Run reviewer personas
Use parallel subagents when available. Keep each persona concern-focused and independent.
Concrete starting points:
- `git diff --stat <base>...HEAD` to size the change
- `git diff <base>...HEAD -- <path>` to inspect risky files
- targeted tests such as `pnpm test path/to/spec` when behavior claims need proof
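Put together, the starting points above look like this sketch. It builds a throwaway repo so the commands run anywhere; in a real review you would point them at the repo and base ref under review, and the file names are illustrative:

```shell
# Throwaway repo standing in for the one under review; skip this setup in practice.
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email reviewer@example.com && git config user.name reviewer
git commit -q --allow-empty -m "base"
git branch -q base                       # stand-in for the real <base> ref
printf 'let session\n' > session.ts
git add . && git commit -q -m "change"

git diff --stat base...HEAD              # size the change
git diff base...HEAD -- session.ts       # inspect a risky file
# Targeted tests (e.g. `pnpm test path/to/spec`) would run here to back behavior claims.
```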
3. Collect evidence
- Cite exact file references for static findings
- Run the smallest runtime check that changes the verdict when the repo supports it
- If something is unverified, say so explicitly instead of bluffing
- If legacy or dead code is still present, say whether it should be deleted or why it must stay
- If tests mock the main integrations or boundaries, say that the behavior is still unverified on the real surface
4. Synthesize the verdict
Order findings by severity. If no findings are discovered, say that explicitly and mention any residual risk or testing gap. Choose exactly one verdict: ship it, needs review, or blocked.
Output
After review, report a tiny verdict footer:
- verdict
- evidence summary: exact command names or runtime surfaces, not full logs
- unverified areas or readiness gaps
- next: implementation, `verify`, `agent-readiness`, or `docs`
Use those labels explicitly. Do not replace them with softer prose like "safe to merge" or "do not ship today".
Prefer the active harness's best native review representation instead of a prose-heavy wall of text.
Keep the final answer short:
- Put detailed issue text, file references, and line numbers in native findings or the fallback findings list
- Do not repeat native finding details in the verdict block
- Keep the verdict footer to 4 labeled lines or fewer
- Keep each label to one sentence; use comma-separated command names instead of log excerpts
- Omit scope and personas from the footer unless the user asked for them or the scope would be ambiguous without one short `reviewed:` line
- If there are no findings, say `findings: none` and keep the rest equally compact
Harness-specific presentation rules:
- Prefer the strongest structured finding format available: Codex/OpenAI native `P0`/`P1`/`P2`/`P3` cards, or a compact table in Claude/Anthropic harnesses
- If no richer primitive exists, use a short severity-ordered findings list with file/line, issue, impact, and evidence
- Never hide actionable findings inside the footer or a long prose recap
Example:

```
verdict: needs review
finding: high — src/auth/session.ts:42 fallback returns an anonymous session when token parsing fails
evidence: pnpm test src/auth/session.test.ts
unverified areas: runtime behavior for malformed OAuth callbacks
next: implementation
```
References
- references/reviewing.md — reviewer persona workflow, evidence expectations, and verdict synthesis
- references/reviewer-selection.md — which reviewer personas to run for which change shapes
- reviewers/ — specialized review lenses