vibe-review-code
Vibe Code Review Protocol
Sources of truth:
- Terminology and default action-verb semantics: `docs/standards/glossary.md`, `docs/standards/action-verbs.md`.
- Skill vs. Shell boundaries: `docs/standards/skill-standard.md`, `docs/standards/command-standard.md`, `docs/standards/shell-capability-design.md`.
- Trigger timing and routing to adjacent skills: `docs/standards/skill-trigger-standard.md`.
- Delivery flow / PR / worktree semantics: `docs/standards/git-workflow-standard.md`, `docs/standards/worktree-lifecycle-standard.md`.
Core responsibility: code quality review (deep analysis before and after PR submission)
Use cases:
- Pre-PR: run deep static analysis before `vibe flow pr`
- Post-PR: fix code based on feedback from `vibe flow review`
Semantic boundary:
- `vibe-review-code` covers source code diffs, implementation risk, call-site impact, test coverage, and re-checking review feedback.
- Engage only when the user wants the code implementation itself reviewed, or is fixing code in response to review feedback.
- Reviews of documentation, standards, changelogs, and concept drift belong to `vibe-review-docs`.
When invoked as a code reviewer, you are a Senior Staff Engineer tasked with guarding the project against entropy, dead code, and standard violations.
0. Token Optimization Strategy (Recommended)
This skill consumes a lot of tokens (it reads many code files). Strongly consider one of the following strategies:
Option A: Use a subagent (recommended)
# Run the review in a standalone agent to avoid polluting the main session
# The AI will launch the subagent automatically via the Agent tool
Advantages:
- Isolated execution environment
- No token cost in the main session
- Other tasks can run in parallel
Option B: Local review with Codex (fastest)
# Local code review via codex (when available)
vibe flow review --local
Advantages:
- Zero token consumption (local LLM)
- Fastest execution
- Deep static analysis
Fallback: if codex is unavailable, automatically fall back to copilot (if configured).
Option C: Traditional AI review
Run the review directly in the main session (not recommended; consumes a lot of tokens).
1. 与 vibe-test-runner 的关系(互补)
vibe-test-runner:偏执行验证(Serena + Lint + Tests + Review Gate),通常在代码改完后自动跑。vibe-review-code:偏人工审查结论,适合 PR 前人工把关、PR 后针对 review comment 复核。- 推荐顺序:先让
vibe-test-runner跑出基础质量结果,再用本 skill 输出最终审查意见。
Trigger timing
- You are about to open a PR and want a strict review pass first
- You received review comments and need to confirm the fix does not introduce a regression
- You need a structured review verdict (Blocking/Major/Minor/Nit)
- The user says "review this code / this change / this implementation" and the target is source changes
1. Context Gathering (Align Truth)
- Identify Intent: Run `vibe flow review` (Physical Tier 1) to determine the current state of the PR and project health.
- Fetch Diff:
  - If a PR exists (opened by `flow review` or confirmed): use `gh pr diff` to fetch the source of truth for changes.
  - If local only: use `git diff` and `git diff --cached` for uncommitted changes; use `git diff main...HEAD` for committed branch diffs.
- Review Context: Cross-reference with the Task README and the original goal from `.agent/context/task.md`.
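The diff-fetching priority above (PR first, then uncommitted local changes, then the committed branch diff) can be sketched as a small helper. This is a hedged sketch: `review_diff` is a hypothetical name, and it assumes `gh` and a `main` base branch as described above.

```shell
# Hypothetical helper: pick the review diff source in the priority
# order described above (PR > uncommitted local > committed branch).
review_diff() {
  if command -v gh >/dev/null 2>&1 && gh pr view >/dev/null 2>&1; then
    gh pr diff              # a PR exists: its diff is the source of truth
  elif ! git diff --quiet || ! git diff --cached --quiet; then
    git diff                # uncommitted working-tree changes
    git diff --cached       # uncommitted staged changes
  else
    git diff main...HEAD    # committed branch diff against main
  fi
}
```

The `command -v gh` guard keeps the helper usable in environments where the GitHub CLI is not installed.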
2. Serena Usage Steps (Pre-Review)
Before deciding severity on function-level changes, run Serena impact analysis first.
Startup:
- Prefer on-demand startup: `uvx --from git+https://github.com/oraios/serena@v0.1.4 serena start-mcp-server`
- Preconditions: `uv`/`uvx` available and the project has `.serena/project.yml`
- Evidence command: `bash scripts/serena_gate.sh --base main...HEAD`
- Required artifact: `.agent/reports/serena-impact.json`
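The preconditions listed above can be probed before attempting startup. This is a minimal sketch; `serena_ready` is a hypothetical name, not part of the Serena tooling.

```shell
# Hypothetical probe: Serena can start only if uvx is installed and the
# project is indexed (.serena/project.yml exists in the repo root).
serena_ready() {
  command -v uvx >/dev/null 2>&1 && [ -f .serena/project.yml ]
}

# Usage sketch: gate the evidence command on the probe.
if serena_ready; then
  echo "serena: preconditions met"
else
  echo "serena: unavailable, falling back to git diff + grep"
fi
```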
Required checks:
- For each changed function, run `find_referencing_symbols("<function_name>")`.
- For each removed function, verify the caller count is `0`; otherwise mark it as `Blocking`.
- For each signature change, verify all callers are updated; otherwise mark it as `Major`.
If Serena is unavailable:
- Record the blocking reason (tool/network/config).
- Continue the review with `git diff` + targeted grep as a fallback.
- Add one `Major` finding: "AST impact analysis not completed".
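The grep fallback mentioned above can approximate a caller count when AST analysis is down. This is a sketch: `count_callers` is a hypothetical helper, and the `bin`/`lib` layout is assumed from the LOC rules elsewhere in this document.

```shell
# Hypothetical fallback: approximate a function's caller count with grep
# when Serena is unavailable. Definition lines ("name()") are excluded
# so only call sites remain in the count.
count_callers() {
  local fn="$1"
  grep -rn --include='*.sh' -e "$fn" bin lib 2>/dev/null \
    | grep -v "${fn}()" \
    | wc -l
}
```

Per the rules above, a removed function with a nonzero count is a `Blocking` finding; remember this is a textual approximation, so it over-counts names that appear in comments or strings.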
3. Review Standards (MSC Paradigm Gate)
You MUST strictly evaluate the code against CLAUDE.md and DEVELOPER.md:
- LOC Hard Limits: Are new functions blowing up the line count? (Threshold: bin/ + lib/ <= 7000 LOC, max 300 lines per file).
- Zero Dead Code: Does every added shell function have a clear caller? If not, FLAG IT as a blocking issue.
- Safety & Robustness: Are Zsh/Bash parameters properly quoted? Are error cases handled gracefully?
- Testing: Does the branch include modifications or additions to `bats tests/` if a bug was fixed or a feature was added?
- Linting Check: Has the user passed `bash scripts/lint.sh`? Run it if unsure.
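The LOC hard limits in the checklist above can be checked mechanically. A hedged sketch, assuming bash: `check_loc_budget` is a hypothetical name, and the thresholds are the ones stated above.

```shell
# Hypothetical gate: enforce the MSC LOC limits from the checklist above
# (bin/ + lib/ combined <= 7000 lines, <= 300 lines per file).
# Uses bash process substitution to keep the total in the current shell.
check_loc_budget() {
  local total=0 file lines
  while IFS= read -r file; do
    lines=$(wc -l < "$file")
    total=$((total + lines))
    if [ "$lines" -gt 300 ]; then
      echo "Blocking: $file has $lines lines (limit 300)"
    fi
  done < <(find bin lib -type f -name '*.sh' 2>/dev/null)
  if [ "$total" -gt 7000 ]; then
    echo "Blocking: total LOC $total exceeds 7000"
  fi
}
```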
3.1 Document Governance Check
When the change touches documentation, you MUST also review it against:
- `SOUL.md`
- `docs/standards/glossary.md`
- `docs/standards/action-verbs.md`
- `docs/standards/doc-quality-standards.md`
Check these questions:
- Is the document acting within its role (entry file / standard file / reference file / rule file)?
- Does it redefine a concept that should only live in `glossary.md`?
- Does it use a high-frequency action verb in a way that conflicts with `action-verbs.md`?
- Is an entry document carrying too much detail that should move to `docs/standards/` or `.agent/rules/`?
- If the file is historical or superseded, is that status made explicit?
If any answer fails, report it as a documentation governance finding even if the prose itself is clear.
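The "redefined concept" question above can be partially mechanized. A sketch under loose assumptions: `find_redefinitions` is hypothetical, and it assumes glossary terms appear as `##` headings in markdown files.

```shell
# Hypothetical smell detector: list markdown files under docs/ that carry
# a "## <Term>" heading even though the term should live only in
# docs/standards/glossary.md.
find_redefinitions() {
  local term="$1"
  grep -rl --include='*.md' -e "^## ${term}" docs 2>/dev/null \
    | grep -v 'glossary\.md'
}
```

Any file it lists is a candidate governance finding, not an automatic failure; a human still judges whether the heading actually redefines the term.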
4. Review Process
- Understand Intent: Compare the implementation against the `docs/prds/` or plan file.
- Line-by-Line Analysis: Point out the exact files and lines where issues exist.
- Actionability: Never just say "it's bad"; always provide the code snippet that fixes it.
5. Output: The Code Review Report
Construct a structured report using Markdown with strict severity buckets:
- `Blocking`
- `Major`
- `Minor`
- `Nit`
Each finding MUST include:
- file/function
- issue
- failure mode
- minimal fix
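A skeleton consistent with the buckets and fields above might look like the following. The exact layout is an assumption for illustration, not a mandated format, and `print_report_skeleton` is a hypothetical name.

```shell
# Hypothetical emitter for the report skeleton: four severity buckets,
# each finding carrying file/function, issue, failure mode, minimal fix.
print_report_skeleton() {
  cat <<'EOF'
# Code Review Report

## Blocking
- file/function:
  issue:
  failure mode:
  minimal fix:

## Major

## Minor

## Nit
EOF
}
```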