# Second Opinion

Run an independent code review without changing the repo.
## When to use it
- non-trivial code changes
- risky refactors
- security, performance, or concurrency-sensitive edits
- schema or API changes
- before opening or merging a PR
- when the user explicitly wants another model's view
## When to skip it
- docs-only or formatting-only changes, unless the user still wants it
- repos or diffs that must not be sent to external services
- empty diffs
If repository guidance or user instructions forbid sending code to third-party tools, stop and ask before proceeding.
## Infer the request first
Infer as much as possible from the user's message:
- tool: `codex`, `gemini`, or both
- scope: `uncommitted`, `branch diff`, `commit`, or `PR`
- focus: `general`, `security`, `performance`, `error handling`, `architecture`, or a custom concern
Ask one concise follow-up only if a missing detail blocks the run.
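The tool inference above can be sketched as a simple keyword match. This is only an illustration of the mapping, not the real inference, which reads the whole message in context; the fallback of running both tools when none is named is an assumption, not something this skill prescribes.

```shell
#!/bin/sh
# Hypothetical keyword-based sketch of inferring the "tool" field
# from a user message. Real inference is contextual, not a substring test.
infer_tool() {
  case "$1" in
    *codex*gemini*|*gemini*codex*|*both*) echo "both" ;;
    *codex*)  echo "codex" ;;
    *gemini*) echo "gemini" ;;
    *)        echo "both" ;;  # default when unspecified is an assumption
  esac
}

infer_tool "ask gemini to review my branch before I open the PR"
```

The same pattern extends to scope and focus, with a follow-up question only when a missing field actually blocks the run.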
## Read only what you need
- Read `references/workflow.md` for scope detection, diff sizing, review-brief construction, and synthesis rules.
- Read `references/codex.md` only if running Codex.
- Read `references/gemini.md` only if running Gemini.
## Core workflow
- Detect the scope and compute diff stats.
- Stop on empty diffs.
- Warn on large diffs and suggest narrowing scope before spending tokens.
- Build a short review brief that tells the reviewer:
- what changed
- what to focus on
- how to inspect the diff locally
- that the review is read-only
- what output format to return
- Run the selected tool or tools in parallel if independent.
- Present findings first, then agreement and disagreement across tools.
- Never auto-apply suggested fixes unless the user explicitly asks.
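The first steps above — compute diff stats, stop on an empty diff, warn on a large one — can be sketched in shell. The demo runs in a throwaway repo so it is self-contained; in real use the `git diff` calls run in the repo under review, and the 50-file threshold is an assumption to tune, not a rule from this skill.

```shell
#!/bin/sh
# Demo of diff sizing for the "uncommitted" scope; other scopes would
# pass a range such as "main...HEAD" or a commit SHA to git diff.
set -eu

repo=$(mktemp -d)
cd "$repo"
git init -q
printf 'old line\n' > file.txt
git add file.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m "track file"
printf 'new line\n' >> file.txt          # an uncommitted change

stats=$(git diff --shortstat)
if [ -z "$stats" ]; then
  echo "empty diff: stop before spending tokens"
  exit 0
fi

files=$(git diff --numstat | wc -l | tr -d ' ')
if [ "$files" -gt 50 ]; then             # threshold is an assumption
  echo "large diff ($files files): suggest narrowing the scope"
fi
echo "diff stats:$stats"
```

Only after these checks pass is it worth building the review brief and spending reviewer tokens.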
## Safety defaults
- Keep the review read-only.
- Do not commit, push, stage, or edit files as part of the second opinion run.
- Prefer tool-native review commands over manual diff pasting when the tool can inspect the repo directly.
- Prefer prompt files or stdin over fragile one-line shell quoting for long review briefs.
- Use explicit timeouts and capture output cleanly when automating external review CLIs.
- Clean up temporary prompt and output files after reading them.
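The last three defaults — stdin over one-line quoting, explicit timeouts, and temp-file cleanup — fit together as below. Here `cat` stands in for the real reviewer command, since the exact codex or gemini CLI invocation varies; the 300-second timeout and the brief's wording are assumptions for illustration.

```shell
#!/bin/sh
# Sketch of safely automating an external review CLI.
set -eu

brief=$(mktemp)
out=$(mktemp)
trap 'rm -f "$brief" "$out"' EXIT        # clean up temp files on exit

# A prompt file avoids fragile one-line shell quoting for long briefs.
cat > "$brief" <<'EOF'
Read-only review. Inspect the diff locally with git; do not edit files.
Focus: error handling. Return findings as a severity-ordered list.
EOF

# Bound the run with an explicit timeout and capture output cleanly.
# "cat" is a placeholder for the actual reviewer CLI reading stdin.
timeout 300 cat < "$brief" > "$out" 2>&1 || {
  echo "reviewer failed or timed out" >&2
  exit 1
}

wc -l < "$out"
```

The `trap ... EXIT` handler guarantees cleanup even when the reviewer fails or times out, which keeps stray prompt files out of the repo.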
## Review result format
Present results in this order:
- Findings by tool, highest severity first.
- An explicit `No findings` if a tool returns nothing substantive.
- A short synthesis:
- where the tools agree
- where they disagree
- what looks worth acting on first
Keep the synthesis separate from the raw reviewer output so the user can distinguish the outside opinion from your judgment.