# review (SKILL.md)

Run all four reviewers in parallel on the recent changes.

## Step 1: Parallel Reviews

Spawn ALL agents simultaneously using the Task tool in a single message:

  1. uncle-bob-reviewer: SOLID scan, TDD interrogation, clean code inspection
  2. cupid-reviewer: CUPID properties assessment (Composable, Unix, Predictable, Idiomatic, Domain-based)
  3. test-reviewer: Test pyramid placement, spec validation, naming conventions, flakiness vectors
  4. pii-reviewer: PII exposure, hardcoded secrets, sensitive data in tests/logs

Focus area: $ARGUMENTS
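
A sketch of what the single message with four parallel Task calls might look like. The `subagent_type` and `prompt` parameter names are assumptions about the Task tool's interface, and the prompts are illustrative, not prescribed:

```json
[
  {"tool": "Task", "subagent_type": "uncle-bob-reviewer",
   "prompt": "Review the recent changes for SOLID violations, TDD gaps, and clean code issues. Focus: $ARGUMENTS"},
  {"tool": "Task", "subagent_type": "cupid-reviewer",
   "prompt": "Assess the recent changes against the CUPID properties. Focus: $ARGUMENTS"},
  {"tool": "Task", "subagent_type": "test-reviewer",
   "prompt": "Check test pyramid placement, spec validation, naming conventions, and flakiness vectors. Focus: $ARGUMENTS"},
  {"tool": "Task", "subagent_type": "pii-reviewer",
   "prompt": "Scan for PII exposure, hardcoded secrets, and sensitive data in tests/logs. Focus: $ARGUMENTS"}
]
```

All four calls go in one message so they run concurrently rather than one after another.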

## Step 2: Synthesize Results

After all four reviewers complete, synthesize their findings using this template:

## Review Summary

### Uncle Bob (SOLID/Clean Code)
[Key findings]

### Dan North (CUPID)
[Key findings]

### Test Architect (Pyramid/Quality)
[Key findings on test placement, naming, and quality]

### Security (PII/Secrets)
[Key findings on sensitive data exposure]

### Conflicts Requiring Decision
[Any tensions between reviewers—these need user input]

### Agreed Improvements
[Recommendations all reviewers support]

## Step 3: Surface Conflicts

If reviewers disagree on approach (e.g., "extract this class" vs "keep it unified", or "this is E2E" vs "this is integration"):

  • Clearly state all positions
  • Explain the tradeoff
  • Mark as NEEDS USER DECISION for the orchestrator to surface

Do not resolve design tradeoffs yourself—that's a human judgment call.
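
A sketch of what a surfaced conflict entry could look like in the summary. The class names and positions here are hypothetical, for illustration only:

```markdown
### Conflicts Requiring Decision

**NEEDS USER DECISION: extract `PaymentHandler` from `OrderProcessor`?**
- Uncle Bob: split payment logic into its own class (Single Responsibility).
- CUPID: keep it unified; the class reads as one coherent domain concept.
- Tradeoff: testability and reuse vs. locality and simpler navigation.
```

Each entry states every reviewer's position and the tradeoff, then stops; the decision itself is left to the user.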
