# review
Run all four reviewers in parallel on the recent changes.
## Step 1: Parallel Reviews
Spawn ALL agents simultaneously using the Task tool in a single message:
- uncle-bob-reviewer: SOLID scan, TDD interrogation, clean code inspection
- cupid-reviewer: CUPID properties assessment (Composable, Unix philosophy, Predictable, Idiomatic, Domain-based)
- test-reviewer: Test pyramid placement, spec validation, naming conventions, flakiness vectors
- pii-reviewer: PII exposure, hardcoded secrets, sensitive data in tests/logs
Focus area: $ARGUMENTS
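As a concrete sketch only (assuming a Task tool that accepts a subagent type and a prompt, as in Claude Code; parameter names may differ in other runtimes), a single message could carry all four calls in parallel:

```
Task(subagent_type="uncle-bob-reviewer", prompt="Review the recent changes for SOLID violations, TDD gaps, and clean-code issues. Focus area: $ARGUMENTS")
Task(subagent_type="cupid-reviewer",     prompt="Assess the recent changes against the CUPID properties. Focus area: $ARGUMENTS")
Task(subagent_type="test-reviewer",      prompt="Review test pyramid placement, spec validity, naming, and flakiness vectors. Focus area: $ARGUMENTS")
Task(subagent_type="pii-reviewer",       prompt="Scan for PII exposure, hardcoded secrets, and sensitive data in tests and logs. Focus area: $ARGUMENTS")
```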
## Step 2: Synthesize Results
Once all four reviews complete, synthesize their findings:
## Review Summary
### Uncle Bob (SOLID/Clean Code)
[Key findings]
### Dan North (CUPID)
[Key findings]
### Test Architect (Pyramid/Quality)
[Key findings on test placement, naming, and quality]
### Security (PII/Secrets)
[Key findings on sensitive data exposure]
### Conflicts Requiring Decision
[Any tensions between reviewers—these need user input]
### Agreed Improvements
[Recommendations all reviewers support]
## Step 3: Surface Conflicts
If reviewers disagree on approach (e.g., "extract this class" vs "keep it unified", or "this is E2E" vs "this is integration"):
- Clearly state all positions
- Explain the tradeoff
- Mark as NEEDS USER DECISION for the orchestrator to surface
Do not resolve design tradeoffs yourself—that's a human judgment call.
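For example (the class name and reviewer positions here are purely illustrative), a surfaced conflict might read:

```
NEEDS USER DECISION: uncle-bob-reviewer recommends splitting `PaymentProcessor` into separate validation and settlement classes (Single Responsibility); cupid-reviewer prefers keeping it unified so the domain concept stays in one place (Domain-based). Tradeoff: smaller, independently testable units vs. fewer indirections and a single domain-aligned module.
```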