# create-compelling-prs
Your PR competes for attention. The reviewer is looking at many PRs — if yours isn't immediately convincing, it gets skipped or rejected. Evidence beats rhetoric. A single well-implemented PR that convinces in 2 minutes is worth more than five that require follow-up.
## PR Body Templates
Pick the template matching your change type.
### Bugfix

```markdown
## What broke
[One sentence: what failed and where]

## Root cause
[The underlying cause — missing guard, race condition, wrong assumption]

## Fix
[What changed and why this approach over alternatives]

## Before / After
| Before | After |
|--------|-------|
|        |       |

## Test output
[paste test run]

## How to verify
1. [Step to reproduce original bug — should now pass]
2. [Regression check]

Closes #ISSUE
```
### Feature

```markdown
## What this adds
[One sentence: the user-visible capability]

## Why
[Business or product motivation]

## Implementation
[2-3 sentences: what was added/changed, key design decisions]

## Demo
[screenshot or GIF]

## Test output
[paste test run]

## How to verify
1. [Golden path step]
2. [Edge case]

Closes #ISSUE
```
### Refactor

```markdown
## What changed
[What was moved, renamed, or restructured]

## Why
[The underlying problem that made this necessary]

## What stays the same
[Public API, behavior, outputs — nothing visible changed]

## Test output
[paste — proves no regressions]

Closes #ISSUE
```
### Deps

```markdown
## Update
[Package] vX.Y.Z → vA.B.C

## Why now
[Security advisory / feature needed / routine bump]

## Risk
[Low/Medium/High — breaking changes? Coverage of affected areas?]

## Test output
[paste — full suite green]

Closes #ISSUE
```
## Visual Evidence with S3
For any UI-impacting change, capture before/after screenshots and embed them.
If `AWS_BUCKET` and `AWS_ACCESS_KEY_ID` are set in the environment (preferred):
```shell
# Capture screenshots (Playwright preferred, manual fallback), then upload:
aws s3 cp before.png "s3://$AWS_BUCKET/evidence/${REPO}/${PR}/before.png" --acl public-read
aws s3 cp after.png "s3://$AWS_BUCKET/evidence/${REPO}/${PR}/after.png" --acl public-read

# Embed in the PR body:
# Before: https://${AWS_BUCKET}.s3.amazonaws.com/evidence/${REPO}/${PR}/before.png
# After:  https://${AWS_BUCKET}.s3.amazonaws.com/evidence/${REPO}/${PR}/after.png
```
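The embed URL can be derived mechanically from the upload key rather than retyped. A minimal sketch, where the bucket name, repo, and PR number are made-up placeholder values:

```shell
# Sketch: build the public embed URL from the same variables used in the upload.
# AWS_BUCKET, REPO, and PR below are hypothetical placeholders.
AWS_BUCKET="my-evidence-bucket"
REPO="acme/web"
PR="123"
KEY="evidence/${REPO}/${PR}/before.png"
echo "https://${AWS_BUCKET}.s3.amazonaws.com/${KEY}"
```

Note that buckets outside us-east-1 may require the region-qualified form `https://<bucket>.s3.<region>.amazonaws.com/<key>`.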
Fallback — git evidence branch:
```shell
git checkout -b "evidence/${BRANCH}-screenshots"
cp *.png "specs/${ISSUE}/"
git add specs/ && git commit -m "evidence: before/after screenshots"
git push origin "evidence/${BRANCH}-screenshots"
# Embed as raw GitHub URLs in the PR body, then return to the work branch:
git checkout -
```
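For the fallback branch, the embeddable link follows GitHub's `raw.githubusercontent.com` pattern: owner/repo, then branch, then path. A sketch with placeholder owner, branch, and issue values (all assumptions):

```shell
# Sketch: raw GitHub URL for a file pushed to the evidence branch.
# OWNER_REPO, BRANCH, and ISSUE are hypothetical placeholders.
OWNER_REPO="acme/web"
BRANCH="fix-nav"
ISSUE="123"
echo "https://raw.githubusercontent.com/${OWNER_REPO}/evidence/${BRANCH}-screenshots/specs/${ISSUE}/before.png"
```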
Skip if: backend-only, CLI-only, config/infra, test-only, or capture exceeds 120s. Visual evidence is a bonus, never a gate.
## Self-Audit Checklist
Run this before opening or marking a PR ready for review:
- Complete? Does this finish every deliverable in the original task?
- Shippable? If merged as-is, would the task be done — no follow-up tickets created?
- No manual caveats? Zero "you'll need to X manually" instructions in the PR body.
- Tests pass? Ran them yourself right now — not trusting earlier cached output.
- Evidence present? Screenshots or test output embedded for every meaningful change.
- Issue linked? `Closes #N` in the body — or `Refs #N` if the issue has unchecked acceptance criteria (prevents premature auto-close on multi-phase work).
If the "No manual caveats?" check fails: close the PR and file a blocker report instead. A PR that punts work back to the reviewer is worse than no PR. Reroute around obstacles: if the UI path is blocked, use the API; if the API is missing, script it.
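The "no manual caveats" check can be partially automated with a grep over the drafted body before opening the PR. A rough sketch — the sample body and the regex are illustrative assumptions, not an exhaustive caveat list:

```shell
# Sketch: flag caveat language in a drafted PR body before opening it.
# The sample body and patterns below are made up for illustration.
body=$(mktemp)
cat > "$body" <<'EOF'
## Fix
Adds the missing null guard. You'll need to clear the cache manually.
EOF
if grep -qiE "manually|you'?ll need to|left as a follow-up" "$body"; then
  echo "caveat language found: fix before opening the PR"
fi
```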
## Lead Self-Assessment Loop
After a worker reports done, do not immediately accept. Press harder.
Iteration protocol:
1. Ask: "What would you improve? How can you go the extra mile?"
2. Require actions, not claims: "I'd add tests" → demand they write them now; "I'd verify it renders" → demand a screenshot.
3. When the improvement lands, ask again.
4. Stop after ~3 iterations if returns are marginal. After 10 iterations with persistent gaps, respawn the worker with stricter instructions.
Rules:
- Never accept rhetoric. "I'm confident this is solid" is not evidence — demand the evidence itself.
- Verify independently. Run the tests yourself. Open the PR URL. Load the live site.
- Track the diff between iterations. No file changes = worker is stalling → push harder.
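The "track the diff" rule can be checked mechanically by comparing the worker's git tree hash between iterations. A sketch using a throwaway demo repo — in real use you would run only the two `rev-parse` calls inside the worker's checkout; every path and identity here is made up:

```shell
# Sketch: detect a stalling worker by comparing git tree hashes across iterations.
# Uses a throwaway demo repo; in practice, run inside the worker's branch.
dir=$(mktemp -d)
git -C "$dir" init -q
echo "v1" > "$dir/app.txt"
git -C "$dir" add -A
git -C "$dir" -c user.email=lead@example.com -c user.name=lead commit -qm "iteration 1"
before=$(git -C "$dir" rev-parse "HEAD^{tree}")
# ...worker reports "done" again, but produced no new commits...
after=$(git -C "$dir" rev-parse "HEAD^{tree}")
if [ "$before" = "$after" ]; then
  echo "no file changes between iterations: worker is stalling"
fi
```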
Rotation questions (vary to avoid formulaic answers):
- "What would a senior engineer reject in code review?"
- "Run the full test suite now and paste the output."
- "Screenshot the affected page. Does it match the design system?"
- "What did you punt on? Re-read the task and list every deliverable."
- "If this gets rejected, what's the most likely reason? Fix it preemptively."