# The Refinery


## Startup Protocol

Load the context needed for refinement:

  1. Analyst Review — Read the latest system/reviews/{slug}-review.md. This contains scores, the verdict (should be REVISE), and specific refinery targets. These targets define WHAT needs improving and TO WHAT LEVEL.

  2. The Artifact — Read the artifact being refined. This could be:

    • A blueprint from system/blueprints/{slug}-blueprint.md
    • Source code files (paths listed in the review)
    • Any other work product
  3. Prior Refinery History — Read system/refinery-log.md if it exists. Check for prior iterations on this same artifact.

  4. System State — Read system/state.md for loop context.

Glob: system/reviews/*.md
Read: system/state.md
Read: system/refinery-log.md

If no analyst review exists, ask the user: "What should I refine, and what are the quality criteria? I can work with custom scoring dimensions or the standard four lenses (architecture, code quality, reliability, performance)."


## Scoring Dimensions

By default, use the analyst's four lenses. But if the user provides custom criteria, use those instead.

### Default Dimensions (from /analyst)

  1. Architecture — Structure, boundaries, data flow, simplicity
  2. Code Quality — Readability, DRY, error handling, edge cases
  3. Reliability — Failure modes, test coverage, recovery
  4. Performance — Efficiency, caching, bottlenecks, resource cleanup

### Custom Dimensions (when user specifies)

  • For writing: Hook strength, clarity, specificity, emotional resonance, actionability
  • For research: Source quality, reasoning depth, applicability, logical structure, honesty
  • For emails: Tone, brevity, clarity of ask, personalization, professional warmth


## The Convergence Loop

Run this as explicit numbered iterations. Maximum 5 iterations.
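The loop described in Steps A–E below can be sketched in Python. This is a minimal sketch, assuming `score`, `diagnose`, and `rewrite` are supplied as functions; it models only the 8/10 threshold rule (the full stopping rules are in Step E), and all names are illustrative.

```python
# Minimal sketch of the convergence loop. `score` maps an artifact to a
# dict of dimension -> number; `diagnose` and `rewrite` are assumed
# helpers. Only the >= 8/10 threshold stopping rule is modeled here.

def refine(artifact, score, diagnose, rewrite, max_iterations=5):
    history = []
    for _ in range(max_iterations):
        scores = score(artifact)             # Step A / Step D
        history.append(scores)
        if all(s >= 8 for s in scores.values()):
            break                            # Step E: converged
        fixes = diagnose(artifact, scores)   # Step B
        artifact = rewrite(artifact, fixes)  # Step C
    return artifact, history
```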

### Step A: Score

Score the current artifact on each dimension (1-10). Be honest. Don't inflate scores to avoid work.

Present scores in a table:

### Iteration {N} Scores
| Dimension | Score | Delta | Justification |
|-----------|-------|-------|---------------|
| {Dim 1} | X/10 | {+/-Y or --} | {Specific reason} |
| {Dim 2} | X/10 | {+/-Y or --} | {Specific reason} |
| {Dim 3} | X/10 | {+/-Y or --} | {Specific reason} |
| {Dim 4} | X/10 | {+/-Y or --} | {Specific reason} |

### Step B: Diagnose

For each dimension scoring below 8/10, diagnose the specific weakness.

Be concrete, not vague:

  • BAD: "Code quality could improve"
  • GOOD: "Lines 45-60 duplicate the validation logic from lines 12-25. Extract to a shared function."
  • BAD: "The architecture needs work"
  • GOOD: "The data fetching layer is coupled to the UI components. Separating them would allow independent testing."

For each diagnosis:

  • What's weak: Specific location and description
  • Why it scored low: Root cause, not symptom
  • Planned fix: Concrete change to make
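A concrete instance of the DRY diagnosis above, sketched with hypothetical names (`validate_required_fields`, `create_user`, and `update_user` are illustrative, not from any real artifact):

```python
# Hypothetical "planned fix" for a DRY diagnosis: two handlers that each
# re-implemented the same required-field checks now share one helper.

def validate_required_fields(record, required):
    """Shared validation extracted from two near-duplicate blocks."""
    missing = [field for field in required if field not in record]
    if missing:
        raise ValueError(f"missing required fields: {missing}")

def create_user(payload):
    validate_required_fields(payload, ["name", "email"])
    return {"action": "create", **payload}

def update_user(payload):
    validate_required_fields(payload, ["email"])
    return {"action": "update", **payload}
```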

### Step C: Rewrite

Apply the fixes. Produce the revised version.

  • For blueprints: Rewrite the affected sections
  • For code: Write the actual revised code (use Edit tool for targeted changes)
  • For other artifacts: Rewrite the affected portions

Show your work. For each change, briefly note what was changed and why:

Change 1: Extracted validation logic to shared function (quality: DRY fix)
Change 2: Added timeout handling for API calls (reliability: failure mode coverage)

### Step D: Re-Score

Score the revised artifact on all dimensions. Calculate the delta from the previous iteration.

### Step E: Convergence Check

Apply these stopping rules in order:

| Condition | Action |
|-----------|--------|
| All dimensions >= 8/10 | STOP — Converged. Quality threshold met. |
| Delta < 0.5 on ALL dimensions for 2 consecutive iterations | STOP — Diminishing returns. Current level is the practical ceiling. |
| Iteration count >= 5 | STOP — Iteration limit. Further improvement needs a different approach. |
| Otherwise | Continue to next iteration (go to Step A) |
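These stopping rules can be sketched as a pure check over the score history. This assumes each history entry is a dict of dimension → score, oldest first; the function name and thresholds restate the rules above.

```python
# Sketch of the convergence check. `history` is a list of score dicts
# (dimension -> number), oldest first. Rules are applied in order.

THRESHOLD = 8.0
MIN_DELTA = 0.5
MAX_ITERATIONS = 5

def convergence_status(history):
    """Return a stop reason string, or None to continue iterating."""
    current = history[-1]
    if all(score >= THRESHOLD for score in current.values()):
        return "converged"
    if len(history) >= 3:
        # Delta < 0.5 on every dimension for the last two iterations.
        plateaued = all(
            abs(history[-i][dim] - history[-i - 1][dim]) < MIN_DELTA
            for i in (1, 2)
            for dim in current
        )
        if plateaued:
            return "diminishing returns"
    if len(history) >= MAX_ITERATIONS:
        return "iteration limit"
    return None
```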

When stopping, declare the reason clearly:

  • "Converged at iteration {N}. All dimensions at 8/10 or above."
  • "Diminishing returns at iteration {N}. Improvement has plateaued. Current scores represent the practical ceiling without a fundamentally different approach."
  • "Iteration limit reached. Consider running /architect to redesign if higher quality is needed."

## Output Template

Append each session to system/refinery-log.md:

---

## Refinery Session: {slug} — YYYY-MM-DD
**Artifact**: {path to artifact}
**Source Review**: system/reviews/{slug}-review.md
**Starting Scores**: Arch {X}, Quality {X}, Reliability {X}, Performance {X}

### Iteration 1
| Dimension | Score | Delta |
|-----------|-------|-------|
| Architecture | X/10 | -- |
| Code Quality | X/10 | -- |
| Reliability | X/10 | -- |
| Performance | X/10 | -- |

**Diagnosis**: {what was wrong}
**Changes**: {what was fixed}

### Iteration 2
| Dimension | Score | Delta |
|-----------|-------|-------|
| Architecture | X/10 | +Y |
| Code Quality | X/10 | +Y |
| Reliability | X/10 | +Y |
| Performance | X/10 | +Y |

**Diagnosis**: {what was wrong}
**Changes**: {what was fixed}

### Convergence
- **Status**: Converged | Diminishing Returns | Iteration Limit
- **Final Scores**: Arch {X}, Quality {X}, Reliability {X}, Performance {X}
- **Total Iterations**: {N}
- **Total Changes**: {count}

Also write the refined artifact back to its original location (overwriting the previous version).

Then update system/state.md:

  • Set Last Step: refinery
  • Set Last Run: {current date}
  • Set Status: complete
  • Update the Refinery row in the Output Registry
  • Set Next Recommended Step: compounder (if converged) or analyst (if hit iteration limit and needs re-review)
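The exact layout of system/state.md is owned by the wider system; a hypothetical update after a converged run might look like:

```markdown
Last Step: refinery
Last Run: YYYY-MM-DD
Status: complete
Next Recommended Step: compounder   <!-- or analyst, if the iteration limit was hit -->
```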

## Scope Discipline

### What You Do

  • Score artifacts against defined criteria
  • Diagnose specific weaknesses
  • Apply targeted fixes
  • Track improvement across iterations
  • Detect diminishing returns

### What You Do Not Do

  • Add new features or functionality
  • Change the scope of the artifact
  • Introduce new requirements
  • Restructure beyond what was flagged

If you identify a problem that requires scope change (new feature, architectural redesign), flag it and stop: "This issue requires scope change, which is outside refinery territory. Recommend running /architect to redesign this aspect."


## After Completion

  • Confirm iterations were appended to system/refinery-log.md
  • Confirm the refined artifact was written back
  • Confirm system/state.md was updated
  • Report final scores and convergence status
  • Tell the user: "Refinement complete. Run /compounder when you're ready for a weekly review, or /analyst if you want a fresh review of the refined version."