# Skill: Retrospective & Self-Improvement
ultrathink - Take a deep breath. We're not here to write code. We're here to make a dent in the universe.
## v2.88 Key Changes (MODEL-AGNOSTIC)
- Model-agnostic: Uses the model configured in `~/.claude/settings.json` or via CLI/env vars
- No flags required: Works with the configured default model
- Flexible: Works with GLM-5, Claude, MiniMax, or any configured model
- Settings-driven: Model selection via `ANTHROPIC_DEFAULT_*_MODEL` env vars
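For orientation, the settings-driven selection might look like this in `~/.claude/settings.json`. This is a hypothetical sketch: the `env` block is a real settings key, but the exact variable name is an assumption filled in from the wildcard pattern above; check your Claude Code version for the supported names.

```json
{
  "env": {
    "ANTHROPIC_DEFAULT_SONNET_MODEL": "glm-5"
  }
}
```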
## The Vision
Every retrospective should make the system inevitable and better.
## Your Work, Step by Step
- Summarize outcomes: Task, complexity, iterations, models.
- Analyze effectiveness: Routing, clarification, and agents.
- Identify gaps: Missed checks or friction.
- Propose improvements: Concrete, minimal changes.
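The four steps above can be sketched as a small data structure plus a summary builder. This is illustrative Python, not part of the skill itself; all names here are mine.

```python
from dataclasses import dataclass, field

@dataclass
class Retrospective:
    """Illustrative container for the four retrospective steps."""
    task: str
    complexity_classified: str
    complexity_actual: str
    iterations_used: int
    iteration_limit: int
    models: list = field(default_factory=list)
    gaps: list = field(default_factory=list)          # missed checks or friction
    improvements: list = field(default_factory=list)  # concrete, minimal changes

    def summary(self) -> str:
        """Step 1: summarize outcomes in one line."""
        return (f"Task: {self.task} | "
                f"Complexity: {self.complexity_classified} -> {self.complexity_actual} | "
                f"Iterations: {self.iterations_used}/{self.iteration_limit} | "
                f"Models: {', '.join(self.models)}")

retro = Retrospective(
    task="Add login endpoint",
    complexity_classified="medium",
    complexity_actual="high",
    iterations_used=3,
    iteration_limit=5,
    models=["claude-sonnet"],
)
```

Steps 2-4 (analyze, identify gaps, propose improvements) would then populate `gaps` and `improvements` before the report is emitted.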
## Ultrathink Principles in Practice
- Think Different: Question the status quo.
- Obsess Over Details: Use evidence, not guesses.
- Plan Like Da Vinci: Structure feedback before writing.
- Craft, Don't Code: Keep recommendations actionable.
- Iterate Relentlessly: Apply learnings immediately.
- Simplify Ruthlessly: Focus on the few changes that matter.
## Purpose
Analyze completed tasks to improve the Ralph Wiggum system.
## When to Use
MANDATORY after every task completion, before declaring VERIFIED_DONE.
## Analysis Categories
### 1. Routing Effectiveness
- Was the complexity classification accurate?
- Did the chosen model perform well?
- Should routing thresholds change?
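One minimal way to answer the first and third questions together is to compare the classified complexity against what the task turned out to be. A hypothetical helper (the real skill would gather these values from task metadata):

```python
def routing_feedback(classified: str, actual: str) -> str:
    """Compare classified vs. actual complexity and suggest a threshold direction."""
    order = {"low": 0, "medium": 1, "high": 2}
    if classified == actual:
        return "routing accurate"
    if order[actual] > order[classified]:
        return "under-classified: consider lowering the escalation threshold"
    return "over-classified: consider raising the escalation threshold"

print(routing_feedback("medium", "high"))
```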
## Agent Teams Integration (v2.88)
### Optimal Scenario: Pure Agent Teams (Native)
This skill uses Pure Agent Teams with native coordination; no custom subagent specialization is needed.
### Why Scenario A for This Skill
- Retrospective is primarily analytical and sequential
- Read/Grep tools available to all native agents
- Analysis doesn't require specialized tool restrictions
- Native agent types sufficient for metric gathering
- Lower complexity, faster execution
### Configuration
- TeamCreate: Optional, for simple retrospective tasks
- Task: Use native agent types (no ralph-* needed)
- Hooks: TeammateIdle + TaskCompleted available if needed
- Simple: Minimal setup overhead
### Workflow Pattern
TeamCreate (optional)
→ Task(analyze completed work)
→ Native agent gathers metrics
→ Complete with improvement proposals
### When This Is Sufficient
- Single-task retrospective analysis
- Simple metric gathering workflows
- No specialized analysis needed
- Quick post-task reviews preferred
### 2. Clarification Quality
- Were the right questions asked?
- Did any missed clarifications cause rework?
- Should question templates be updated?
### 3. Agent Performance
- Which subagents were most useful?
- Any agents that didn't add value?
- New agent patterns needed?
### 4. Quality Gate Effectiveness
- Did gates catch real issues?
- Any false positives/negatives?
- Missing validations?
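False positives and negatives can be tallied per gate to make "did gates catch real issues?" quantitative. An illustrative scorer (the metric and names are assumptions, not prescribed by the skill):

```python
def gate_score(true_pos: int, false_pos: int, false_neg: int) -> dict:
    """Summarize whether a quality gate is catching real issues."""
    checked = true_pos + false_pos
    precision = true_pos / checked if checked else 0.0
    return {
        "precision": round(precision, 2),  # of gate failures, how many were real
        "missed": false_neg,               # real issues the gate let through
    }
```

A gate with low precision produces noise; a gate with high `missed` needs new validations.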
### 5. Iteration Efficiency
- How many iterations were used?
- Could it have been done faster?
- Any wasted iterations?
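One crude efficiency signal the retrospective could compute from the three questions above (an assumed metric, for illustration only):

```python
def iteration_efficiency(used: int, limit: int, wasted: int = 0) -> float:
    """Fraction of the iteration budget spent on productive work."""
    if limit <= 0:
        raise ValueError("limit must be positive")
    productive = max(used - wasted, 0)
    return round(productive / limit, 2)

print(iteration_efficiency(used=3, limit=5, wasted=1))  # 2 productive of 5 budgeted
```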
## Output Format
## 📊 Task Retrospective
### Summary
- Task: [description]
- Complexity: [classified] → [actual]
- Iterations: [used] / [limit]
- Models: [list used]
### What Went Well
- [positive 1]
- [positive 2]
### Improvement Opportunities
1. **[Category]**: [description]
- Current: [what happens now]
- Proposed: [improvement]
- Impact: [low/medium/high]
- Risk: [low/medium/high]
### Proposed Changes
```json
{
  "type": "routing_adjustment|clarification_enhancement|agent_behavior|new_command|delegation_update|quality_gate",
  "file": "[path to modify]",
  "change": "[description]",
  "justification": "[why]"
}
```

## Improvement Types
| Type | Example |
|------|---------|
| routing_adjustment | Change complexity thresholds |
| clarification_enhancement | Add new question templates |
| agent_behavior | Modify agent instructions |
| new_command | Create new slash command |
| delegation_update | Change model assignments |
| quality_gate | Add/modify validations |
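A proposed-change payload can be sanity-checked against this table before it is applied. A sketch (field names follow the JSON template under "Proposed Changes"; the validator itself is hypothetical):

```python
ALLOWED_TYPES = {
    "routing_adjustment", "clarification_enhancement", "agent_behavior",
    "new_command", "delegation_update", "quality_gate",
}
REQUIRED_FIELDS = {"type", "file", "change", "justification"}

def validate_proposal(proposal: dict) -> list:
    """Return a list of problems; an empty list means the proposal is well-formed."""
    errors = []
    missing = REQUIRED_FIELDS - proposal.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    if proposal.get("type") not in ALLOWED_TYPES:
        errors.append(f"unknown type: {proposal.get('type')!r}")
    return errors

problems = validate_proposal({
    "type": "quality_gate",
    "file": "gates/lint.md",
    "change": "Add a lint pass before VERIFIED_DONE",
    "justification": "Two retros flagged style regressions slipping through",
})
```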