skill-learner
Skill Learner
Turn user corrections into persistent knowledge that survives across sessions.
NEVER
- NEVER save a vague correction ("do it differently") without specifying the exact alternative — vague corrections create false confidence and are worse than no correction
- NEVER create duplicates — check INDEX.md first; merge if same skill+issue exists
- NEVER save one-time preferences as universal rules ("I wanted blue" ≠ "always use blue") — ask about scope when intent is ambiguous
- NEVER exceed 50 active corrections — consumption degrades; archive minor corrections older than 90 days to archive/
- NEVER write corrections in a language different from the user's — nuance dies in translation
- NEVER skip "When to apply" — a scopeless correction gets over-applied to contexts where it causes new problems
- NEVER save corrections that bypass safety ("don't validate input", "skip auth checks")
- NEVER assume old corrections are still valid — skill updates silently invalidate them; verify before applying corrections older than 90 days
- NEVER save without passing the cold-reader test: "Can a different agent in a different session act on this without any conversation context?" If no, rewrite before saving
The Correction Paradox
More corrections ≠ better behavior. Over-correcting creates rigidity — an agent drowning in 50 corrections becomes cautious and slow, second-guessing every decision. The goal is not to capture every complaint, but to capture the corrections that will prevent the most damage across the most future sessions.
Before saving, ask yourself:
- Frequency: Will this situation come up again? (One-off = don't save)
- Blast radius: If uncorrected, how bad is the impact? (Minor polish = maybe don't save)
- Generalizability: Does this apply beyond this specific conversation? (Too narrow = don't save)
If all three are low, tell the user you've noted it but saving a correction would add noise. They can override you.
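The three triage questions above can be sketched as a small helper. This is an illustrative sketch only — the function and parameter names (`should_save`, `frequency`, `blast_radius`, `generalizability`) are not part of the skill:

```python
def should_save(frequency: bool, blast_radius: bool, generalizability: bool) -> bool:
    """Decide whether a correction is worth persisting.

    Each argument is True when the corresponding triage signal is high:
    - frequency: the situation will come up again
    - blast_radius: leaving it uncorrected causes real damage
    - generalizability: it applies beyond this specific conversation
    """
    # If all three are low, saving would only add noise — tell the
    # user it was noted, and let them override.
    return frequency or blast_radius or generalizability
```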
Quick Classification
Classify in the first 5 seconds — this determines the workflow path:
| Signal | Type | Path |
|---|---|---|
| Clear explanation ("X did Y, should do Z") | Quick fix | → Step 2 (dedup) → Step 3 (save) |
| Vague complaint ("that's wrong") | Investigation | → Step 1 (detect) → full workflow |
| Same skill corrected 3+ times in INDEX.md | Skill defect | → Steps 1-3 → proactively offer proposal |
| "Show/list/delete corrections" | Management | → Management Commands section |
Storage
~/.claude/skill-corrections/
├── INDEX.md # One-line entries, master list
├── ACTIVE_CORRECTIONS.md # Max 50 lines, consumed by other skills
├── skills/<name>/ # Per-skill corrections
├── general/ # Non-skill Claude behavior
├── proposals/ # Author improvement proposals
└── archive/ # Expired or invalidated
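The layout above could be bootstrapped with a few lines of Python. A minimal sketch — the function name `init_storage` is illustrative, and in practice `root` would be `~/.claude/skill-corrections/`:

```python
from pathlib import Path

def init_storage(root: Path) -> None:
    """Create the skill-corrections directory layout if it is missing."""
    # Per-skill, general, proposal, and archive subdirectories.
    for sub in ("skills", "general", "proposals", "archive"):
        (root / sub).mkdir(parents=True, exist_ok=True)
    # The two index files start empty and are maintained by the workflow.
    for name in ("INDEX.md", "ACTIVE_CORRECTIONS.md"):
        (root / name).touch(exist_ok=True)
```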
Workflow
Step 1: Detect (skip for Quick Fix)
Identify which skill failed from the conversation context:
- Obvious (user reacting to last output): Confirm, don't ask — "I can see the problem is with X, correct?"
- Ambiguous: Ask — "Which skill or behavior do you want to correct?"
Quote the problematic output so the user confirms you're targeting the right thing.
Step 2: Check Duplicates
Read INDEX.md. If same skill+issue exists, update the existing correction:
append new context to "What went wrong", refine the rule, bump the date.
Step 3: Save
MANDATORY — READ: Load references/correction-patterns.md
for the exact file template, severity decision tree, and scope calibration examples.
Do NOT save without reading the reference first.
After creating the correction file:
- Update INDEX.md with a one-line entry: `- [correction-NNN](path) — <summary> (<skill>, <date>)`
- Regenerate ACTIVE_CORRECTIONS.md (format in the reference file, max 50 lines)
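Regenerating ACTIVE_CORRECTIONS.md with the 50-line cap can be sketched as follows. This assumes INDEX.md is append-only (newest entries last) so the tail holds the most recent corrections; the function name is illustrative:

```python
def regenerate_active(index_md: str, max_lines: int = 50) -> str:
    """Rebuild ACTIVE_CORRECTIONS.md content from INDEX.md.

    Keeps at most max_lines entries, preferring the most recent ones
    (INDEX.md is append-only, so the newest lines come last).
    """
    entries = [line for line in index_md.splitlines() if line.startswith("- ")]
    return "\n".join(entries[-max_lines:])
```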
Step 4: Verify (non-negotiable)
Re-read what you wrote. Apply the cold-reader test: "If a different agent reads this cold in a different session, will it know exactly what to do and when to do it?"
If it fails, rewrite. A correction that fails this test actively harms future sessions because it creates the illusion of knowledge without the substance.
Step 5: Activate Consumption (first time only)
Corrections are dead files unless future sessions read them. On the FIRST correction ever saved, ask the user:
"For corrections to work across sessions, I need to add a line to your CLAUDE.md. Should I add it?"
If yes, append to the user's global ~/.claude/CLAUDE.md:
## Skill Corrections
Before executing any skill, check ~/.claude/skill-corrections/ACTIVE_CORRECTIONS.md for relevant corrections and apply them.
Check if the line already exists before offering. Only do this once — ever.
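The "only once — ever" requirement is an idempotence check before appending. A minimal sketch, assuming the heading itself works as the existence marker (the function name `ensure_consumption_hook` is illustrative):

```python
from pathlib import Path

MARKER = "## Skill Corrections"
SNIPPET = (
    "\n## Skill Corrections\n"
    "Before executing any skill, check ~/.claude/skill-corrections/"
    "ACTIVE_CORRECTIONS.md for relevant corrections and apply them.\n"
)

def ensure_consumption_hook(claude_md: Path) -> bool:
    """Append the consumption snippet to CLAUDE.md unless it already exists.

    Returns True if the file was modified, False if the hook was present.
    """
    text = claude_md.read_text() if claude_md.exists() else ""
    if MARKER in text:
        return False
    claude_md.write_text(text + SNIPPET)
    return True
```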
Step 6: Confirm + Offer Proposal
Tell the user the correction is saved. Then, only for skill corrections (not "general"):
"Do you want me to prepare an improvement proposal for the skill's author?"
Step 7: Create Proposal (if yes)
MANDATORY — READ: Load references/correction-patterns.md
for the proposal template, diff format, and repo detection instructions.
Do NOT write a proposal without reading the reference first.
Save to proposals/<skill-name>-proposal-NNN.md. Tell the user the path and suggest
submitting as issue/PR to the skill's repo.
Correction Decay
When encountering a correction older than 90 days:
MANDATORY — READ: Load references/correction-patterns.md
§ Correction Decay Procedure for the full archival process.
Quick version: check if the skill was updated since the correction date. If the issue was fixed in the skill itself, archive the correction and notify the user.
Conflict Resolution
When two corrections for the same skill contradict each other:
MANDATORY — READ: Load references/correction-patterns.md
§ Conflict Resolution Matrix for the severity-based resolution rules.
Core principle: newer wins unless older is critical and newer is minor. Critical-vs-critical conflicts always require user judgment.
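The core principle can be sketched as a small decision function. This covers only the principle stated above, not the full matrix in the reference file; the severity labels and function name are illustrative:

```python
def resolve(older_severity: str, newer_severity: str) -> str:
    """Resolve two contradicting corrections for the same skill.

    Returns "newer", "older", or "ask" (user judgment required).
    Core principle: newer wins, unless the older correction is critical
    and the newer one is minor; critical-vs-critical always goes to
    the user.
    """
    if older_severity == "critical" and newer_severity == "critical":
        return "ask"
    if older_severity == "critical" and newer_severity == "minor":
        return "older"
    return "newer"
```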
Management Commands
| User says | Action |
|---|---|
| "Show corrections for X" | Read and display that skill's corrections |
| "Delete correction NNN" | Remove file + update INDEX.md + ACTIVE_CORRECTIONS.md |
| "List all corrections" | Show INDEX.md |
| "Clear corrections for X" | Move all to archive/, update indexes |
Language
Match the user's language. Write corrections in the language the user described the problem in — nuance is preserved in the original language.