skill-audit

SKILL.md

Audit the skill at $ARGUMENTS.

Steps

  1. If $ARGUMENTS is empty or no SKILL.md is found, report an error.
  2. Run STATIC analysis: python "$SKILL_DIR/scripts/analyze_skill.py" "$ARGUMENTS"
  3. Read $SKILL_DIR/references/CHECKLIST.md and $SKILL_DIR/references/PATTERNS.md.
  4. Cross-reference JSON with CHECKLIST and PATTERNS.
  5. If sibling skills exist, run: python "$SKILL_DIR/scripts/detect_overlap.py" "<parent>" --target "<name>"
  6. DYNAMIC evaluation (optional): If user agrees, generate evals.json (schema: $SKILL_DIR/assets/schemas.md) and run: python "$SKILL_DIR/scripts/run_loop.py" --eval-set <path> --skill-path "$ARGUMENTS" --model <current_model> --max-iterations 3 --report auto
  7. Present: Critical → Recommended → Optional, each with before/after fix.
  8. Output optimized SKILL.md resolving Critical and Recommended issues, applying the best description if dynamically optimized.
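The guard in step 1 can be sketched as a small shell function; `check_target` and its messages are illustrative, not part of the skill's own scripts.

```shell
# Hypothetical guard for step 1: fail fast when the argument is empty
# or points at a directory that has no SKILL.md.
check_target() {
  target="$1"
  if [ -z "$target" ]; then
    echo "error: no skill path given"
    return 1
  fi
  if [ ! -f "$target/SKILL.md" ]; then
    echo "error: no SKILL.md found in '$target'"
    return 1
  fi
  echo "ok: auditing $target/SKILL.md"
}
```

On success the audit continues with the static analysis in step 2.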

Output

  • Issues grouped by severity.
  • Quality score (format / completeness / writing, out of 24).
  • Token budget table (Before / After / Δ).
  • Overlap report (if any).
  • Optimized SKILL.md.
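The token budget table can be rendered as plain columns; the counts below are placeholders, not real measurements.

```shell
# Hypothetical token budget table; 412 and 287 are placeholder counts.
before=412
after=287
printf '%-8s %-8s %-8s\n' "Before" "After" "Δ"
printf '%-8s %-8s %-8s\n' "$before" "$after" "$((before - after))"
```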

Rules

  • Official frontmatter fields only.
  • Body < 300 tokens, imperative voice, no educational content.
  • Preserve intent. Move reference content to references/.
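The "< 300 tokens" rule can be approximated without a real tokenizer; this sketch counts whitespace-separated words in the body after the frontmatter, which undercounts actual tokens. `body_word_count` is an illustrative helper, not part of the skill.

```shell
# Rough check for the body-size rule, approximating tokens as words.
# Assumes the file starts with a YAML frontmatter block delimited by
# "---" lines; any later "---" (e.g. a horizontal rule) stops counting.
body_word_count() {
  awk 'f==2 {print} /^---$/ {f++}' "$1" | wc -w | tr -d ' '
}
```

A word count comfortably under 300 suggests the token budget is met; borderline results need a real tokenizer.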
Weekly Installs: 1
GitHub Stars: 13
First Seen: Mar 15, 2026