Skills Evaluation and Improvement

Table of Contents

  1. Overview
  2. Quick Start
  3. Evaluation Workflow
  4. Evaluation and Optimization
  5. Resources

Overview

This framework audits Claude skills against quality standards to improve performance and reduce token consumption. Automated tools analyze skill structure, measure context usage, and identify specific technical improvements.

The skills-auditor provides structural analysis, the improvement-suggester ranks fixes by impact, and the compliance-checker verifies standards compliance. The tool-performance-analyzer and token-usage-tracker monitor runtime efficiency.

Quick Start

Basic Audit

Run a full audit of all skills or target a specific file to identify structural issues.

# Audit all skills
make audit-all

# Audit specific skill
make audit-skill TARGET=path/to/skill/SKILL.md
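
The specific checks live in the auditor script under scripts/; the snippet below is only a minimal sketch of the idea, assuming a hypothetical list of required sections and a skills/ directory layout rather than the framework's actual rules.

# structural_audit_sketch.py -- illustrative only; the real auditor's checks may differ.
from pathlib import Path

# Assumed required sections, for illustration; not the project's actual standard.
REQUIRED_SECTIONS = ["Overview", "Quick Start", "Resources"]

def audit_skill(skill_path: Path) -> list[str]:
    """Return a list of structural findings for one SKILL.md file."""
    text = skill_path.read_text(encoding="utf-8")
    findings = [f"missing section: {s}" for s in REQUIRED_SECTIONS if s not in text]
    if len(text) > 20_000:  # arbitrary size guard for illustration
        findings.append("file unusually large; consider moving detail into modules/")
    return findings

if __name__ == "__main__":
    for path in Path("skills").rglob("SKILL.md"):  # assumed directory layout
        for finding in audit_skill(path):
            print(f"{path}: {finding}")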

Analysis and Optimization

Use skill_analyzer.py for complexity checks and token_estimator.py to confirm the skill stays within its context budget.

make analyze-skill TARGET=path/to/skill/SKILL.md
make estimate-tokens TARGET=path/to/skill/SKILL.md
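
The estimator's actual method isn't documented here; as a rough sketch of the idea, assuming a simple characters-per-token heuristic and an illustrative budget:

# token_budget_sketch.py -- illustrative only; token_estimator.py may use a real
# tokenizer and a different budget.
from pathlib import Path

CHARS_PER_TOKEN = 4        # common rule-of-thumb approximation
CONTEXT_BUDGET = 5_000     # assumed per-skill budget, not a project constant

def estimate_tokens(skill_path: str) -> int:
    text = Path(skill_path).read_text(encoding="utf-8")
    return len(text) // CHARS_PER_TOKEN

if __name__ == "__main__":
    tokens = estimate_tokens("path/to/skill/SKILL.md")
    status = "within" if tokens <= CONTEXT_BUDGET else "over"
    print(f"~{tokens} tokens ({status} the {CONTEXT_BUDGET}-token budget)")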

Improvements

Generate a prioritized plan and verify standards compliance using improvement_suggester.py and compliance_checker.py.

make improve-skill TARGET=path/to/skill/SKILL.md
make check-compliance TARGET=path/to/skill/SKILL.md
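
The improvement plan is essentially a severity-ordered list of findings. A minimal sketch of that ranking, using hypothetical finding fields and severity labels rather than the suggester's real output schema:

# improvement_plan_sketch.py -- the finding shape and severity names are assumptions.
from dataclasses import dataclass

SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

@dataclass
class Finding:
    severity: str
    message: str

def build_plan(findings: list[Finding]) -> list[Finding]:
    """Order findings so critical items are addressed first."""
    return sorted(findings, key=lambda f: SEVERITY_ORDER[f.severity])

if __name__ == "__main__":
    findings = [
        Finding("medium", "description could name concrete trigger phrases"),
        Finding("critical", "referenced script is missing from scripts/"),
        Finding("low", "trailing whitespace in module files"),
    ]
    for item in build_plan(findings):
        print(f"[{item.severity}] {item.message}")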

Evaluation Workflow

Start with make audit-all to inventory skills and identify high-priority targets. Then, for each skill requiring attention:

  1. Run make analyze-skill to map its complexity.
  2. Generate an improvement plan with make improve-skill and apply the suggested fixes.
  3. Run make check-compliance to verify the skill meets project standards.
  4. Finish with make estimate-tokens to confirm the skill stays within its token budget.
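
If each step is just the corresponding make target, the per-skill sequence can also be scripted. A minimal sketch, assuming make and the targets above are available (in practice, fixes are applied by hand between improve-skill and check-compliance):

# workflow_sketch.py -- chains the documented make targets for one skill.
import subprocess
import sys

def evaluate(skill: str) -> None:
    # This only shows the command sequence; fixes are normally applied
    # manually after improve-skill before re-checking compliance.
    steps = ["analyze-skill", "improve-skill", "check-compliance", "estimate-tokens"]
    for target in steps:
        subprocess.run(["make", target, f"TARGET={skill}"], check=True)

if __name__ == "__main__":
    evaluate(sys.argv[1] if len(sys.argv) > 1 else "path/to/skill/SKILL.md")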

Evaluation and Optimization

Quality assessments use the skills-auditor and improvement-suggester to generate detailed reports. Performance analysis focuses on token efficiency through the token-usage-tracker and tool performance via tool-performance-analyzer. For standards compliance, the compliance-checker automates common fixes for structural issues.

Scoring and Prioritization

We evaluate skills across five dimensions: structure compliance, content quality, token efficiency, activation reliability, and tool integration. Scores above 90 represent production-ready skills, while scores below 50 indicate critical issues requiring immediate attention.

Improvements are prioritized by impact. Critical issues include security vulnerabilities or broken functionality. High-priority items cover structural flaws that hinder discoverability. Medium and low priorities focus on best practices and minor optimizations.
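
As an illustration of how a composite score could be derived from the five dimensions, the sketch below assumes equal weights; only the 90 and 50 thresholds come from this document, and the weighting is not the framework's actual formula.

# score_sketch.py -- illustrative composite scoring; equal weights are an assumption.
DIMENSIONS = [
    "structure_compliance", "content_quality", "token_efficiency",
    "activation_reliability", "tool_integration",
]

def composite_score(scores: dict[str, float]) -> float:
    """Average the per-dimension scores, each on a 0-100 scale."""
    return sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

def classify(score: float) -> str:
    if score > 90:
        return "production-ready"
    if score < 50:
        return "critical issues; needs immediate attention"
    return "needs improvement"

if __name__ == "__main__":
    example = dict.fromkeys(DIMENSIONS, 85.0)
    example["token_efficiency"] = 95.0
    total = composite_score(example)
    print(f"{total:.1f}: {classify(total)}")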

Resources

Shared Modules: Cross-Skill Patterns

Skill-Specific Modules

  • Trigger Isolation Analysis: See modules/trigger-isolation-analysis.md
  • Skill Authoring Best Practices: See modules/skill-authoring-best-practices.md
  • Authoring Checklist: See modules/authoring-checklist.md
  • Evaluation Workflows: See modules/evaluation-workflows.md
  • Quality Metrics: See modules/quality-metrics.md
  • Advanced Tool Use Analysis: See modules/advanced-tool-use-analysis.md
  • Evaluation Framework: See modules/evaluation-framework.md
  • Integration Patterns: See modules/integration.md
  • Troubleshooting: See modules/troubleshooting.md
  • Pressure Testing: See modules/pressure-testing.md

Tools and Automation

  • Tools: Executable analysis utilities in scripts/ directory.
  • Automation: Setup and validation scripts in scripts/automation/.