ai-evals

Summary

A systematic evaluation framework for AI products, grounded in practitioner-driven methodology.

  • Guides users through understanding what "good" looks like, designing rubrics and test cases, and implementing scoring criteria aligned with actual user needs
  • Emphasizes manual review and error analysis as prerequisites to building meaningful evals, with structured workflows for clustering failure patterns
  • Flags common pitfalls including vague criteria, LLM-as-judge without validation, and Likert scales; recommends binary Pass/Fail decisions instead (a minimal sketch follows this list)
  • Positions evals as core product specifications rather than optional quality checks, essential for product builders and non-ML roles alike
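
To make the binary Pass/Fail recommendation concrete, here is a minimal sketch, assuming a rubric of independent yes/no checks in Python. Every name in it (Criterion, RUBRIC, grade, the example criteria, the 80-word limit) is a hypothetical illustration, not something defined by the skill:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Criterion:
    """One rubric item, graded as a binary Pass/Fail decision."""
    name: str
    check: Callable[[str], bool]

# Hypothetical rubric for a support-ticket summarizer.
RUBRIC = [
    Criterion("states_customer_issue", lambda out: "issue" in out.lower()),
    Criterion("under_length_limit", lambda out: len(out.split()) <= 80),
    Criterion("no_ai_filler", lambda out: "as an ai" not in out.lower()),
]

def grade(output: str) -> dict[str, bool]:
    """Apply every criterion independently; each one passes or fails."""
    return {c.name: c.check(output) for c in RUBRIC}

def passed(output: str) -> bool:
    """Overall verdict: Pass only if every criterion passes."""
    return all(grade(output).values())
```

Because each criterion is a yes/no decision, a failure names the exact requirement that was violated, which is what makes failures countable and clusterable during error analysis; a single 1-5 Likert score hides that information.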
SKILL.md

AI Evals

Help the user create systematic evaluations for AI products using insights from AI practitioners.

How to Help

When the user asks for help with AI evals:

  1. Understand what they're evaluating - Ask what AI feature or model they're testing and what "good" looks like
  2. Help design the eval approach - Suggest rubrics, test cases, and measurement methods (see the eval-loop sketch after this list)
  3. Guide implementation - Help them think through edge cases, scoring criteria, and iteration cycles
  4. Connect to product requirements - Ensure evals align with actual user needs, not just technical metrics
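
As a sketch of step 2, the loop below runs a small fixed set of test cases through the feature under test and reports a pass rate plus every failing record for manual review. It reuses the grade helper from the rubric sketch above; generate, TEST_CASES, and the case fields are hypothetical stand-ins for whatever the user is actually evaluating:

```python
# Hypothetical test cases; in practice these come from real user inputs
# and the failure clusters surfaced during error analysis.
TEST_CASES = [
    {"id": "tc-01", "prompt": "Summarize: customer cannot reset password."},
    {"id": "tc-02", "prompt": "Summarize: billing charged twice this month."},
]

def run_eval(generate) -> None:
    """Score every test case; print the pass rate and each failure."""
    failures = []
    for case in TEST_CASES:
        output = generate(case["prompt"])  # the AI feature under test
        results = grade(output)
        if not all(results.values()):
            # Keep the full record: manually reviewing failures is where
            # error analysis (and the next rubric revision) happens.
            failures.append({"id": case["id"], "output": output, "results": results})
    n_pass = len(TEST_CASES) - len(failures)
    print(f"pass rate: {n_pass / len(TEST_CASES):.0%} ({n_pass}/{len(TEST_CASES)})")
    for f in failures:
        failed = [name for name, ok in f["results"].items() if not ok]
        print(f"  FAIL {f['id']}: {', '.join(failed)}")
```

Each iteration cycle is then: run the eval, read the failing outputs, cluster the failure patterns, and tighten the rubric or the test set before touching the prompt or model.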

Core Principles

Evals are the new PRD

Brendan Foody: "If the model is the product, then the eval is the product requirement document." Evals define what success looks like in AI products—they're not optional quality checks, they're core specifications.

Evals are a core product skill

Installs: 1.2K · GitHub Stars: 879 · First Seen: Jan 29, 2026