evaluation-anchor-checker (SKILL.md)
Evaluation Anchor Checker (make numbers reviewer-safe)
Purpose: fix a reviewer-magnet failure mode in agent surveys:
- strong numeric/performance statements appear
- but the minimal evaluation context is missing
This skill treats numeric claims as contracts:
- if a number stays, the same sentence must contain enough protocol context to interpret it
- if that context is not in evidence, the claim must be downgraded (no guessing)
Inputs
Preferred (pre-merge, keeps anchoring intact):
- the affected `sections/*.md` files
Optional context (read-only; helps you avoid guessing):
- `outline/writer_context_packs.jsonl` (look for `evaluation_anchor_minimal`, `evaluation_protocol`, `anchor_facts`)
- `outline/evidence_drafts.jsonl` / `outline/anchor_sheet.jsonl`
- `citations/ref.bib`
Outputs
- Updated `sections/*.md` (or `output/DRAFT.md` if you are post-merge), with safer evaluation anchoring
- Optional completion marker: `output/eval_anchors_checked.refined.ok`
Read Order
Always read:
- `references/numeric_hygiene.md`
Machine-readable asset:
- `assets/numeric_hygiene.json`
The asset defines the keyword families and qualitative fallback templates. Keep the script deterministic and let the policy live in the asset/reference pair.
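To illustrate the "policy lives in data, script stays deterministic" split, here is a minimal sketch of a loader. The field names (`keyword_families`, `qualitative_fallbacks`) are hypothetical; the real `assets/numeric_hygiene.json` may use a different shape.

```python
import json

# Hypothetical shape for assets/numeric_hygiene.json. The script only reads
# this data; the keyword families and fallback phrasings are policy, not code.
ASSET = {
    "keyword_families": {
        "task": ["benchmark", "task", "suite"],
        "metric": ["accuracy", "pass@", "success rate", "exact match"],
        "constraint": ["budget", "tool access", "retries", "horizon"],
    },
    "qualitative_fallbacks": [
        "often improves results",
        "can help in some settings",
    ],
}

def load_asset(path=None):
    """Load the policy asset from disk; fall back to the built-in sketch."""
    if path is None:
        return ASSET
    with open(path) as f:
        return json.load(f)

policy = load_asset()
print(sorted(policy["keyword_families"]))  # ['constraint', 'metric', 'task']
```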
Role prompt: Reviewer-minded Editor (evaluation hygiene)
You are a reviewer-minded editor for evaluation claims in a technical survey.
Goal:
- make every numeric/performance claim interpretable and reviewer-safe
Hard constraints:
- do not invent numbers
- do not add/remove/move citation keys
- if protocol context is missing, weaken or remove the numeric claim
Minimum context to include when keeping a number:
- task / setting (what kind of task)
- metric (what is being measured)
- constraint (budget/cost/tool access/horizon/seed/logging) when relevant
Avoid:
- ambiguous model naming that looks hallucinated (e.g., “GPT-5”) unless the cited paper uses it verbatim
Workflow (explicit inputs)
- Use `outline/writer_context_packs.jsonl` to locate the subsection's allowed citations and any extracted `evaluation_protocol` / `anchor_facts`.
- Cross-check `outline/evidence_drafts.jsonl` and `outline/anchor_sheet.jsonl` for task/metric/constraint context before touching numbers.
- Validate every cited key against `citations/ref.bib` (do not introduce new keys).
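The key-validation step can be sketched as below, assuming pandoc-style `[@key]` citations and standard BibTeX `@type{key,` entries. The helper names and regexes are illustrative, not the skill's actual implementation.

```python
import re

def bib_keys(bibtex_text):
    """Collect entry keys from BibTeX source, e.g. '@article{smith2024,'."""
    return set(re.findall(r"@\w+\s*\{\s*([^,\s]+)\s*,", bibtex_text))

def cited_keys(markdown_text):
    """Collect pandoc-style citation keys like [@smith2024] or [@a; @b]."""
    return set(re.findall(r"@([A-Za-z0-9_:.+-]+)", markdown_text))

def unknown_citations(markdown_text, bibtex_text):
    """Keys cited in the section but absent from ref.bib (must be empty)."""
    return cited_keys(markdown_text) - bib_keys(bibtex_text)

# Hypothetical inputs for demonstration only.
bib = "@article{smith2024,\n  title={...}\n}\n@misc{somebench,\n}"
md = "Model X reaches ~75% accuracy on FooBench [@somebench; @ghost2020]."
print(sorted(unknown_citations(md, bib)))  # ['ghost2020']
```

A non-empty result means the edit introduced a key, which the hard constraints forbid; the fix is to revert to the original citation set, not to add a bib entry.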
What to enforce (the “minimum protocol trio”)
When a sentence contains digits, percentages, or multipliers (%, ×, plain numbers):
- Keep the number only if you can attach at least two of the following in the same sentence without guessing:
  - task family / benchmark name
  - metric definition
  - constraint (budget, tool access, cost model, retries, horizon)
If you cannot, downgrade:
- remove the number and rewrite as qualitative (“often”, “can”, “may”) with the same citation, or
- move the specificity into a verification target (“evaluations need to report …”) without adding new facts
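The "at least two of the trio" gate can be sketched deterministically. The keyword lists below are placeholders for the families the asset would supply; real detection in the skill presumably comes from `assets/numeric_hygiene.json`.

```python
import re

# Illustrative keyword families; in the skill these would come from the
# asset, not be hard-coded.
FAMILIES = {
    "task": ("benchmark", "task", "suite"),
    "metric": ("accuracy", "pass@", "success rate", "exact match"),
    "constraint": ("budget", "tool access", "retries", "horizon"),
}

# Digits cover "75" and "3x"; % and × cover bare percentage/multiplier marks.
NUMERIC = re.compile(r"\d|%|×")

def protocol_context(sentence):
    """Return which of the trio families the sentence mentions."""
    low = sentence.lower()
    return {fam for fam, kws in FAMILIES.items() if any(k in low for k in kws)}

def keep_number(sentence):
    """Keep a numeric claim only if at least two families are present."""
    if not NUMERIC.search(sentence):
        return True  # no number, nothing to police
    return len(protocol_context(sentence)) >= 2

print(keep_number("Model X achieves 75% exact performance [@SomeBench]."))   # False
print(keep_number("On the FooBench benchmark, Model X reaches 75% exact "
                  "match under a fixed tool-access budget [@SomeBench]."))   # True
```

A `False` result is the trigger for the downgrade path above: drop the number and keep the citation, or recast the specificity as a reporting requirement.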
Mini examples (paraphrase; do not copy)
Bad (underspecified):
Model X achieves 75% exact performance [@SomeBench].
Better (minimal context):
On <task/benchmark>, Model X reaches ~75% <metric>, under <constraint/budget/tool access> [@SomeBench].
Better (downgrade when context is missing):
Reported gains vary, but comparisons remain fragile when budgets and retry policies are not reported [@SomeBench].
Done checklist
- No numeric claim remains without minimal protocol context.
- No ambiguous model naming remains unless explicitly supported by citations.
- Citation keys are unchanged.
- If you removed/downgraded numbers, the paragraph still makes a defensible, evidence-bounded point.
Script
Quick Start:
    python .codex/skills/evaluation-anchor-checker/scripts/run.py --workspace workspaces/<ws>