Claims Extractor (peer review)

Goal: turn a manuscript into an auditable list of claims that downstream skills can check.

Inputs

Required:

  • output/PAPER.md (or equivalent plain-text manuscript)

Optional:

  • DECISIONS.md (review scope or constraints)

Outputs

  • output/CLAIMS.md

Output format (recommended)

For each claim, include at minimum:

  • Claim: one sentence
  • Type: empirical | conceptual
  • Scope: what the claim applies to / what it does not apply to
  • Source: a locatable pointer into output/PAPER.md (section + page/figure/table + a short quote)
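The record above can be sketched as a small data structure. This is a minimal illustration, not a required implementation; the `Claim` class and its field names are assumptions chosen to mirror the four bullets.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str        # one-sentence claim
    claim_type: str  # "empirical" or "conceptual"
    scope: str       # what the claim does / does not cover
    source: str      # section + page/figure/table + a short quote

    def to_markdown(self) -> str:
        """Render the claim as a CLAIMS.md bullet entry."""
        return (
            f"  • Claim: {self.text}\n"
            f"  • Type: {self.claim_type}\n"
            f"  • Scope: {self.scope}\n"
            f"  • Source: {self.source}"
        )
```

Keeping the source pointer as a single string (rather than structured fields) keeps rough extractions usable; it can be split later if a downstream skill needs it.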

Workflow

  1. If DECISIONS.md exists, apply any review scope/format constraints.
  2. Read the manuscript (output/PAPER.md) end-to-end; at a minimum, cover the abstract, introduction, method, experiments, and limitations sections.
  3. Extract:
    • primary contributions (what is new)
    • key claims (what is asserted)
    • assumptions (what must be true for claims to hold)
  4. Normalize each item into one sentence.
  5. Attach a source pointer for every item.
  6. Split into two sections:
    • Empirical claims (must be backed by experiments/data)
    • Conceptual claims (must be backed by argument/definition)
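Steps 4–6 can be sketched as a rendering pass over the normalized items. The dict keys (`text`, `type`, `scope`, `source`) and section titles are assumptions matching the format described above, not a prescribed schema.

```python
def render_claims(claims: list[dict]) -> str:
    """Split normalized claims into the two required CLAIMS.md sections.

    Each claim dict is assumed to carry "text", "type" ("empirical" or
    "conceptual"), "scope", and "source" keys; anything else raises KeyError.
    """
    sections: dict[str, list[dict]] = {"empirical": [], "conceptual": []}
    for c in claims:
        sections[c["type"]].append(c)

    lines = []
    for key, title in [("empirical", "Empirical claims"),
                       ("conceptual", "Conceptual claims")]:
        lines.append(title)
        lines.append("")
        for c in sections[key]:
            lines.append(f"  • Claim: {c['text']}")
            lines.append(f"    Scope: {c['scope']}")
            lines.append(f"    Source: {c['source']}")
        lines.append("")
    return "\n".join(lines)
```

The output of this sketch would be written to output/CLAIMS.md as-is.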

Definition of Done

  • output/CLAIMS.md exists.
  • Every claim has a source pointer that can be located in output/PAPER.md.
  • Empirical vs conceptual claims are clearly separated.
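The second checklist item can be verified mechanically, assuming each source pointer embeds its short quote in double quotes (an assumption about formatting, not a requirement stated above):

```python
import re


def verify_sources(claims_md: str, paper_text: str) -> list[str]:
    """Return the quoted snippets from CLAIMS.md source lines that
    cannot be found verbatim in the manuscript text."""
    missing = []
    for line in claims_md.splitlines():
        stripped = line.lstrip()
        if stripped.startswith("• Source:") or stripped.startswith("Source:"):
            for quote in re.findall(r'"([^"]+)"', line):
                if quote not in paper_text:
                    missing.append(quote)
    return missing
```

An empty return value means every quoted pointer was located; non-empty means some claims need their pointers rechecked.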

Troubleshooting

Issue: the paper is only available as a PDF or an HTML page

Fix:

  • Convert/extract it into a plain-text output/PAPER.md first (even rough extraction is OK), then run claim extraction.
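For PDFs this usually means an external tool (pdftotext from poppler is a common option). For HTML, a rough extraction needs only the standard library; the sketch below is one minimal way to seed output/PAPER.md, not a polished converter.

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Rough HTML-to-text extraction: keeps visible text, drops
    script/style content. Good enough as a PAPER.md seed."""

    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.chunks.append(data.strip())


def html_to_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.chunks)
```

The result loses layout, but the workflow above only needs locatable text, so a rough conversion is sufficient.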

Issue: claims are vague (“significant”, “better”, “state-of-the-art”)

Fix:

  • Rewrite each claim to include the measurable dimension (metric/dataset/baseline) or mark it as “underspecified” with a note.
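Flagging such claims can be partially automated with a term list. The list below is illustrative, not exhaustive; any flagged claim still needs a human rewrite or an "underspecified" note.

```python
import re

# Illustrative vague-term list; extend to taste.
VAGUE_TERMS = ["significant", "significantly", "better",
               "state-of-the-art", "substantial", "novel"]


def flag_vague(claim: str) -> list[str]:
    """Return the vague terms found in a claim, so the claim can be
    rewritten with a metric/dataset/baseline or marked underspecified."""
    found = []
    for term in VAGUE_TERMS:
        if re.search(rf"\b{re.escape(term)}\b", claim, re.IGNORECASE):
            found.append(term)
    return found
```

An empty result does not guarantee the claim is measurable; it only means none of the listed red-flag terms appeared.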