Paper Result Asset Builder
Turn CSV experiment outputs into paper-facing evidence assets: tables, figures, captions/callouts, and provenance records. This skill is the bridge from raw result files to paper artifacts.
Use this skill for:
- inventorying CSV result files and their columns, metrics, methods, datasets, seeds, and run identifiers
- deciding which CSV slices can support a paper claim
- generating or updating tables/*.tex, figures/*.pdf, figures/*.png, and figures/*.tex
- recording filtering, aggregation, rounding, bolding, plotting, and provenance rules
- separating paper-facing visualization from experiment-time debugging plots
- preparing assets for table-results-review, figure-results-review, and experiment-story-writer
Do not use this skill to decide whether a paper claim is worth making. Use paper-evidence-board or paper-evidence-gap-miner for claim/evidence triage. Do not use it to write the results narrative; use experiment-story-writer after assets exist. Do not use it for experiment-time debugging dashboards.
Skill Directory Layout
<installed-skill-dir>/
├── SKILL.md
├── scripts/
│ └── inventory_csv_results.py
├── references/
│ ├── csv-result-contract.md
│ └── paper-asset-rules.md
└── templates/
├── result-inventory.md
└── result-asset-provenance.md
Progressive Loading
- Always read references/csv-result-contract.md and references/paper-asset-rules.md.
- Use scripts/inventory_csv_results.py to inspect CSV files before reading large result files into context.
- Use templates/result-inventory.md when creating paper/.agent/result-inventory.md.
- Use templates/result-asset-provenance.md when creating paper/.agent/result-asset-provenance.md.
- Read local paper/.agent/writing-contract.md, paper/.agent/paper-evidence-board.md, paper/.agent/writing-memory/, paper/.agent/evidence-completion-plan.md, paper/.agent/visual-style.md, figures/*.tex, tables/*.tex, and current result sections when present.
Core Principles
- Paper assets exist to support claims, not to dump all available results.
- CSV files are source evidence; paper tables and figures are curated views over those sources.
- Experiment-time visualizations and paper-facing visualizations are different artifacts.
- Paper-facing assets may be visible to coauthors, reviewers, arXiv, or publishers; CSVs, plotting scripts, notebooks, provenance ledgers, and internal diagnostic plots are private unless explicitly cleaned for that audience.
- Every table or figure needs source CSV paths, filtering rules, aggregation rules, metric direction, rounding, styling, and claim mapping.
- Prefer reusing existing CSV results before asking for new compute.
- Never silently hand-enter numbers without provenance.
- If the asset requires a missing result, route to paper-evidence-gap-miner before inventing placeholders.
- After building assets, route to table-results-review or figure-results-review.
Step 1 - Locate CSV Result Sources
Search likely locations:
code/docs/results/**/*.csv
code/docs/runs/**/*.csv
code/docs/reports/**/*.csv
code/outputs/**/*.csv
code/results/**/*.csv
outputs/**/*.csv
results/**/*.csv
If the project uses different result paths, follow the user's paths.
Run the inventory script when useful:
python3 <installed-skill-dir>/scripts/inventory_csv_results.py <project-dir>
Save the curated inventory to:
paper/.agent/result-inventory.md
If there is no paper/ directory and the current directory is the paper repo, save to:
.agent/result-inventory.md
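The bundled scripts/inventory_csv_results.py handles the actual inventory; as a rough illustration of what such a pass collects, a minimal sketch might walk the project and record each CSV's path, columns, and row count without loading whole files into context (the exact fields the real script records may differ):

```python
import csv
from pathlib import Path

def inventory_csv_results(project_dir):
    """Summarize each CSV under project_dir: path, header columns, row count.

    A minimal sketch only; the skill's inventory_csv_results.py may also
    extract metrics, methods, datasets, seeds, and run identifiers.
    """
    records = []
    for path in sorted(Path(project_dir).rglob("*.csv")):
        with path.open(newline="") as f:
            reader = csv.reader(f)
            header = next(reader, [])      # first line as column names
            n_rows = sum(1 for _ in reader)  # count data rows without storing them
        records.append({"path": str(path), "columns": header, "rows": n_rows})
    return records
```

The per-file summaries can then be curated by hand into paper/.agent/result-inventory.md.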
Step 2 - Define the Paper Asset Job
For each requested table or figure, write:
- Asset ID:
- Asset type: table / figure
- Paper location:
- Claim supported:
- Reviewer question answered:
- Source CSV files:
- Required rows/slices:
- Metrics:
- Aggregation:
- Uncertainty:
- Styling:
- Output paths:
If this cannot be written, route to paper-evidence-board or paper-evidence-gap-miner.
Step 3 - Build the Table or Figure View
Use structured CSV parsing, not ad hoc copy/paste.
For tables:
- choose row and column groups around the reader's comparison path
- aggregate seeds/repeats before rounding
- define bolding/underlining and missing-value rules
- include metric direction arrows when appropriate
- output standalone tables/<name>.tex
- keep the source CSVs and generation scripts outside author-visible/public paper source unless the project explicitly chooses to release them
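A sketch of the "aggregate before rounding, then bold the best" rule, assuming hypothetical columns named `method`, `seed`, and a metric column; adapt the names to your CSV contract:

```python
import csv
from collections import defaultdict
from statistics import mean, stdev

def build_table_rows(csv_path, metric="accuracy", higher_is_better=True):
    """Aggregate per-seed results into one LaTeX row per method.

    Aggregation happens on raw floats, rounding only at formatting time,
    and the best mean is wrapped in \\textbf{}. Illustrative only.
    """
    by_method = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            by_method[row["method"]].append(float(row[metric]))

    means = {m: mean(v) for m, v in by_method.items()}
    pick = max if higher_is_better else min
    best = pick(means, key=means.get)

    lines = []
    for method, vals in sorted(by_method.items()):
        cell = f"{means[method]:.2f}"
        if len(vals) > 1:                      # report spread across seeds
            cell += f" $\\pm$ {stdev(vals):.2f}"
        if method == best:
            cell = f"\\textbf{{{cell}}}"
        lines.append(f"{method} & {cell} \\\\")
    return lines
```

Emit the returned rows inside a standalone tables/<name>.tex, keeping the script itself outside the public paper source.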
For figures:
- choose plot type based on the claim: bar, line, scatter, heatmap, Pareto frontier, calibration curve, slice plot, qualitative grid, or appendix diagnostic
- apply paper/.agent/visual-style.md when present
- export stable paper assets such as figures/<name>.pdf and optionally figures/<name>.png
- output a LaTeX wrapper figures/<name>.tex with caption and label scaffolding
- keep private plotting code, notebooks, and debug plots out of author-visible/public source by default
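The wrapper step can be sketched as a small helper that emits figures/<name>.tex around an already-exported PDF. The [t] placement, graphicx usage, and fig:<name> label convention are assumptions; match them to the venue preamble and paper/.agent/visual-style.md:

```python
from pathlib import Path

def write_figure_wrapper(name, caption, figures_dir="figures", label=None):
    """Write figures/<name>.tex wrapping an already-exported figures/<name>.pdf.

    A sketch under assumed LaTeX conventions, not a fixed template.
    """
    label = label or f"fig:{name}"
    wrapper = (
        "\\begin{figure}[t]\n"
        "  \\centering\n"
        f"  \\includegraphics[width=\\linewidth]{{figures/{name}.pdf}}\n"
        f"  \\caption{{{caption}}}\n"
        f"  \\label{{{label}}}\n"
        "\\end{figure}\n"
    )
    out = Path(figures_dir) / f"{name}.tex"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(wrapper)
    return wrapper
```

Keeping the wrapper separate from the plotting code lets captions and labels evolve in the paper repo while the private plotting scripts stay out of the visible source.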
Step 4 - Record Provenance
Update:
paper/.agent/result-asset-provenance.md
For every asset, record:
- source CSV path and file hash when practical
- run IDs, configs, seeds, datasets, splits, methods, and metrics
- filtering and aggregation code or exact rules
- rounding and bolding rules
- plotting parameters and visual style decisions
- manual edits, if any
- claim IDs and paper locations
- uncertainty or missing provenance
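One way to make the "file hash when practical" item concrete is a small record builder; the field names here are illustrative and should be matched to templates/result-asset-provenance.md:

```python
import hashlib
from datetime import date
from pathlib import Path

def provenance_entry(asset_id, csv_paths, rules, claim_ids):
    """Build one provenance record for a table or figure asset.

    Field names are hypothetical; align them with the skill's
    result-asset-provenance.md template before writing the ledger.
    """
    def sha256(path):
        # hash the exact bytes the asset was built from
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()

    return {
        "asset_id": asset_id,
        "date": date.today().isoformat(),
        "sources": [{"path": p, "sha256": sha256(p)} for p in csv_paths],
        "rules": rules,       # filtering/aggregation/rounding/bolding decisions
        "claims": claim_ids,  # claim IDs and paper locations this asset supports
    }
```

Appending such records to paper/.agent/result-asset-provenance.md keeps every number traceable even after the source CSVs change.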
Step 5 - Handoff to Review and Writing
After creating assets:
- use table-results-review for tables/*.tex
- use figure-results-review for figures/*.pdf, figures/*.png, and figures/*.tex
- use experiment-story-writer for result prose
- update paper-evidence-board with asset-to-claim mappings
- update paper-writing-memory-manager when new or changed assets affect captions, result prose, abstract, introduction, limitations, or conclusion
- update memory/provenance-board.md with CSV/report-to-asset traceability, aggregation rules, scripts, and paper locations when project memory exists
- update memory/source-visibility-board.md or route to submit-paper when generated assets are intended for an author-visible, anonymous, arXiv, camera-ready, or publisher-visible source surface
- update memory/handoff-board.md with ready handoffs to figure-results-review, table-results-review, experiment-story-writer, or paper-evidence-board
- update memory/phase-dashboard.md when the project moves from evidence production to paper asset building or drafting
Final Sanity Check
Before finalizing:
- every number or plotted point traces back to a CSV source
- filtering and aggregation rules are explicit
- paper-facing visual style is separate from experiment-time visualization
- output files are paper-ready assets, not raw debug plots
- table/figure job maps to a claim or reviewer question
- missing results are routed to paper-evidence-gap-miner
- provenance is saved or returned
- private source files such as CSVs, notebooks, plotting scripts, and provenance ledgers are not placed in visible paper source unless intentionally cleaned