paper-context-resolver
When to apply
- README and repo files leave a reproduction-critical gap.
- The gap concerns dataset version, split, preprocessing, evaluation protocol, checkpoint mapping, or runtime assumptions.
- The main skill needs a narrow evidence supplement instead of a full paper summary.
- There is already a concrete reproduction question to answer.
When not to apply
- The README already gives enough reproduction detail.
- The user wants a general paper explanation rather than reproduction support.
- The goal is to override README instructions without documenting the conflict.
- The only available input is a paper title and there is no concrete reproduction gap yet.
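The apply and not-apply conditions above can be read as a routing predicate. The sketch below is illustrative only; the field names (`has_repro_gap`, `gap_topic`, `concrete_question`, and so on) are hypothetical and not part of any defined skill API.

```python
# Hypothetical routing check for paper-context-resolver.
# All request keys are illustrative, not a real skill interface.

GAP_TOPICS = {
    "dataset_version", "split", "preprocessing",
    "evaluation_protocol", "checkpoint_mapping", "runtime_assumptions",
}

def should_invoke_resolver(request: dict) -> bool:
    """Return True only when a narrow, reproduction-critical gap exists."""
    if not request.get("has_repro_gap"):         # README already gives enough detail
        return False
    if request.get("gap_topic") not in GAP_TOPICS:
        return False
    if request.get("wants_full_paper_summary"):  # general explanation, not reproduction
        return False
    if not request.get("concrete_question"):     # no concrete reproduction question yet
        return False
    return True
```

A title-only request with no concrete gap would fail the `concrete_question` check, matching the last "when not to apply" rule.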
Clear boundaries
- This skill is optional.
- This skill is helper-tier and should usually be orchestrator-invoked.
- It supplements README-first reproduction.
- It does not replace the main orchestration flow.
- It does not summarize the whole paper by default.
Input expectations
- target repo metadata
- reproduction-critical question
- existing README or repo evidence
- any already known paper links
Output expectations
- narrowed source list
- reproduction-relevant answer only
- explicit README-paper conflict note when applicable
- clear distinction between direct evidence and inference
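As a minimal sketch, the input and output expectations above could be carried as plain records. Every key and value below is a hypothetical illustration of the contract; the skill does not define a formal schema.

```python
# Hypothetical shapes for the resolver's input and output records.
# Keys and example values are illustrative, not a defined schema.

resolver_input = {
    "repo": {
        "name": "example/paper-repo",  # placeholder repo name
        "readme_evidence": "eval command given, but split unspecified",
    },
    "question": "Which dataset split does the evaluation command report on?",
    "known_paper_links": ["https://example.org/paper.pdf"],  # placeholder URL
}

resolver_output = {
    "sources": ["paper Sec. 4.1", "appendix Table 7"],  # narrowed source list
    "answer": "Evaluation is reported on the validation split.",
    "readme_paper_conflict": None,      # filled in only when README and paper disagree
    "evidence_kind": "direct",          # "direct" evidence vs "inference"
}
```

Keeping `readme_paper_conflict` as an explicit field, even when empty, makes the conflict-note expectation hard to skip silently.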
Notes
Use `references/paper-assisted-reproduction.md`.
More from lllllllama/ai-paper-reproduction-skills
analyze-project
Trusted-lane analysis skill for deep learning research repositories. Use when the user wants to read and understand a repository, inspect model structure and training or inference entrypoints, review configs and insertion points, or flag suspicious implementation patterns without modifying code or running heavy jobs. Do not use for active command execution, broad refactoring, speculative code adaptation, or automatic bug fixing.
ai-research-reproduction
Main orchestrator for README-first AI repo reproduction. Use when the user wants an end-to-end, minimal-trustworthy reproduction flow that reads the repository first, selects the smallest documented inference or evaluation target, coordinates intake, setup, trusted execution, optional trusted training, optional repository analysis, and optional paper-gap resolution, enforces conservative patch rules, records evidence, assumptions, deviations, and human decision points, and writes the standardized `repro_outputs/` bundle. Do not use for paper summary, generic environment setup, isolated repo scanning, standalone command execution, silent protocol changes, or broad research assistance outside repository-grounded reproduction.
explore-code
Explore-lane code adaptation skill for deep learning research repositories. Use when the researcher explicitly authorizes exploratory work on an isolated branch or worktree to transplant modules, adapt a backbone, add LoRA or adapter layers, replace a head, or stitch together low-risk migration ideas with summary-only records in `explore_outputs/`. Do not use for end-to-end exploration orchestration on top of `current_research`, trusted baseline reproduction, conservative debugging, environment setup, or default repository analysis.
safe-debug
Trusted-lane debug skill for deep learning research work. Use when the user pastes a traceback, terminal error, CUDA OOM, checkpoint load failure, shape mismatch, NaN loss symptom, or training failure and wants conservative diagnosis before any patching. Do not use for broad refactoring, speculative adaptation, automatic exploratory patching, or general repository familiarization.
explore-run
Explore-lane experimental execution skill for deep learning research repositories. Use when the researcher explicitly authorizes exploratory runs such as small-subset validation, short-cycle guess-and-check, batch sweeps, idle-GPU search, or quick transfer-learning trials, with results summarized in `explore_outputs/`. Do not use for end-to-end exploration orchestration on top of `current_research`, trusted baseline execution, conservative training verification, default routing, or implicit experimentation.
run-train
Trusted-lane training execution skill for deep learning research repositories. Use when a documented or selected training command should be run conservatively for startup verification, short-run verification, full kickoff, or resume, with status, checkpoint, and metric capture written to standardized `train_outputs/`. Do not use for environment setup, exploratory sweeps, speculative idea implementation, or end-to-end orchestration.