ai-research-reproduction
Use when
- The user wants the agent to reproduce an AI paper repository.
- The target is a code repository with a README, scripts, configs, or documented commands.
- The goal is a minimal trustworthy run, not unlimited experimentation.
- The user needs standardized outputs that another human or model can audit quickly.
- The task spans more than one stage, such as intake plus setup, or setup plus execution plus reporting.
Do not use when
- The task is a general literature review or paper summary.
- The task is to design a new model, benchmark suite, or training pipeline from scratch.
- The repository is not centered on AI or does not expose a documented reproduction path.
- The user primarily wants a deep code refactor rather than README-first reproduction.
- The user is explicitly asking for only one narrow phase that a sub-skill already covers cleanly.
- The user is explicitly authorizing exploratory branch-only experimentation instead of trusted reproduction.
Success criteria
- README is treated as the primary source of reproduction intent.
- A minimum trustworthy target is selected and justified.
- Documented inference is preferred over evaluation, and evaluation is preferred over training.
- Any repo edits remain conservative, explicit, and auditable.
- Assumptions, protocol deviations, and human decision points are surfaced rather than hidden.
- `repro_outputs/` is generated with consistent structure and stable machine-readable fields.
- Final user-facing explanation is short and follows the user's language when practical.
Interaction and usability policy
- Keep the workflow simple enough for a new user to understand quickly.
- Prefer short, concrete plans over exhaustive research.
- Expose commands, assumptions, blockers, and evidence.
- Avoid turning the skill into an opaque automation layer.
- Preserve a low learning cost for both humans and downstream agents.
Language policy
- Human-readable Markdown outputs should follow the user's language when it is clear.
- If the user's language is unclear, default to concise English.
- Machine-readable fields, filenames, keys, and enum values stay in stable English.
- Paths, package names, CLI commands, config keys, and code identifiers remain unchanged.
See references/language-policy.md.
Reproduction policy
Core priority order:
- documented inference
- documented evaluation
- documented training startup or partial verification
- full training only when the user explicitly asks later
Rules:
- README-first: use repository files to clarify the README, not to casually override it.
- Aim for minimal trustworthy reproduction rather than maximum task coverage.
- Treat smoke tests, startup verification, and early-step checks as valid training evidence when full training is not appropriate.
- In trusted reproduction, a documented training command should first be checked through startup verification or a short monitoring window, then paused for explicit human confirmation before broader training continues.
- In explicitly authorized explore-lane execution, the training record can continue without the trusted-lane confirmation pause, but it must stay isolated from trusted conclusions.
- Record unresolved gaps rather than fabricating confidence.
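The startup-verification rule above can be sketched as a short monitoring window over the documented training command. In this sketch the training script is a stand-in written on the spot; substitute the exact command from the repository README.

```shell
# Minimal sketch of startup verification for a documented training command.
# "python3 train.py" is a stand-in; use the exact command from the repo README.
printf 'for s in range(5):\n    print("step", s)\n' > train.py  # stand-in script
timeout 120 python3 train.py > startup.log 2>&1                 # short monitoring window
grep -q "step 4" startup.log \
  && echo "startup verified; pausing for human confirmation before full training"
```

The captured `startup.log` becomes the training evidence, and the explicit pause keeps the trusted lane from sliding into full training without human confirmation.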
Patch policy
- Prefer no code changes.
- Prefer safer adjustments first:
  - command-line arguments
  - environment variables
  - path fixes
  - dependency version fixes
  - dependency file fixes such as `requirements.txt` or `environment.yml`
- Avoid changing:
  - model architecture
  - core inference semantics
  - core training logic
  - loss functions
  - experiment meaning
- If repository files must change:
  - create a patch branch first using `repro/YYYY-MM-DD-short-task`
  - apply low-risk changes before medium-risk changes
  - avoid high-risk changes by default
  - commit only verified groups of changes
  - keep verified patch commits sparse, usually 0-2
  - use commit messages in the form `repro: <scope> for documented <command>`
See references/patch-policy.md.
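The branch-and-commit conventions above can be sketched end to end. Everything here is illustrative: the temporary directory stands in for the cloned target repository, and the `torch` pin is a hypothetical low-risk dependency fix, not taken from any specific repository.

```shell
# Minimal sketch of the patch-branch flow; the repo, paths, and the dependency
# pin are illustrative assumptions, not taken from any specific repository.
set -e
repo=$(mktemp -d)                         # stand-in for the cloned target repo
git -C "$repo" init -q
echo "torch==2.1.0" > "$repo/requirements.txt"
git -C "$repo" add requirements.txt
git -C "$repo" -c user.email=repro@local -c user.name=repro commit -qm "init"
branch="repro/$(date +%F)-pin-torch"      # repro/YYYY-MM-DD-short-task
git -C "$repo" checkout -qb "$branch"
echo "torch==2.1.1" > "$repo/requirements.txt"   # low-risk dependency fix
git -C "$repo" add requirements.txt
git -C "$repo" -c user.email=repro@local -c user.name=repro commit -qm \
  "repro: requirements.txt for documented python eval.py"
```

Keeping the fix on its own `repro/...` branch with a single verified commit makes the change easy to audit or discard without touching the repository's default branch.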
Research safety boundary
- Preserve experiment meaning over convenience.
- Do not silently change dataset, split, checkpoint, preprocessing, metric, loss, or model semantics.
- Distinguish direct evidence from inference and from user-approved decisions.
- Prefer a recorded blocker over an unrecorded workaround.
- Escalate for explicit human review before any change that could alter scientific meaning or reported conclusions.
See references/research-safety-principles.md.
Workflow
- Read README and repo signals.
- Call `repo-intake-and-plan` to scan the repository and extract documented commands.
- Select the smallest trustworthy reproduction target.
- Call `env-and-assets-bootstrap` to prepare environment assumptions and asset paths.
- Call `analyze-project` only when repo structure, insertion points, or suspicious implementation patterns need a read-only pass before continuing.
- Run a conservative smoke check or documented inference or evaluation command with `minimal-run-and-audit`.
- If the selected trustworthy target is documented training startup, short-run verification, or resume, hand execution to `run-train` instead of `minimal-run-and-audit`.
- When training is selected inside trusted reproduction, let `run-train` capture the startup evidence first, then surface a human review checkpoint before any fuller training claim.
- Stop for human review if protocol meaning, model semantics, or result interpretation would otherwise be changed implicitly.
- Use `paper-context-resolver` only if README and repo files leave a narrow reproduction-critical gap that blocks the current target.
- Never auto-route into `explore-code` or `explore-run`; exploration requires explicit user authorization.
- Write the standardized outputs with evidence, assumptions, deviations, and next safe action.
- Give the user a short final note in the user's language.
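The evidence-capture step of the workflow can be sketched as follows. The inference command here is a hypothetical stand-in; in practice it is taken verbatim from the repository README, and both the command and its output are appended to the log so the run stays auditable.

```shell
# Minimal sketch of capturing run evidence; the inference command is a
# hypothetical stand-in, use the documented command from the README.
cmd='python3 -c "print(\"smoke ok\")"'
mkdir -p repro_outputs
{ echo "command: $cmd"; eval "$cmd"; } | tee -a repro_outputs/LOG.md
```

Recording the exact command next to its output is what lets another human or model verify the run without rerunning it.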
Required outputs
Always target:
repro_outputs/
  SUMMARY.md
  COMMANDS.md
  LOG.md
  status.json
  PATCHES.md    # only if patches were applied
Use the templates under assets/ and the field rules in references/output-spec.md.
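A minimal scaffold of the bundle can be sketched as below. The `status.json` fields shown are illustrative assumptions only; the authoritative field rules live in references/output-spec.md.

```shell
# Minimal sketch of the output bundle; the status.json fields shown are
# illustrative assumptions, see references/output-spec.md for the real schema.
mkdir -p repro_outputs
: > repro_outputs/SUMMARY.md
: > repro_outputs/COMMANDS.md
: > repro_outputs/LOG.md
cat > repro_outputs/status.json <<'EOF'
{
  "target": "documented_inference",
  "status": "completed",
  "blockers": []
}
EOF
```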
Reporting policy
- Put the shortest high-value summary in `SUMMARY.md`.
- Put copyable commands in `COMMANDS.md`.
- Put process evidence, assumptions, failures, and decisions in `LOG.md`.
- Put durable machine-readable state in `status.json`.
- Put branch, commit, validation, and README-fidelity impact in `PATCHES.md` when needed.
- Distinguish verified facts from inferred guesses.
Maintainability notes
- Keep this skill narrow: README-first AI repo reproduction only.
- Push specialized logic into sub-skills or helper scripts.
- Prefer stable templates and simple schemas over ad hoc prose.
- Keep machine-readable outputs backward compatible when possible.
- Add new evidence sources only when they improve auditability without raising learning cost.
- Treat `repo-intake-and-plan` and `paper-context-resolver` as narrow helpers, not primary public entrypoints.
More from lllllllama/ai-paper-reproduction-skill

paper-context-resolver
Optional narrow helper skill for README-first AI repo reproduction. Use only when the README and repository files leave a narrow reproduction-critical gap and the task is to resolve a specific paper detail such as dataset split, preprocessing, evaluation protocol, checkpoint mapping, or runtime assumption from primary paper sources while recording conflicts. Do not use for general paper summary, repo scanning, environment setup, command execution, title-only paper lookup, or replacing README guidance by default.

env-and-assets-bootstrap
Environment and assets sub-skill for README-first AI repo reproduction. Use when the task is specifically to prepare a conservative conda-first environment, checkpoint and dataset path assumptions, cache location hints, and setup notes before any run on a README-documented repository. Do not use for repo scanning, full orchestration, paper interpretation, final run reporting, or generic environment setup that is not tied to a specific reproduction target.

repo-intake-and-plan
Narrow helper skill for README-first AI repo reproduction. Use when the task is specifically to scan a repository, read the README and common project files, extract documented commands, classify inference, evaluation, and training candidates, and return the smallest trustworthy reproduction plan to the main orchestrator. Do not use for environment setup, asset download, command execution, final reporting, paper lookup, or end-to-end orchestration.

minimal-run-and-audit
Trusted-lane execution and reporting skill for README-first AI repo reproduction. Use when the task is specifically to capture or normalize evidence from the selected smoke test or documented inference or evaluation command and write standardized `repro_outputs/` files, including patch notes when repository files changed. Do not use for training execution, initial repo intake, generic environment setup, paper lookup, target selection, or end-to-end orchestration by itself.

ai-paper-reproduction
Main orchestrator for README-first AI repo reproduction. Use when the user wants an end-to-end, minimal-trustworthy reproduction flow that reads the repository first, selects the smallest documented inference or evaluation target, coordinates intake, setup, trusted execution, optional trusted training, optional repository analysis, and optional paper-gap resolution, enforces conservative patch rules, records evidence, assumptions, deviations, and human decision points, and writes the standardized `repro_outputs/` bundle. Do not use for paper summary, generic environment setup, isolated repo scanning, standalone command execution, silent protocol changes, or broad research assistance outside repository-grounded reproduction.

explore-code
Explore-lane code adaptation skill for deep learning research repositories. Use when the researcher explicitly authorizes exploratory work on an isolated branch or worktree to transplant modules, adapt a backbone, add LoRA or adapter layers, replace a head, or stitch together low-risk migration ideas with summary-only records in `explore_outputs/`. Do not use for end-to-end exploration orchestration on top of `current_research`, trusted baseline reproduction, conservative debugging, environment setup, or default repository analysis.