run-train
When to apply
- When the training command has already been selected and should be executed conservatively.
- When the researcher wants startup verification, short-run verification, full training kickoff, or resume handling.
- When the run needs structured training status, checkpoint, and metric reporting.
When not to apply
- When the main task is environment setup or asset download.
- When the researcher wants inference-only or evaluation-only execution.
- When the task is speculative exploration, multi-variant sweeps, or autonomous idea implementation.
- When the user still needs repository intake or paper gap resolution.
Clear boundaries
- This skill executes a selected training command and normalizes the resulting evidence.
- It does not choose the overall research goal on its own.
- It does not own exploratory branching or speculative code adaptation.
- It should record partial, blocked, resumed, and kicked-off states clearly.
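The run states named above could be represented as a small enum; the identifiers below are an illustrative assumption, not part of the skill's actual interface:

```python
from enum import Enum

class RunState(str, Enum):
    """Hypothetical run states mirroring the ones this skill records."""
    KICKED_OFF = "kicked_off"  # full training launched
    PARTIAL = "partial"        # run stopped before the configured end
    BLOCKED = "blocked"        # missing asset, environment, or permission
    RESUMED = "resumed"        # continued from an existing checkpoint
```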
Input expectations
- selected training goal
- runnable training command
- environment and asset assumptions
- run mode such as startup verification, short-run verification, full kickoff, or resume
Output expectations
- train_outputs/SUMMARY.md
- train_outputs/COMMANDS.md
- train_outputs/LOG.md
- train_outputs/status.json
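A minimal sketch of what train_outputs/status.json might contain; every field name here is an assumption for illustration, since the real schema is defined by scripts/write_outputs.py:

```python
import json
from pathlib import Path

# Hypothetical status payload; field names are illustrative, not the real schema.
status = {
    "run_mode": "short-run-verification",
    "state": "partial",  # e.g. partial / blocked / resumed / kicked_off
    "last_checkpoint": "checkpoints/step_000500.pt",
    "metrics": {"loss": 2.31, "step": 500},
}

out_dir = Path("train_outputs")
out_dir.mkdir(exist_ok=True)
(out_dir / "status.json").write_text(json.dumps(status, indent=2))
```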
Notes
Use references/training-policy.md, scripts/run_training.py, and scripts/write_outputs.py.
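One way the run modes and the helper script might be wired together is sketched below; the flag names and argument layout are assumptions, since only the file names appear in this document:

```python
import shlex

def build_run_command(train_cmd: str, run_mode: str) -> list[str]:
    """Wrap a selected training command for scripts/run_training.py.

    Hypothetical interface: assumes run_training.py accepts --mode and
    takes the training command as trailing arguments after "--".
    """
    allowed = {
        "startup-verification",
        "short-run-verification",
        "full-kickoff",
        "resume",
    }
    if run_mode not in allowed:
        raise ValueError(f"unknown run mode: {run_mode}")
    return [
        "python", "scripts/run_training.py",
        "--mode", run_mode,
        "--", *shlex.split(train_cmd),
    ]

cmd = build_run_command("python train.py --config configs/base.yaml",
                        "startup-verification")
```

Validating the run mode before execution matches the conservative posture described above: an unknown mode fails fast instead of silently launching a run.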