explore-code
When to apply
- When the researcher explicitly authorizes exploratory code changes on an isolated branch or worktree.
- When the task is a source-anchored module transplant, backbone adaptation, LoRA or adapter insertion, or a low-risk module combination.
- When summary-level recording is sufficient and the result is a candidate, not a trusted conclusion.
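The LoRA-insertion case above can be sketched minimally. This is an illustrative toy, not the skill's implementation: the shapes, rank, and alpha are assumptions, and a real exploration would wrap the repository's own layers (e.g. `torch.nn.Linear`) rather than NumPy arrays.

```python
import numpy as np

class LoRALinear:
    """Toy LoRA wrapper: y = x @ W.T + scale * (x @ A.T) @ B.T.

    Hypothetical sketch; the base weight stays frozen and only the
    low-rank factors A and B would be trained.
    """
    def __init__(self, weight, rank=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.weight = weight                         # frozen base weight, shape (out, in)
        out_f, in_f = weight.shape
        self.A = rng.normal(0, 0.01, (rank, in_f))   # trainable down-projection
        self.B = np.zeros((out_f, rank))             # trainable up-projection, zero-init
        self.scale = alpha / rank

    def __call__(self, x):
        return x @ self.weight.T + self.scale * (x @ self.A.T) @ self.B.T

base = np.eye(3)
layer = LoRALinear(base)
x = np.ones((1, 3))
# With B zero-initialized, the adapter starts as a no-op on the base layer.
```

Zero-initializing `B` is the standard LoRA choice: the adapted layer reproduces the base layer exactly at insertion time, which keeps the first exploratory run comparable to the baseline.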
When not to apply
- When the request is for trusted baseline work, conservative debugging, or normal training execution.
- When the user did not explicitly authorize exploratory modifications.
- When the task is a broad refactor or a from-scratch idea implementation.
Clear boundaries
- This skill owns exploratory code modifications only.
- It must keep work isolated from the trusted baseline.
- Use ai-research-explore instead when the task spans both current_research coordination and exploratory runs.
- It may hand off execution to minimal-run-and-audit or run-train.
- It should favor source-anchored copying and minimal adaptation over freeform rewrites.
Output expectations
- explore_outputs/CHANGESET.md
- explore_outputs/TOP_RUNS.md
- explore_outputs/status.json
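The status file above can be emitted with a few lines of stdlib Python. The field names below are assumptions for illustration only; the authoritative schema lives in scripts/write_outputs.py.

```python
import json
from pathlib import Path

# Hypothetical payload; these keys are assumed, not the skill's real schema.
status = {
    "branch": "explore/lora-insertion",  # assumed exploratory branch name
    "result": "candidate",               # explore results are candidates, not trusted
    "runs_summarized": 3,
}

out_dir = Path("explore_outputs")
out_dir.mkdir(exist_ok=True)
status_path = out_dir / "status.json"
status_path.write_text(json.dumps(status, indent=2))
```

Writing machine-readable status alongside the Markdown summaries lets downstream skills (e.g. an orchestrator) poll the explore lane without parsing prose.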
Notes
Use references/explore-policy.md, scripts/plan_code_changes.py, and scripts/write_outputs.py.
More from lllllllama/ai-paper-reproduction-skills
analyze-project
Trusted-lane analysis skill for deep learning research repositories. Use when the user wants to read and understand a repository, inspect model structure and training or inference entrypoints, review configs and insertion points, or flag suspicious implementation patterns without modifying code or running heavy jobs. Do not use for active command execution, broad refactoring, speculative code adaptation, or automatic bug fixing.
ai-research-reproduction
Main orchestrator for README-first AI repo reproduction. Use when the user wants an end-to-end, minimal-trustworthy reproduction flow that reads the repository first, selects the smallest documented inference or evaluation target, coordinates intake, setup, trusted execution, optional trusted training, optional repository analysis, and optional paper-gap resolution, enforces conservative patch rules, records evidence, assumptions, deviations, and human decision points, and writes the standardized `repro_outputs/` bundle. Do not use for paper summary, generic environment setup, isolated repo scanning, standalone command execution, silent protocol changes, or broad research assistance outside repository-grounded reproduction.
explore-run
Explore-lane experimental execution skill for deep learning research repositories. Use when the researcher explicitly authorizes exploratory runs such as small-subset validation, short-cycle guess-and-check, batch sweeps, idle-GPU search, or quick transfer-learning trials, with results summarized in `explore_outputs/`. Do not use for end-to-end exploration orchestration on top of `current_research`, trusted baseline execution, conservative training verification, default routing, or implicit experimentation.
run-train
Trusted-lane training execution skill for deep learning research repositories. Use when a documented or selected training command should be run conservatively for startup verification, short-run verification, full kickoff, or resume, with status, checkpoint, and metric capture written to standardized `train_outputs/`. Do not use for environment setup, exploratory sweeps, speculative idea implementation, or end-to-end orchestration.
minimal-run-and-audit
Trusted-lane execution and reporting skill for README-first AI repo reproduction. Use when the task is specifically to capture or normalize evidence from the selected smoke test or documented inference or evaluation command and write standardized `repro_outputs/` files, including patch notes when repository files changed. Do not use for training execution, initial repo intake, generic environment setup, paper lookup, target selection, or end-to-end orchestration by itself.
repo-intake-and-plan
Narrow helper skill for README-first AI repo reproduction. Use when the task is specifically to scan a repository, read the README and common project files, extract documented commands, classify inference, evaluation, and training candidates, and return the smallest trustworthy reproduction plan to the main orchestrator. Do not use for environment setup, asset download, command execution, final reporting, paper lookup, or end-to-end orchestration.