algorithm-design-planner
Algorithm Design Planner
Convert a validated research idea into a concrete method design that can be implemented, ablated, evaluated, and explained in a paper.
Use this skill when:
- an idea has passed early validation and needs an actual algorithm
- a method, loss, architecture, inference procedure, or training recipe is underspecified
- the user needs a method design document before coding
- a project needs assumptions, failure modes, ablations, and implementation boundaries
- early results suggest revising the algorithm rather than only rerunning experiments
- a paper's method section is hard to write because the method itself is not precise
Do not use this skill to launch experiments. Pair it with experiment-design-planner after the design is specific enough to test.
Pair this skill with:
- research-project-memory: when the design changes claims, assumptions, risks, actions, or worktree purpose
- research-idea-validator: before this skill, if the idea itself may not be worth pursuing
- literature-review-sprint: when the closest prior method is unclear
- experiment-design-planner: after the method produces testable hypotheses and ablations
- run-experiment: only after implementation and experiment design are ready
- conference-writing-adapter: when translating the final design into paper prose
Skill Directory Layout
```
<installed-skill-dir>/
├── SKILL.md
└── references/
    ├── ablation-implications.md
    ├── design-rubric.md
    ├── failure-mode-map.md
    ├── implementation-handoff.md
    ├── method-spec-template.md
    └── paper-method-bridge.md
```
Progressive Loading
- Always read references/design-rubric.md and references/method-spec-template.md.
- Read references/failure-mode-map.md when assumptions, edge cases, or negative results matter.
- Read references/ablation-implications.md when the method has components, losses, objectives, schedules, architectures, or inference changes.
- Read references/implementation-handoff.md before producing coding tasks or worktree plans.
- Read references/paper-method-bridge.md when the design must become a method section.
- If novelty depends on current methods or baselines, verify with web search or user-provided papers.
Core Principles
- Design the mechanism before designing the experiment.
- Separate the problem, method, claim, and evidence plan.
- Make the smallest method that could test the core idea.
- State assumptions and invariants explicitly.
- Identify what is genuinely new relative to the closest baseline.
- Every method component should have a reason, an ablation, and a failure mode.
- Avoid adding knobs that cannot be justified, tuned fairly, or explained to reviewers.
- Produce an implementation handoff that prevents hidden design decisions from being made during coding.
Step 1 - Recover Context
Collect:
- validated idea or project direction
- current decision from research-idea-validator, if available
- target paper claim
- target model/task/domain
- closest baseline or prior method
- available codebase and implementation constraints
- known experiments or negative results
- project memory IDs such as CLM-###, RSK-###, or ACT-###, if present
If the idea is still vague, rewrite it into:
For [problem/setting], modify [baseline] by [mechanism] so that [expected property] improves because [assumption].
If this sentence cannot be written, route back to research-idea-validator or literature-review-sprint.
Step 2 - Choose Design Mode
Classify the design:
- method: new algorithm, training recipe, or inference procedure
- objective: new loss, regularizer, constraint, reward, or optimization criterion
- architecture: new module, representation, layer, routing, memory, or parameterization
- theory: formal method derived from assumptions, a theorem, a bound, or principles
- system: pipeline, infrastructure, scheduling, retrieval, data, or tooling design
- revision: method update after negative or ambiguous results
Use one primary mode and optional secondary modes.
Step 3 - Build the Method Spec
Read references/design-rubric.md and references/method-spec-template.md.
Define:
- problem formulation
- inputs and outputs
- baseline being modified
- core mechanism
- training objective or loss, if any
- inference or sampling procedure, if any
- architecture or module changes, if any
- assumptions and invariants
- hyperparameters and schedules
- computational cost
- expected behavior
- what stays unchanged from the baseline
Use math, pseudocode, or structured bullets as appropriate. Do not hide important design decisions in prose.
Step 4 - Check Novelty and Minimality
Ask:
- What is the irreducible difference from the closest baseline?
- Which part is necessary for the claim?
- Which part is convenience, engineering, or tuning?
- Can the first implementation test a smaller version?
- Could a reviewer call this a minor tweak?
If the new idea depends on multiple changes, separate core design from optional extensions.
Step 5 - Map Failure Modes
Read references/failure-mode-map.md.
List:
- assumptions that may be false
- data or task regimes where the method should fail
- optimization or stability risks
- metric mismatch risks
- computational risks
- confounds that could explain gains
- signs that the design should be revised, parked, or killed
Each negative outcome should map to a decision, not a vague concern.
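One way to enforce that mapping is to pre-commit decisions before running anything. A hypothetical sketch, where the failure signs and decisions are placeholders invented for illustration:

```python
# Pre-committed mapping from observed failure signs to decisions.
# All entries are illustrative examples, not prescribed by this skill.
failure_decisions = {
    "gain disappears when compute is matched": "kill: confound, not mechanism",
    "training diverges at high regularizer weight": "revise: constrain the schedule",
    "no effect on the metric the claim targets": "park: rethink the target claim",
}


def decide(sign: str) -> str:
    """Return the pre-committed decision for an observed failure sign."""
    return failure_decisions.get(sign, "escalate: no pre-committed decision")
```

Signs without a pre-committed decision are escalated rather than judged ad hoc mid-experiment.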
Step 6 - Derive Ablations and Diagnostics
Read references/ablation-implications.md.
For each method component, define:
- why it exists
- what happens if removed
- what diagnostic tests its mechanism
- what hyperparameter or schedule must be swept
- what baseline or control separates the mechanism from tuning or compute
This output should feed directly into experiment-design-planner.
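The five questions above can be recorded as one registry entry per component, which makes missing answers visible. A hypothetical sketch; the component, diagnostic, and sweep values are invented placeholders:

```python
# Hypothetical ablation registry: one entry per method component,
# one field per question above. Values are illustrative only.
ablations = [
    {
        "component": "routing entropy regularizer",
        "why": "prevents collapse onto a single retrieval path",
        "if_removed": "expect routing collapse and a drop on multi-hop queries",
        "diagnostic": "entropy of routing weights over training",
        "sweep": "entropy_weight in {0, 0.001, 0.01, 0.1}",
        "control": "baseline with matched compute and no regularizer",
    },
]

# Every entry must answer every question before handoff.
required = {"component", "why", "if_removed", "diagnostic", "sweep", "control"}
assert all(required <= set(entry) for entry in ablations)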
Step 7 - Prepare Implementation Handoff
Read references/implementation-handoff.md.
Produce:
- files/modules likely to change
- public interfaces or config names
- minimal prototype plan
- unit/smoke tests
- logging requirements
- worktree or branch purpose
- exit condition: merge, continue, park, or kill
- risks that coding should not decide silently
If no codebase exists, define a minimal scaffold or prototype boundary instead of a full engineering plan.
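A smoke test worth including in most handoffs checks that the method with its mechanism disabled reproduces the baseline exactly, so gains cannot come from silent implementation drift. A minimal sketch with placeholder functions; the real computation would be the project's model code:

```python
# Hypothetical smoke test: with the new mechanism switched off, the method
# must match the baseline exactly. Both computations here are stand-ins.
def method_forward(x: list[float], use_mechanism: bool) -> list[float]:
    out = [v * 2.0 for v in x]        # stand-in for the baseline computation
    if use_mechanism:
        out = [v + 0.1 for v in out]  # stand-in for the new mechanism
    return out


def test_mechanism_off_matches_baseline():
    x = [1.0, 2.0, 3.0]
    assert method_forward(x, use_mechanism=False) == [v * 2.0 for v in x]


def test_mechanism_changes_output():
    x = [1.0, 2.0, 3.0]
    assert method_forward(x, True) != method_forward(x, False)


test_mechanism_off_matches_baseline()
test_mechanism_changes_output()
```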
Step 8 - Bridge to Paper Method Section
Read references/paper-method-bridge.md when useful.
Produce:
- method name, if needed
- method-section outline
- algorithm box contents
- equations or definitions required
- assumptions to state
- reviewer-facing explanation of why the mechanism should work
- claims to avoid until evidence exists
Step 9 - Write the Design Document
If saving to a project and no path is given, use:
docs/designs/algorithm_design_YYYY-MM-DD_<short-name>.md
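The date-stamped path can be built mechanically; a small sketch where the short name is a hypothetical example:

```python
from datetime import date

# Build the default design-document path; the short name is a placeholder.
short_name = "gated-routing"
path = f"docs/designs/algorithm_design_{date.today():%Y-%m-%d}_{short_name}.md"
```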
Use this structure:
# Algorithm Design: [Name]
## Design Context
## Target Claim
## Design Decision
## Problem Formulation
## Method Specification
## Assumptions and Invariants
## Relation to Baseline and Prior Work
## Failure Modes
## Ablations and Diagnostics
## Implementation Handoff
## Experiment Handoff
## Paper Method Bridge
## Project Memory Writeback
Step 10 - Write Back to Project Memory
If the project uses research-project-memory, update:
- memory/decision-log.md: durable design choices and why
- memory/claim-board.md: method claims that are planned, revised, weakened, or cut
- memory/risk-board.md: mechanism, implementation, baseline, tuning, compute, and evaluation risks
- memory/action-board.md: implementation, ablation, diagnostic, literature, or experiment-design actions
- memory/evidence-board.md: planned diagnostics or experiment families when concrete enough
- worktree .agent/worktree-status.md: purpose, linked claims, linked experiments, and exit condition for implementation branches
Mark evidence as `planned` and failure modes as `inferred` until they are observed.
Final Sanity Check
Before finalizing:
- problem, baseline, and method are explicit
- core mechanism is distinguishable from optional engineering
- assumptions and invariants are stated
- every new component has an ablation or diagnostic
- implementation handoff is concrete enough for coding
- experiment handoff is concrete enough for experiment-design-planner
- paper-method bridge does not overclaim beyond planned evidence
- project memory is updated when present