mixed-precision
Mixed Precision Training Best Practice
Use torch.cuda.amp for automatic mixed precision:
- Wrap the forward pass and loss computation in torch.cuda.amp.autocast()
- Use GradScaler for loss scaling when training in FP16; BF16 does not need loss scaling
- Prefer BF16 over FP16 on Ampere and newer GPUs (RTX 30xx, A100, RTX 40xx), since its wider exponent range avoids most overflow issues
- Watch for NaN gradients; if they appear, reduce the learning rate or the loss scale
- Do NOT use AMP with custom CUDA kernels unless they have been tested under autocast
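The steps above fit into one training step. A minimal sketch, assuming a tiny illustrative Linear model and SGD optimizer (any model/optimizer works the same way); AMP is enabled only when a CUDA device is present, and GradScaler degrades to a no-op when disabled:

```python
import torch
from torch import nn

use_cuda = torch.cuda.is_available()
device = "cuda" if use_cuda else "cpu"

# Illustrative model and optimizer; substitute your own.
model = nn.Linear(16, 1).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

# GradScaler handles FP16 loss scaling. With enabled=False (CPU here,
# or when training in BF16) it passes tensors through unchanged.
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)

x = torch.randn(8, 16, device=device)
y = torch.randn(8, 1, device=device)

optimizer.zero_grad()
# autocast runs eligible ops in reduced precision during the forward pass
with torch.cuda.amp.autocast(enabled=use_cuda):
    loss = nn.functional.mse_loss(model(x), y)

scaler.scale(loss).backward()  # backprop through the scaled loss
scaler.step(optimizer)         # unscales grads, skips the step on inf/NaN
scaler.update()                # adapts the scale factor for the next step
```

On Ampere+ hardware, switching to BF16 means passing `dtype=torch.bfloat16` to autocast and dropping the scaler calls entirely, since BF16 shares FP32's exponent range.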