nlp-alignment
LLM Alignment Best Practices
Methods:
- RLHF: Train a reward model on human preference data, then run PPO fine-tuning against it (complex but powerful)
- DPO: Direct preference optimization; trains on preference pairs directly, no separate reward model needed (see the loss sketch after this list)
- GRPO: Group relative policy optimization; critic-free, computes advantages relative to a group of sampled responses per prompt
- SFT: Supervised fine-tuning as the alignment baseline
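A minimal sketch of the DPO loss, assuming per-sequence log-probs have already been summed over tokens (function and argument names here are illustrative, not from any specific library):

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO loss over a batch of preference pairs.

    Each argument is a (batch,) tensor of summed sequence log-probs
    under the policy or the frozen reference model.
    """
    # Implicit reward margins: how much more the policy prefers each
    # response than the reference model does.
    chosen_margin = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_margin = beta * (policy_rejected_logps - ref_rejected_logps)
    # Bradley-Terry objective: push the chosen margin above the rejected one.
    return -F.logsigmoid(chosen_margin - rejected_margin).mean()
```

Larger beta penalizes deviation from the reference more strongly; beta=0.1 from the recipe below is a common default.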
Training recipe:
- Start with SFT on high-quality instruction data
- DPO: lr=5e-7, beta=0.1, batch_size=64 (reasonable starting points; tune per model)
- PPO: lr=1e-6, clip=0.2, KL coeff=0.02
- Keep a frozen reference model (usually the SFT checkpoint) for the KL penalty; see the reward-shaping sketch after this list
- Evaluate on safety and truthfulness benchmarks (TruthfulQA, BBQ, etc.)
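One common PPO reward-shaping scheme (InstructGPT-style): subtract a per-token KL estimate against the reference model, and add the reward-model score at the final token. A minimal sketch, assuming detached rollout-time log-probs and the simple log-ratio KL estimator (names are illustrative):

```python
import torch

def kl_penalized_rewards(reward_scores, policy_logps, ref_logps, kl_coeff=0.02):
    """Combine reward-model scores with a per-token KL penalty.

    reward_scores: (batch,) scalar scores from the reward model.
    policy_logps / ref_logps: (batch, seq_len) log-probs of the sampled
    tokens; assumed detached (computed under no_grad during rollout).
    """
    # Log-ratio estimate of per-token KL from the reference model;
    # keeps the policy close to the SFT checkpoint.
    per_token_kl = policy_logps - ref_logps
    rewards = -kl_coeff * per_token_kl
    # Reward-model score is credited at the last generated token.
    rewards[:, -1] = rewards[:, -1] + reward_scores
    return rewards
```

Exact details (KL estimator, where the score is credited) vary across implementations; this shows the shape of the computation, not a canonical recipe.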
Common pitfalls:
- Reward hacking: model finds shortcuts to high reward (e.g., length or sycophancy); the KL penalty and spot-checking reward-model scores help contain it
- Mode collapse: model generates repetitive, low-diversity outputs; track a diversity metric during training (see the sketch below)
- Catastrophic forgetting: model loses general capabilities; mitigate by mixing general instruction data into training or tightening the KL penalty
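A cheap monitoring heuristic for mode collapse: compute distinct-n (fraction of unique n-grams) over periodic samples; a sharp drop signals collapsing diversity. An illustrative sketch, using simple whitespace tokenization as a stand-in for the model tokenizer:

```python
def distinct_n(texts, n=2):
    """Fraction of unique n-grams across a batch of sampled outputs."""
    ngrams, total = set(), 0
    for text in texts:
        tokens = text.split()  # simplification; use the model tokenizer in practice
        grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        ngrams.update(grams)
        total += len(grams)
    return len(ngrams) / total if total else 0.0

# Example: sample generations each eval step and alert on a sharp drop.
# distinct_n(samples, n=2) near 1.0 = diverse; trending toward 0 = collapse.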