Together Fine-Tuning

Overview

Use Together AI fine-tuning when the user needs to adapt a base model to their own data or to a desired behavior.

Supported workflows in this repo:

  • LoRA fine-tuning
  • full fine-tuning
  • DPO preference tuning
  • VLM fine-tuning
  • function-calling fine-tuning
  • reasoning fine-tuning
  • BYOM upload paths
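
The training-data schema differs by method. As a rough illustration (field names follow the common OpenAI-style conversational schema and are an assumption here; confirm the exact schema against Together's fine-tuning docs before uploading), a supervised record and a DPO preference record might look like:

```python
import json

# Hypothetical single JSONL records. Field names are assumptions based on
# the widely used OpenAI-style schema, not a guaranteed match for
# Together's format -- verify against the official data-format docs.

# Supervised (LoRA or full) fine-tuning: one conversation per line.
sft_record = {
    "messages": [
        {"role": "system", "content": "You are a terse SQL assistant."},
        {"role": "user", "content": "Count rows in orders."},
        {"role": "assistant", "content": "SELECT COUNT(*) FROM orders;"},
    ]
}

# DPO preference tuning: a prompt plus a preferred and a rejected reply.
dpo_record = {
    "input": {"messages": [{"role": "user", "content": "Count rows in orders."}]},
    "preferred_output": [{"role": "assistant", "content": "SELECT COUNT(*) FROM orders;"}],
    "non_preferred_output": [{"role": "assistant", "content": "I cannot write SQL."}],
}

# Each record is serialized as exactly one line of a .jsonl file.
print(json.dumps(sft_record))
```

VLM and function-calling data add image and tool fields on top of this shape; the point is that every method expects line-delimited JSON records, which is what makes early validation cheap.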

When This Skill Wins

  • Train a model on custom instruction or conversational data
  • Improve function-calling reliability with supervised examples
  • Train on preferences rather than only demonstrations
  • Fine-tune multimodal or reasoning-oriented models
  • Deploy a fine-tuned output model later through dedicated endpoints

Hand Off To Another Skill

  • Use together-chat-completions for plain inference without training
  • Use together-evaluations to measure a model before or after tuning
  • Use together-dedicated-endpoints to host the resulting tuned model
  • Use together-gpu-clusters only when the user needs raw infrastructure rather than managed tuning

Workflow

  1. Choose the tuning method that matches the desired behavior change.
  2. Validate dataset format before spending tokens on training.
  3. Upload training data and keep the returned file ID.
  4. Create the job with explicit method-specific parameters.
  5. Monitor job state, events, and checkpoints before handing off to deployment.
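
Step 2 can be automated with a lightweight pre-flight check. The validator below is a minimal sketch for the conversational SFT schema (an assumption; adapt it for DPO, VLM, or function-calling formats). Steps 3–5 are shown only as hedged comments, since the exact v2 SDK call surface should be confirmed against Together's reference:

```python
import json
from pathlib import Path

def validate_sft_jsonl(path: str) -> list[str]:
    """Return a list of problems found in a conversational JSONL file.

    Assumes an OpenAI-style {"messages": [...]} record per line; adjust
    the checks for other tuning methods.
    """
    problems = []
    for lineno, raw in enumerate(Path(path).read_text().splitlines(), start=1):
        if not raw.strip():
            continue  # tolerate blank lines
        try:
            record = json.loads(raw)
        except json.JSONDecodeError as exc:
            problems.append(f"line {lineno}: invalid JSON ({exc.msg})")
            continue
        messages = record.get("messages")
        if not isinstance(messages, list) or not messages:
            problems.append(f"line {lineno}: missing non-empty 'messages' list")
            continue
        for msg in messages:
            if not isinstance(msg, dict) or "role" not in msg or "content" not in msg:
                problems.append(f"line {lineno}: message lacks 'role'/'content'")
                break
    return problems

# Steps 3-5 would then use the SDK, roughly (names are assumptions --
# verify each call against the Together v2 reference):
#   client = Together()
#   uploaded = client.files.upload(file="train.jsonl")    # keep the file ID
#   job = client.fine_tuning.create(model=..., training_file=uploaded.id, ...)
#   client.fine_tuning.retrieve(job.id)                   # poll state/events
```

Running the validator before upload means a malformed line fails in seconds locally rather than partway through a paid training run.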

High-Signal Rules

  • Python scripts require the Together v2 SDK (together>=2.0.0). If the user is on an older version, they must upgrade first: uv pip install --upgrade "together>=2.0.0".
  • Prefer LoRA unless the user has a specific reason to pay for full fine-tuning.
  • Keep data-format validation close to the upload step so bad files fail early.
  • Treat deployment as a separate phase; fine-tuning success does not automatically mean serving success.
  • Use the method-specific script instead of overloading one generic workflow for all modes.
  • Parameterize dataset paths, model IDs, and suffixes in automation instead of embedding one demo dataset forever.
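
The first and last rules above can be combined in a small launcher: gate on the SDK version once at startup, and take the dataset path, base model, and suffix as parameters instead of hard-coding a demo dataset. A sketch (the 2.0.0 floor comes from the rule above; the config field names are illustrative):

```python
from dataclasses import dataclass
from importlib.metadata import PackageNotFoundError, version

def meets_minimum(installed: str, minimum: str = "2.0.0") -> bool:
    """Naive dotted-numeric version comparison (no pre-release handling)."""
    def parse(v: str) -> tuple:
        return tuple(int(part) for part in v.split("."))
    return parse(installed) >= parse(minimum)

def check_sdk() -> None:
    """Fail fast if the installed `together` package predates the v2 SDK."""
    try:
        installed = version("together")
    except PackageNotFoundError:
        raise SystemExit('together not installed: uv pip install "together>=2.0.0"')
    if not meets_minimum(installed):
        raise SystemExit(
            f'together {installed} is too old: '
            'uv pip install --upgrade "together>=2.0.0"'
        )

@dataclass
class TuneConfig:
    """Parameters a launcher should accept rather than hard-code."""
    dataset_path: str      # e.g. "data/train.jsonl"
    base_model: str        # a Together base-model ID
    suffix: str            # distinguishes runs tuned from the same base
    use_lora: bool = True  # prefer LoRA unless full fine-tuning is justified
```

Wiring `TuneConfig` to argparse or environment variables keeps one script reusable across LoRA, full, and DPO runs instead of forking a copy per dataset.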
