# Together Dedicated Endpoints
## Overview

Use dedicated endpoints for managed, single-tenant model hosting with predictable performance, independent of the shared serverless pool.

Typical fits:
- production inference with stable latency
- fine-tuned model hosting
- uploaded custom model hosting
- autoscaled model APIs
## When This Skill Wins
- The user needs always-on or single-tenant hosting
- The model is supported for dedicated deployment
- Fine-tuned or uploaded models must be served as endpoints
- Hardware, scaling, or idle-time settings need explicit control
## Hand Off To Another Skill

- Use `together-chat-completions` for serverless chat inference
- Use `together-dedicated-containers` for custom runtimes or nonstandard inference pipelines
- Use `together-gpu-clusters` for raw infrastructure or cluster orchestration
## Quick Routing

- Create and manage a standard endpoint
  - Start with scripts/manage_endpoint.py or scripts/manage_endpoint.ts
  - Read references/api-reference.md
- Lifecycle tuning or troubleshooting
  - Read references/api-reference.md
- Deploy a fine-tuned model
  - Start with scripts/deploy_finetuned.py
  - Read references/dedicated-models.md
- Upload and deploy a custom model
  - Start with scripts/upload_custom_model.py
  - Read references/dedicated-models.md
- Hardware and sizing choices
  - Read references/hardware-options.md
## Workflow

1. Confirm that the task needs dedicated hosting instead of serverless or containers.
2. Verify model eligibility and inspect available hardware.
3. Create the endpoint with explicit scaling and timeout settings (see the sketch after this list).
4. Wait for readiness before sending inference traffic.
5. Stop or delete the endpoint when the workload no longer needs to run.
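A minimal sketch of steps 3-5, assuming the v2 SDK exposes an `endpoints` resource with `create`, `get`, and `delete` methods and field names (`hardware`, `autoscaling`, `inactive_timeout`) that mirror the public endpoints REST API; the model name, hardware ID, and ready state below are placeholders, and scripts/manage_endpoint.py plus references/api-reference.md are the source of truth for the exact names:

```python
import time

from together import Together  # requires together>=2.0.0

client = Together()  # reads TOGETHER_API_KEY from the environment

# Step 3: create a dedicated endpoint with explicit scaling and idle-shutdown
# settings. Model and hardware values are placeholders; list the real options
# first and confirm field names against references/api-reference.md.
endpoint = client.endpoints.create(
    model="meta-llama/Llama-3.3-70B-Instruct-Reference",  # placeholder model
    hardware="1x_nvidia_h100_80gb_sxm",                   # placeholder hardware ID
    autoscaling={"min_replicas": 1, "max_replicas": 2},
    inactive_timeout=60,  # assumed: minutes of idle time before auto-shutdown
)

# Step 4: wait for readiness before sending inference traffic ("STARTED" is
# an assumed ready-state value; check the API reference for the exact one).
while client.endpoints.get(endpoint.id).state != "STARTED":
    time.sleep(15)

# ... send inference traffic here ...

# Step 5: stop or delete the endpoint when the workload no longer needs to run.
client.endpoints.delete(endpoint.id)
```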
## High-Signal Rules

- Python scripts require the Together v2 SDK (`together>=2.0.0`). If the user is on an older version, they must upgrade first: `uv pip install --upgrade "together>=2.0.0"`.
- Model eligibility and hardware availability are gating constraints; check them early.
- Endpoint management uses endpoint IDs, while inference usually uses the endpoint name as `model` (see the sketch after this list).
- Autoscaling, auto-shutdown, prompt caching, and speculative decoding materially affect operations and cost.
- For custom or fine-tuned models, do not skip the intermediate verification steps before deployment.
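A short sketch of the ID-versus-name rule in practice: management calls take the endpoint ID, while inference addresses the endpoint by name through the standard chat-completions interface (the endpoint name below is a placeholder):

```python
from together import Together

client = Together()

# Inference: pass the dedicated endpoint's *name* as `model`.
# "my-org/llama-3.3-70b-dedicated" is a placeholder endpoint name.
response = client.chat.completions.create(
    model="my-org/llama-3.3-70b-dedicated",
    messages=[{"role": "user", "content": "Ping from a dedicated endpoint."}],
)
print(response.choices[0].message.content)
```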
## Resource Map
- API reference: references/api-reference.md
- Operational controls and troubleshooting: references/api-reference.md
- Dedicated model guide: references/dedicated-models.md
- Hardware guide: references/hardware-options.md
- Python endpoint lifecycle: scripts/manage_endpoint.py
- TypeScript endpoint lifecycle: scripts/manage_endpoint.ts
- Fine-tuned deployment: scripts/deploy_finetuned.py
- Custom model upload and deployment: scripts/upload_custom_model.py