# Together Video

## Overview
Use Together AI video APIs for:
- text-to-video generation
- image-to-video generation
- first-frame and last-frame keyframe control
- asynchronous job polling
- local download of completed outputs
## When This Skill Wins
- Generate short videos from prompts
- Animate an existing image
- Choose among Veo, Sora, Kling, Seedance, PixVerse, Vidu, or other supported models
- Add polling and download logic to a product or script
## Hand Off To Another Skill

- Use `together-images` for still-image generation or editing
- Use `together-dedicated-containers` only when a custom video-serving runtime is required
## Quick Routing

- Text-to-video generation
  - Start with scripts/generate_video.py or scripts/generate_video.ts
  - Read references/api-reference.md
- Image-to-video with keyframes
  - Start with scripts/image_to_video.py
  - Read references/api-reference.md
- Parameter tuning, polling, or troubleshooting
  - Read references/api-reference.md
- Model, dimension, and prompt-limit selection
  - Read references/models.md
## Workflow

1. Confirm whether the user needs text-to-video or image-to-video.
2. Choose the model based on duration, dimension, keyframe support, and audio support.
3. Submit the async job and poll until it reaches a terminal state.
4. Download the result promptly, before signed URLs expire.
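The submit-and-poll step above can be sketched as a small helper. This is a minimal sketch, not the Together SDK's actual API: `fetch_status` stands in for whatever call retrieves the job's state, and the terminal-state names are assumptions to adapt to the real responses.

```python
import time


def poll_until_terminal(
    fetch_status,
    terminal=frozenset({"completed", "failed", "cancelled"}),
    interval=5.0,
    timeout=600.0,
    sleep=time.sleep,
):
    """Call fetch_status() every `interval` seconds until it returns a
    terminal state, or raise TimeoutError once `timeout` seconds pass."""
    deadline = time.monotonic() + timeout
    while True:
        status = fetch_status()
        if status in terminal:
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"job still '{status}' after {timeout}s")
        sleep(interval)
```

Injecting `sleep` keeps the helper testable; in production, pass the real job-status call as `fetch_status` and let it sleep for real.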
## High-Signal Rules

- Python scripts require the Together v2 SDK (`together>=2.0.0`). If the user is on an older version, they must upgrade first: `uv pip install --upgrade "together>=2.0.0"`.
- Together video generation is asynchronous; do not treat it like a synchronous image call.
- Keyframe support is model-specific. Validate support before promising first-plus-last-frame control.
- Keep polling and download logic as part of the workflow, not as an afterthought.
- Use explicit dimensions and generation parameters rather than relying on unstable defaults.
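The "download promptly" rule can look like the sketch below. The helper is plain stdlib, not part of the Together SDK, and the signed result URL it consumes is whatever the completed job returns.

```python
import pathlib
import shutil
import urllib.request


def download_video(url: str, dest: str) -> pathlib.Path:
    """Stream the signed URL to disk. Do this as soon as the job
    completes, since signed URLs expire."""
    path = pathlib.Path(dest)
    path.parent.mkdir(parents=True, exist_ok=True)
    with urllib.request.urlopen(url) as resp, open(path, "wb") as out:
        # Stream in chunks rather than buffering the whole video in memory.
        shutil.copyfileobj(resp, out)
    return path
```

Streaming with `shutil.copyfileobj` avoids holding a multi-hundred-megabyte video in memory before writing it out.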
## Resource Map
- API reference: references/api-reference.md
- Polling, parameter tuning, and troubleshooting: references/api-reference.md
- Model guide: references/models.md
- Python text-to-video workflow: scripts/generate_video.py
- TypeScript text-to-video workflow: scripts/generate_video.ts
- Python image-to-video workflow: scripts/image_to_video.py
## Official Docs