seedance-video
Seedance Video Generation
Generate AI dance and motion videos through AceDataCloud's Seedance (ByteDance) API.
Setup: see authentication for how to obtain an API token.
Quick Start
curl -X POST https://api.acedata.cloud/seedance/videos \
-H "Authorization: Bearer $ACEDATACLOUD_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{"model": "doubao-seedance-1-0-pro-250528", "content": [{"type": "text", "text": "a dancer performing contemporary ballet in a misty forest"}], "callback_url": "https://api.acedata.cloud/health"}'
Async: see async task polling. The request above returns a task ID immediately. Poll for the result via POST /seedance/tasks with {"task_id": "..."}:
curl -X POST https://api.acedata.cloud/seedance/tasks \
-H "Authorization: Bearer $ACEDATACLOUD_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{"task_id": "<task_id from above>"}'
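The submit-then-poll flow can be sketched in Python. This is a minimal sketch, not an official client: the response field names (task_id, state) are assumptions based on the curl examples and the gotchas in this doc, and error handling is omitted.

```python
import json
import time
import urllib.request

API_BASE = "https://api.acedata.cloud/seedance"

def build_video_payload(prompt, model="doubao-seedance-1-0-pro-250528", **params):
    """Build the JSON body for POST /seedance/videos from a text prompt."""
    return {"model": model,
            "content": [{"type": "text", "text": prompt}],
            **params}

def post_json(path, payload, token):
    """POST a JSON payload to the Seedance API and decode the JSON reply."""
    req = urllib.request.Request(
        API_BASE + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def wait_for_video(task_id, token, interval=10, timeout=600):
    """Poll POST /seedance/tasks until the task state is "succeeded".

    The "state" field name is an assumption; adjust it if the actual
    response shape differs.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = post_json("/tasks", {"task_id": task_id}, token)
        if result.get("state") == "succeeded":  # "succeeded", not "completed"
            return result
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish in {timeout}s")
```

Usage: submit with `post_json("/videos", build_video_payload("a dancer"), token)`, then pass the returned task ID to `wait_for_video`.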
Models
| Model | Type | Best For |
|---|---|---|
| doubao-seedance-1-0-pro-250528 | Text+Image-to-Video | General-purpose, reliable quality |
| doubao-seedance-1-0-pro-fast-251015 | Text+Image-to-Video | Faster Pro generation |
| doubao-seedance-1-5-pro-251215 | Text+Image-to-Video | Latest model, highest quality, audio support |
| doubao-seedance-1-0-lite-t2v-250428 | Text-to-Video only | Lightweight text-to-video |
| doubao-seedance-1-0-lite-i2v-250428 | Image-to-Video only | Lightweight image-to-video |
Workflows
1. Text-to-Video
Pass a text content item in the content array.
POST /seedance/videos
{
"model": "doubao-seedance-1-0-pro-250528",
"content": [
{"type": "text", "text": "a street dancer doing breakdancing moves in an urban setting"}
],
"resolution": "1080p",
"ratio": "16:9",
"duration": 5
}
2. Image-to-Video
Include an image content item (with an optional role) alongside the text.
POST /seedance/videos
{
"model": "doubao-seedance-1-5-pro-251215",
"content": [
{"type": "text", "text": "the person starts dancing gracefully"},
{
"type": "image_url",
"role": "first_frame",
"image_url": {"url": "https://example.com/dancer.jpg"}
}
],
"resolution": "720p",
"duration": 5
}
Image roles:
- first_frame: image is used as the opening frame
- last_frame: image is used as the closing frame
- reference_image: image is used as a style/content reference
3. First-frame + Last-frame
Provide both a start and end frame image:
POST /seedance/videos
{
"model": "doubao-seedance-1-0-pro-250528",
"content": [
{"type": "text", "text": "smooth transition between two scenes"},
{"type": "image_url", "role": "first_frame", "image_url": {"url": "https://example.com/start.jpg"}},
{"type": "image_url", "role": "last_frame", "image_url": {"url": "https://example.com/end.jpg"}}
]
}
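The three workflows differ only in how the content array is assembled. A small helper can build it; the helper name is illustrative, not part of the API:

```python
def build_content(text, first_frame=None, last_frame=None, reference_image=None):
    """Assemble the `content` array for POST /seedance/videos.

    Always includes one text item; each image URL passed in is added
    as an image_url item with the matching role.
    """
    items = [{"type": "text", "text": text}]
    for role, url in (("first_frame", first_frame),
                      ("last_frame", last_frame),
                      ("reference_image", reference_image)):
        if url:
            items.append({"type": "image_url", "role": role,
                          "image_url": {"url": url}})
    return items
```

For example, `build_content("smooth transition", first_frame=start_url, last_frame=end_url)` reproduces the first-frame + last-frame request body above.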
Parameters
| Parameter | Values | Description |
|---|---|---|
| model | see Models table | Model to use (required) |
| content | array | Input items: text and/or image_url objects (required) |
| resolution | "480p", "720p", "1080p" | Output resolution (default: 720p for pro, 480p for lite) |
| ratio | "16:9", "4:3", "1:1", "3:4", "9:16", "21:9", "adaptive" | Aspect ratio (default: 16:9) |
| duration | 2–12 | Duration in seconds |
| frames | 29–289 (must satisfy 25+4n) | Frame count; mutually exclusive with duration |
| seed | -1 to 4294967295 | Seed for reproducible results (-1 = random) |
| generate_audio | true / false | Generate audio (only supported by doubao-seedance-1-5-pro-251215) |
| camerafixed | true / false | Fix the camera position during generation |
| watermark | true / false | Add a watermark to the generated video |
| return_last_frame | true / false | Return the last frame of the generated video |
| service_tier | "default", "flex" | Processing tier (default: default) |
| execution_expires_after | number | Task timeout threshold in seconds |
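The duration/frames constraints are easy to get wrong, so a client-side check before submitting can save a failed request. A sketch, directly encoding the rules from the table:

```python
def validate_timing(duration=None, frames=None):
    """Validate the duration/frames rules for a Seedance request.

    duration and frames are mutually exclusive; duration must be
    2-12 seconds; frames must be 29-289 and of the form 25 + 4n.
    """
    if duration is not None and frames is not None:
        raise ValueError("duration and frames are mutually exclusive")
    if duration is not None and not 2 <= duration <= 12:
        raise ValueError("duration must be between 2 and 12 seconds")
    if frames is not None:
        if not 29 <= frames <= 289 or (frames - 25) % 4 != 0:
            raise ValueError("frames must satisfy 25+4n within 29-289")
```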
Inline Parameter Syntax
You can also embed generation parameters directly in the text prompt using the --param value syntax:
A kitten yawning at the camera. --rs 720p --rt 16:9 --dur 5 --fps 24 --seed 42
Supported inline params: --rs (resolution), --rt (ratio), --dur (duration), --frames, --fps (24 only), --seed, --cf (camera_fixed), --wm (watermark).
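The API parses these flags server-side; to illustrate the syntax (and to strip flags out locally when needed), here is a sketch of a splitter. The flag-to-parameter mapping comes from the list above; the function itself is illustrative.

```python
import re

# Inline flag -> API parameter name, per the supported-params list.
INLINE_FLAGS = {
    "rs": "resolution", "rt": "ratio", "dur": "duration",
    "frames": "frames", "fps": "fps", "seed": "seed",
    "cf": "camera_fixed", "wm": "watermark",
}

def split_inline_params(prompt):
    """Split a prompt into (clean_text, params) using the --flag value syntax."""
    params = {}

    def grab(match):
        flag, value = match.group(1), match.group(2)
        if flag in INLINE_FLAGS:
            params[INLINE_FLAGS[flag]] = value
            return ""
        return match.group(0)  # leave unrecognized flags in the text

    text = re.sub(r"--(\w+)\s+(\S+)", grab, prompt).strip()
    return text, params
```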
Gotchas
- Model names use the doubao-* convention (e.g. doubao-seedance-1-0-pro-250528); old short names like seedance-1.0 are not valid
- The content array replaces the old prompt + image_url fields; always use content
- Image and text scenarios are mutually exclusive per content item: each item has either text or image_url, not both
- first_frame, last_frame, and reference_image roles are mutually exclusive scenarios; pick one pattern per request
- generate_audio: true is only supported by doubao-seedance-1-5-pro-251215; other models ignore this field
- Lite models are split: *-lite-t2v-* only accepts text, *-lite-i2v-* only accepts image-to-video
- Resolution options are 480p, 720p, 1080p; there is no 360p or 540p
- service_tier values are "default" and "flex" (not "standard"/"premium")
- Duration range is 2–12 seconds; values outside this range will fail
- Task states use "succeeded" (not "completed"); check for this value when polling
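Several of these gotchas can be caught before the request leaves the client. A sketch of a pre-flight check (the helper and its error messages are illustrative, not part of the API):

```python
def check_model_content(model, content, generate_audio=False):
    """Catch lite-model and audio mistakes before submitting a request."""
    has_image = any(item.get("type") == "image_url" for item in content)
    if "-lite-t2v-" in model and has_image:
        raise ValueError("lite t2v models accept text input only")
    if "-lite-i2v-" in model and not has_image:
        raise ValueError("lite i2v models require an image content item")
    if generate_audio and model != "doubao-seedance-1-5-pro-251215":
        raise ValueError(
            "generate_audio is only supported by doubao-seedance-1-5-pro-251215")
```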
MCP: pip install mcp-seedance | Hosted: https://seedance.mcp.acedata.cloud/mcp | See all MCP servers