# Sora Video Generation
Generate AI videos through AceDataCloud's OpenAI Sora API.
**Setup:** see authentication for API token setup.
## Quick Start
```bash
curl -X POST https://api.acedata.cloud/sora/videos \
  -H "Authorization: Bearer $ACEDATACLOUD_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "a golden retriever running on a beach at sunset", "model": "sora-2", "callback_url": "https://api.acedata.cloud/health"}'
```
**Async:** see async task polling. Poll via `POST /sora/tasks` with `{"task_id": "..."}`.
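The submit-then-poll loop can be sketched as follows. This is a minimal sketch, not an official client: the `post` callable is injected so any HTTP library can be used, and the response field name `state` is an assumption (the docs only specify that the terminal value is `"succeeded"`, not the field it lives in).

```python
import time

API_BASE = "https://api.acedata.cloud/sora"


def poll_task(task_id, post, interval=5.0, timeout=600.0):
    """Poll POST /sora/tasks until the task reaches a terminal state.

    `post(url, body)` must perform an authenticated POST and return the
    decoded JSON response as a dict. The `state` key below is an assumed
    response field name.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = post(f"{API_BASE}/tasks", {"task_id": task_id})
        state = result.get("state")
        if state == "succeeded":  # note: "succeeded", not "completed"
            return result
        if state == "failed":
            raise RuntimeError(f"task {task_id} failed: {result}")
        time.sleep(interval)  # still pending; wait before polling again
    raise TimeoutError(f"task {task_id} still pending after {timeout}s")
```

Injecting `post` also makes the loop easy to exercise in tests with a fake transport instead of live API calls.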
## Models

| Model | Duration | Quality | Best For |
|---|---|---|---|
| `sora-2` | 10–15s | Standard | Most tasks (default) |
| `sora-2-pro` | 10–25s | Higher | Premium quality, longer videos |
## Workflows
### 1. Text-to-Video

`POST /sora/videos`

```json
{
  "prompt": "a busy Tokyo street at night with neon signs reflecting in rain puddles",
  "model": "sora-2",
  "size": "small",
  "duration": 10,
  "orientation": "landscape"
}
```
### 2. Image-to-Video

Use reference images to guide generation.

`POST /sora/videos`

```json
{
  "prompt": "the scene gradually comes alive with gentle motion",
  "image_urls": ["https://example.com/scene.jpg"],
  "model": "sora-2",
  "orientation": "landscape"
}
```
### 3. Character-Driven Video

Extract a character from an existing video and use them in a new scene.

`POST /sora/videos`

```json
{
  "prompt": "the character walks through a futuristic city",
  "character_url": "https://example.com/source-video.mp4",
  "character_start": 2.0,
  "character_end": 5.0,
  "model": "sora-2-pro"
}
```
## Parameters

| Parameter | Values | Description |
|---|---|---|
| `model` | `"sora-2"`, `"sora-2-pro"` | Model to use (required) |
| `size` | `"small"`, `"large"` | Video resolution |
| `duration` | `10`, `15`, `25` | Duration in seconds (`25` only with `sora-2-pro`) |
| `orientation` | `"landscape"` (16:9), `"portrait"` (9:16), `"square"` (1:1) | Video orientation |
| `version` | `"1.0"` | API version; version 1.0 enables durations up to 25s, orientation, character references, and image inputs |
## Gotchas

- Duration of 25 seconds is only available with the `sora-2-pro` model
- `size: "large"` produces higher resolution but costs more and takes longer
- Character-driven generation requires `character_start` and `character_end` timestamps (in seconds) from the source video
- `orientation` sets the aspect ratio; use `"portrait"` for mobile-first content
- Task states use `"succeeded"` (not `"completed"`); check for this value when polling
**MCP:** `pip install mcp-sora` | Hosted: `https://sora.mcp.acedata.cloud/mcp` | See all MCP servers