# Veo Video Generation
Generate AI videos through AceDataCloud's Google Veo API.
**Setup:** See authentication for token setup.
## Quick Start

```bash
curl -X POST https://api.acedata.cloud/veo/videos \
  -H "Authorization: Bearer $ACEDATACLOUD_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"action": "text2video", "prompt": "a whale breaching in slow motion at golden hour", "model": "veo3", "callback_url": "https://api.acedata.cloud/health"}'
```
**Async:** See async task polling. The request returns a task ID immediately. Poll for the result via `POST /veo/tasks` with `{"id": "..."}`:

```bash
curl -X POST https://api.acedata.cloud/veo/tasks \
  -H "Authorization: Bearer $ACEDATACLOUD_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"id": "<task_id from above>"}'
```
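The polling request can be wrapped in a client-side loop. A minimal sketch, assuming the task response reports its status in a `state` field (the field name is an assumption; the `"succeeded"` value and the `id` request key are documented in Gotchas). The HTTP transport is injected so any client library can be used; `poll_task` is a hypothetical helper, not part of the API:

```python
import time

def poll_task(task_id, fetch, interval=5.0, timeout=600.0):
    """Poll /veo/tasks until the task reaches a terminal state.

    `fetch` is any callable that takes the request-body dict and returns
    the decoded JSON response -- inject your HTTP client of choice.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = fetch({"id": task_id})  # the key is "id", not "task_id"
        state = result.get("state")      # "state" field name is an assumption
        if state == "succeeded":         # NOT "completed" -- see Gotchas
            return result
        if state == "failed":            # "failed" value is an assumption
            raise RuntimeError(f"task {task_id} failed: {result}")
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish within {timeout}s")
```

Injecting the transport keeps the loop testable offline and avoids pinning the sketch to a particular HTTP library.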
## Models

| Model | Audio | Best For |
|---|---|---|
| `veo2` | No | Cost-effective generation |
| `veo2-fast` | No | Fast, cost-effective generation (default) |
| `veo3` | Yes (native) | Full audiovisual generation |
| `veo3-fast` | Yes (native) | Faster audiovisual generation |
| `veo31` | Yes (native) | Veo 3.1, highest quality |
| `veo31-fast` | Yes (native) | Veo 3.1 fast variant |
| `veo31-fast-ingredients` | Yes (native) | Veo 3.1 fast, ingredient mode |
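For programmatic model selection, the Audio column can be mirrored as a lookup. A convenience sketch (`supports_audio` is a hypothetical helper, not part of the API):

```python
# Model names taken verbatim from the Models table above.
NATIVE_AUDIO_MODELS = {
    "veo3", "veo3-fast", "veo31", "veo31-fast", "veo31-fast-ingredients",
}

def supports_audio(model: str) -> bool:
    """True if the model generates native audio (veo2 variants do not)."""
    return model in NATIVE_AUDIO_MODELS
```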
## Workflows

### 1. Text-to-Video

`POST /veo/videos`

```json
{
  "action": "text2video",
  "prompt": "cinematic aerial shot of the Northern Lights over Iceland",
  "model": "veo3",
  "resolution": "1080p"
}
```
### 2. Image-to-Video

Animate still images into video.

`POST /veo/videos`

```json
{
  "action": "image2video",
  "prompt": "the scene gently comes to life with wind and subtle motion",
  "image_urls": ["https://example.com/landscape.jpg"],
  "model": "veo2",
  "aspect_ratio": "16:9"
}
```
### 3. Ingredients-to-Video (Multi-Image Blend)

Blend 1–3 reference images into a video (only `veo31-fast-ingredients`).

`POST /veo/videos`

```json
{
  "action": "ingredients2video",
  "image_urls": [
    "https://example.com/img1.jpg",
    "https://example.com/img2.jpg"
  ],
  "model": "veo31-fast-ingredients"
}
```
### 4. Upscale to 1080p

Convert a previously generated video to full 1080p resolution.

`POST /veo/videos`

```json
{
  "action": "get1080p",
  "video_id": "your-video-id",
  "model": "veo3"
}
```
## Parameters

| Parameter | Values | Description |
|---|---|---|
| `action` | `"text2video"`, `"image2video"`, `"ingredients2video"`, `"get1080p"` | Generation mode |
| `model` | see Models table | Model to use (default: `veo2-fast`) |
| `resolution` | `"4k"`, `"1080p"`, `"gif"` | Output resolution (default: 720p) |
| `aspect_ratio` | `"16:9"`, `"9:16"`, `"1:1"`, `"4:3"`, `"3:4"` | Aspect ratio — only valid for `image2video` |
| `image_urls` | array of strings | Reference image URLs — for `image2video` (up to 2) or `ingredients2video` (up to 3) |
| `video_id` | string | Video to upscale — only for `get1080p` |
| `translation` | `true` / `false` | Auto-translate prompt to English (default: `false`) |
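The constraints in this table can be checked client-side before a request is sent. A sketch of such a pre-flight check (the server performs its own validation; `validate_video_request` is a hypothetical helper):

```python
# Actions and limits taken from the Parameters table above.
ACTIONS = {"text2video", "image2video", "ingredients2video", "get1080p"}

def validate_video_request(payload: dict) -> list[str]:
    """Return a list of rule violations (empty means the payload looks ok)."""
    errors = []
    action = payload.get("action")
    if action not in ACTIONS:
        errors.append(f"unknown action: {action!r}")
    if "aspect_ratio" in payload and action != "image2video":
        errors.append("aspect_ratio is only valid for image2video")
    urls = payload.get("image_urls", [])
    if action == "image2video" and len(urls) > 2:
        errors.append("image2video accepts at most 2 image_urls")
    if action == "ingredients2video":
        if not urls:
            errors.append("ingredients2video requires image_urls")
        elif len(urls) > 3:
            errors.append("ingredients2video accepts at most 3 image_urls")
        if payload.get("model") != "veo31-fast-ingredients":
            errors.append("ingredients2video requires model veo31-fast-ingredients")
    if action == "get1080p" and not payload.get("video_id"):
        errors.append("get1080p requires video_id from a prior generation")
    return errors
```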
## Post-Generation Endpoints

After generating a video, use these endpoints to further process it:

### Upsample (`POST /veo/upsample`)

Upscale a generated video to 1080p or 4K, or convert it to a GIF.

```json
{
  "video_id": "your-video-id",
  "action": "4k"
}
```
| Parameter | Values | Description |
|---|---|---|
| `video_id` | string | Task ID from `/veo/videos`, `/veo/extend`, `/veo/reshoot`, or `/veo/objects` |
| `action` | `"1080p"`, `"4k"`, `"gif"` | Upsample target |
### Extend (`POST /veo/extend`)

Continue an existing video — AI auto-generates the next segment.

```json
{
  "video_id": "your-video-id",
  "model": "veo31-fast",
  "prompt": "the camera slowly zooms out"
}
```
| Parameter | Values | Description |
|---|---|---|
| `video_id` | string | Task ID from `/veo/videos` or a prior `/veo/extend` |
| `model` | `"veo31-fast"`, `"veo31"` | Only Veo 3.1 series is supported |
| `prompt` | string | Optional: guides the extended segment |
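Because only the Veo 3.1 series can extend a video, a request builder can guard against the other models up front. A minimal sketch (`extend_request` is a hypothetical helper, not part of the API):

```python
# Allowed models taken from the Extend parameter table above.
EXTEND_MODELS = {"veo31", "veo31-fast"}

def extend_request(video_id: str, model: str = "veo31-fast", prompt=None) -> dict:
    """Build a /veo/extend body, rejecting unsupported models."""
    if model not in EXTEND_MODELS:
        raise ValueError(f"/veo/extend supports only Veo 3.1 models, got {model!r}")
    body = {"video_id": video_id, "model": model}
    if prompt:  # prompt is optional
        body["prompt"] = prompt
    return body
```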
### Reshoot (`POST /veo/reshoot`)

Re-render a video keeping the same content but applying new camera motion.

```json
{
  "video_id": "your-video-id",
  "motion_type": "LEFT_TO_RIGHT"
}
```
| Parameter | Values | Description |
|---|---|---|
| `video_id` | string | Task ID from `/veo/videos` (cannot use `/veo/extend` output) |
| `motion_type` | see list below | Camera motion to apply |

`motion_type` values: `STATIONARY`, `STATIONARY_UP`, `STATIONARY_DOWN`, `STATIONARY_LEFT`, `STATIONARY_RIGHT`, `STATIONARY_DOLLY_IN_ZOOM_OUT`, `STATIONARY_DOLLY_OUT_ZOOM_IN`, `UP`, `DOWN`, `LEFT_TO_RIGHT`, `RIGHT_TO_LEFT`, `FORWARD`, `BACKWARD`, `DOLLY_IN_ZOOM_OUT`, `DOLLY_OUT_ZOOM_IN`
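The enumerated values can be mirrored as a set so typos fail fast on the client rather than at the API. A sketch (`reshoot_request` is a hypothetical helper, not part of the API):

```python
# The 15 motion_type values listed above, verbatim.
MOTION_TYPES = {
    "STATIONARY", "STATIONARY_UP", "STATIONARY_DOWN", "STATIONARY_LEFT",
    "STATIONARY_RIGHT", "STATIONARY_DOLLY_IN_ZOOM_OUT",
    "STATIONARY_DOLLY_OUT_ZOOM_IN", "UP", "DOWN", "LEFT_TO_RIGHT",
    "RIGHT_TO_LEFT", "FORWARD", "BACKWARD", "DOLLY_IN_ZOOM_OUT",
    "DOLLY_OUT_ZOOM_IN",
}

def reshoot_request(video_id: str, motion_type: str) -> dict:
    """Build a /veo/reshoot body, checking motion_type against the known list."""
    if motion_type not in MOTION_TYPES:
        raise ValueError(f"unknown motion_type: {motion_type!r}")
    return {"video_id": video_id, "motion_type": motion_type}
```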
### Objects (`POST /veo/objects`)

Insert or remove objects in a video using mask-based inpainting.

```json
{
  "video_id": "your-video-id",
  "action": "insert",
  "prompt": "add a flying bird"
}
```

```json
{
  "video_id": "your-video-id",
  "action": "remove",
  "image_mask": "https://example.com/mask.jpg"
}
```
| Parameter | Values | Description |
|---|---|---|
| `video_id` | string | Task ID (cannot use `/veo/extend` output) |
| `action` | `"insert"`, `"remove"` | Operation type |
| `prompt` | string | Required for insert; optional for remove |
| `image_mask` | string | URL or base64 JPEG — white pixels = target region. Required for remove; optional for insert |
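The required/optional split above (prompt for insert, image_mask for remove) can be enforced in a small request builder. A sketch (`objects_request` is a hypothetical helper, not part of the API):

```python
def objects_request(video_id: str, action: str, prompt=None, image_mask=None) -> dict:
    """Build a /veo/objects body for mask-based insert/remove."""
    if action not in {"insert", "remove"}:
        raise ValueError(f"unknown action: {action!r}")
    if action == "insert" and not prompt:
        raise ValueError("insert requires a prompt")
    if action == "remove" and not image_mask:
        raise ValueError("remove requires an image_mask (white pixels = target)")
    body = {"video_id": video_id, "action": action}
    if prompt:      # optional for remove
        body["prompt"] = prompt
    if image_mask:  # optional for insert
        body["image_mask"] = image_mask
    return body
```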
## Gotchas

- Veo 3 and 3.1 models generate native audio — `veo2`/`veo2-fast` do NOT support audio
- The `get1080p` action uses `video_id` (from a prior generation), not a URL
- `aspect_ratio` is only valid for the `image2video` action
- `image_urls` accepts an array — up to 2 images for `image2video`, up to 3 for `ingredients2video`
- `veo31-fast-ingredients` requires image input — it cannot do text-only generation
- `translation: true` auto-translates Chinese or other non-English prompts before sending to Veo
- Task polling uses `id` (not `task_id`) in the `/veo/tasks` request body
- Task states use `"succeeded"` (not `"completed"`) — check for this value when polling
- `/veo/extend` output cannot be used as input for `/veo/reshoot` or `/veo/objects`
**MCP:** `pip install mcp-veo` | Hosted: `https://veo.mcp.acedata.cloud/mcp` | See all MCP servers