# Model Studio Aishi Video Generation (alicloud-ai-video-aishi-generation)

Category: provider

## Validation
```bash
mkdir -p output/alicloud-ai-video-aishi-generation
python -m py_compile skills/ai/video/alicloud-ai-video-aishi-generation/scripts/prepare_aishi_request.py \
  && echo "py_compile_ok" > output/alicloud-ai-video-aishi-generation/validate.txt
```

Pass criteria: the command exits 0 and `output/alicloud-ai-video-aishi-generation/validate.txt` is generated.
## Output And Evidence

- Save normalized request payloads, the chosen model variant, and task polling snapshots under `output/alicloud-ai-video-aishi-generation/`.
- Record the region, resolution/size, duration, and whether audio generation was enabled.

Use Aishi when the user explicitly wants the PixVerse family (rather than Wan) for video generation.
## Critical model names

Use one of these exact model strings:

- `pixverse/pixverse-v5.6-t2v`
- `pixverse/pixverse-v5.6-it2v`
- `pixverse/pixverse-v5.6-kf2v`
- `pixverse/pixverse-v5.6-r2v`

Selection guidance:

- Use `pixverse/pixverse-v5.6-t2v` for text-only generation.
- Use `pixverse/pixverse-v5.6-it2v` for first-frame image-to-video.
- Use `pixverse/pixverse-v5.6-kf2v` for first-frame + last-frame transitions.
- Use `pixverse/pixverse-v5.6-r2v` for multi-image character/style consistency.
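The selection rules above can be sketched as a small helper. `pick_aishi_model` and its parameter names are hypothetical (not part of this skill's scripts); the model strings themselves are the exact ones listed above.

```python
# Hypothetical helper: choose the Aishi model variant from the inputs provided.
AISHI_MODELS = {
    "text": "pixverse/pixverse-v5.6-t2v",
    "first_frame": "pixverse/pixverse-v5.6-it2v",
    "keyframes": "pixverse/pixverse-v5.6-kf2v",
    "references": "pixverse/pixverse-v5.6-r2v",
}


def pick_aishi_model(first_frame=None, last_frame=None, reference_images=()):
    """Map the caller's inputs to the matching model string."""
    if reference_images:
        # Multi-image character/style consistency.
        return AISHI_MODELS["references"]
    if first_frame and last_frame:
        # First-frame + last-frame transition.
        return AISHI_MODELS["keyframes"]
    if first_frame:
        # Single-image image-to-video.
        return AISHI_MODELS["first_frame"]
    # Text-only generation.
    return AISHI_MODELS["text"]
```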
## Prerequisites

- This family currently only supports the China mainland (Beijing) region.
- Install the SDK, or call the HTTP API directly:

  ```bash
  python3 -m venv .venv
  . .venv/bin/activate
  python -m pip install dashscope
  ```

- Set `DASHSCOPE_API_KEY` in your environment, or add `dashscope_api_key` to `~/.alibabacloud/credentials`.
## Normalized interface (video.generate)

### Request

- `model` (string, required)
- `prompt` (string, optional for `it2v`, required for other variants)
- `media` (array, optional)
- `size` (string, optional): direct pixel size such as `1280*720`; used by `t2v` and `r2v`
- `resolution` (string, optional): `360P` / `540P` / `720P` / `1080P`; used by `it2v` and `kf2v`
- `duration` (int, required): `5` / `8` / `10`, except `1080P` only supports `5` / `8`
- `audio` (bool, optional)
- `watermark` (bool, optional)
- `seed` (int, optional)
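A minimal sketch of building the normalized request and enforcing the documented duration constraint. `normalize_request` is a hypothetical name, and the flat dict shape follows the normalized interface above, not necessarily the raw DashScope wire format.

```python
VALID_DURATIONS = {5, 8, 10}


def normalize_request(model, *, prompt=None, media=None, size=None,
                      resolution=None, duration=5, audio=None,
                      watermark=None, seed=None):
    """Build a normalized video.generate request dict, dropping unset fields."""
    if duration not in VALID_DURATIONS:
        raise ValueError("duration must be 5, 8, or 10")
    if resolution == "1080P" and duration == 10:
        # Documented constraint: 1080P only supports durations 5 and 8.
        raise ValueError("1080P only supports duration 5 or 8")
    request = {"model": model, "duration": duration}
    optional = {"prompt": prompt, "media": media, "size": size,
                "resolution": resolution, "audio": audio,
                "watermark": watermark, "seed": seed}
    for key, value in optional.items():
        if value is not None:
            request[key] = value
    return request
```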
### Response

- `task_id` (string)
- `task_status` (string)
- `video_url` (string, present when the task has finished)
## Endpoint and execution model

- Submit task: `POST https://dashscope.aliyuncs.com/api/v1/services/aigc/video-generation/video-synthesis`
- Poll task: `GET https://dashscope.aliyuncs.com/api/v1/tasks/{task_id}`
- HTTP calls are async only and must set the header `X-DashScope-Async: enable`.
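The submit-then-poll flow can be sketched with the standard library as follows. The endpoint URLs and the `X-DashScope-Async` header come from the section above; the `output.task_status` response shape and the `SUCCEEDED` / `FAILED` status values are assumptions based on DashScope's general task API and should be verified against the service's responses.

```python
import json
import os
import time
import urllib.request

SUBMIT_URL = ("https://dashscope.aliyuncs.com/api/v1/services/aigc/"
              "video-generation/video-synthesis")
TASK_URL = "https://dashscope.aliyuncs.com/api/v1/tasks/{task_id}"


def auth_headers(api_key=None):
    """Headers for every call; the async header is mandatory for HTTP."""
    key = api_key or os.environ["DASHSCOPE_API_KEY"]
    return {
        "Authorization": f"Bearer {key}",
        "Content-Type": "application/json",
        "X-DashScope-Async": "enable",
    }


def submit_task(payload, api_key=None):
    """POST the request and return the task_id from the initial response."""
    req = urllib.request.Request(SUBMIT_URL, data=json.dumps(payload).encode(),
                                 headers=auth_headers(api_key), method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["output"]["task_id"]


def poll_task(task_id, api_key=None, interval=5.0):
    """Poll until the task reaches a terminal status, then return its output."""
    url = TASK_URL.format(task_id=task_id)
    while True:
        req = urllib.request.Request(url, headers=auth_headers(api_key))
        with urllib.request.urlopen(req) as resp:
            output = json.load(resp)["output"]
        if output["task_status"] in ("SUCCEEDED", "FAILED"):
            return output
        time.sleep(interval)
```

The initial POST only returns a task ID; `video_url` appears in the polled task output once the status is terminal.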
## Quick start

Text-to-video (note that `1280*720` is quoted so the shell does not glob the `*`):

```bash
python skills/ai/video/alicloud-ai-video-aishi-generation/scripts/prepare_aishi_request.py \
  --model pixverse/pixverse-v5.6-t2v \
  --prompt "A compact robot walks through a rainy neon alley." \
  --size "1280*720" \
  --duration 5
```
Image-to-video:

```bash
python skills/ai/video/alicloud-ai-video-aishi-generation/scripts/prepare_aishi_request.py \
  --model pixverse/pixverse-v5.6-it2v \
  --prompt "The turtle swims slowly as the camera rises." \
  --media image_url=https://example.com/turtle.webp \
  --resolution 720P \
  --duration 5
```
## Operational guidance

- `t2v` and `r2v` use `size`; `it2v` and `kf2v` use `resolution`.
- For `kf2v`, provide exactly one `first_frame` and one `last_frame`.
- For `r2v`, you can pass up to 7 reference images.
- Aishi returns task IDs first; do not treat the initial response as the final video result.
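The per-variant media limits above can be checked up front. This sketch assumes a media schema of `{"type": ..., "url": ...}` entries, which is an illustration, not the skill's actual schema; `validate_media` is a hypothetical name.

```python
def validate_media(model, media):
    """Check media counts against the documented per-variant limits."""
    types = [item.get("type") for item in media]
    if model.endswith("-kf2v"):
        # Exactly one first_frame and one last_frame.
        if types.count("first_frame") != 1 or types.count("last_frame") != 1:
            raise ValueError("kf2v needs exactly one first_frame and one last_frame")
    elif model.endswith("-r2v"):
        # Between 1 and 7 reference images.
        if not 1 <= len(media) <= 7:
            raise ValueError("r2v accepts 1 to 7 reference images")
    elif model.endswith("-it2v"):
        if types.count("first_frame") != 1:
            raise ValueError("it2v needs exactly one first_frame")
```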
## Output location

- Default output: `output/alicloud-ai-video-aishi-generation/request.json`
- Override the base directory with `OUTPUT_DIR`.
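Resolving the output directory with the `OUTPUT_DIR` override can look like this. `output_path` is a hypothetical helper, not part of `prepare_aishi_request.py`; only the default directory and the environment variable name come from this document.

```python
import os
from pathlib import Path

DEFAULT_OUTPUT_DIR = "output/alicloud-ai-video-aishi-generation"


def output_path(filename="request.json"):
    """Return the output file path, honoring the OUTPUT_DIR override."""
    base = Path(os.environ.get("OUTPUT_DIR", DEFAULT_OUTPUT_DIR))
    base.mkdir(parents=True, exist_ok=True)  # ensure the directory exists
    return base / filename
```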
## References

- `references/sources.md`
Repository: cinience/alicloud-skills