aliyun-qwen-generation (SKILL.md)
Category: provider
Model Studio Qwen Text Generation
Validation
mkdir -p output/aliyun-qwen-generation
python -m py_compile skills/ai/text/aliyun-qwen-generation/scripts/prepare_generation_request.py && echo "py_compile_ok" > output/aliyun-qwen-generation/validate.txt
Pass criteria: command exits 0 and output/aliyun-qwen-generation/validate.txt is generated.
Output And Evidence
- Save prompt templates, normalized request payloads, and response summaries under output/aliyun-qwen-generation/.
- Keep one reproducible request example with the model name, region, and key parameters.
Use this skill for general text generation, reasoning, tool-calling, and long-context chat on Alibaba Cloud Model Studio.
Critical model names
Prefer the current flagship families:
- qwen3-max
- qwen3-max-2026-01-23
- qwen3.5-plus
- qwen3.5-plus-2026-02-15
- qwen3.5-flash
- qwen3.5-flash-2026-02-23
Common related variants listed in the official model catalog:
- qwen3.5-397b-a17b
- qwen3.5-122b-a10b
- qwen3.5-35b-a3b
- qwen3.5-27b
Prerequisites
- Install SDK in a virtual environment:
python3 -m venv .venv
. .venv/bin/activate
python -m pip install dashscope
- Set DASHSCOPE_API_KEY in your environment, or add dashscope_api_key to ~/.alibabacloud/credentials.
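The env-or-credentials-file lookup above can be sketched as follows. This is an illustrative helper, not part of the skill's scripts, and it assumes the credentials file is INI-style with a dashscope_api_key entry; the exact file format is not specified here.

```python
import configparser
import os
from pathlib import Path


def resolve_dashscope_api_key(credentials_path="~/.alibabacloud/credentials"):
    """Return the API key from DASHSCOPE_API_KEY, falling back to the
    credentials file; None if neither source is set."""
    key = os.environ.get("DASHSCOPE_API_KEY")
    if key:
        return key
    path = Path(credentials_path).expanduser()
    if path.is_file():
        # Assumption: INI-style file with a dashscope_api_key option
        # somewhere in its sections.
        parser = configparser.ConfigParser()
        parser.read(path)
        for section in parser.sections():
            if parser.has_option(section, "dashscope_api_key"):
                return parser.get(section, "dashscope_api_key")
    return None
```

The environment variable deliberately wins over the file so per-shell overrides behave as expected.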
Normalized interface (text.generate)
Request
- messages (array, required): standard chat turns.
- model (string, optional): default qwen3.5-plus.
- temperature (number, optional)
- top_p (number, optional)
- max_tokens (int, optional)
- enable_thinking (bool, optional)
- tools (array, optional)
- response_format (object, optional)
- stream (bool, optional)
Response
- text (string): assistant output.
- finish_reason (string, optional)
- usage (object, optional)
- raw (object, optional)
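Assembling a request for the normalized interface can be sketched like this; build_generate_request is a hypothetical helper (not part of the skill's scripts) that only includes optional fields when they are set, so provider defaults apply otherwise.

```python
def build_generate_request(messages, model="qwen3.5-plus", **options):
    """Build a text.generate request dict from the normalized interface.

    Keyword options are restricted to the optional fields listed above;
    None values are dropped so the provider's defaults take effect.
    """
    allowed = {"temperature", "top_p", "max_tokens", "enable_thinking",
               "tools", "response_format", "stream"}
    unknown = set(options) - allowed
    if unknown:
        raise ValueError(f"unsupported options: {sorted(unknown)}")
    request = {"model": model, "messages": messages}
    request.update({k: v for k, v in options.items() if v is not None})
    return request
```

Rejecting unknown option names early keeps saved request payloads clean and reproducible.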
Quick start (OpenAI-compatible endpoint)
curl -sS https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions \
-H "Authorization: Bearer $DASHSCOPE_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "qwen3.5-plus",
"messages": [
{"role": "system", "content": "You are a concise assistant."},
{"role": "user", "content": "Summarize why object storage helps media pipelines."}
],
"stream": false
}'
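The same call can be prepared from Python with only the standard library. This sketch builds (but does not send) the HTTP request for the OpenAI-compatible endpoint shown in the curl example; sending it with urllib.request.urlopen requires a valid DASHSCOPE_API_KEY.

```python
import json
import os
import urllib.request

CHAT_URL = "https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions"


def build_chat_request(messages, model="qwen3.5-plus", stream=False):
    """Mirror the curl quick start: POST JSON with a bearer token."""
    payload = {"model": model, "messages": messages, "stream": stream}
    return urllib.request.Request(
        CHAT_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ.get('DASHSCOPE_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Keeping request construction separate from sending makes it easy to save the payload under output/aliyun-qwen-generation/ as evidence before dispatching it.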
Local helper script
python skills/ai/text/aliyun-qwen-generation/scripts/prepare_generation_request.py \
--prompt "Draft a concise architecture summary for a media ingestion pipeline." \
--model qwen3.5-plus
Operational guidance
- Use snapshot IDs when reproducibility matters.
- Prefer qwen3.5-flash for lower-latency simple tasks and qwen3-max for harder multi-step tasks.
- Keep tool schemas minimal and explicit when enabling tool calls.
- For multimodal input, route to dedicated VL or Omni skills unless the task is primarily text-centric.
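The routing guidance above can be expressed as a small helper. The heuristic and the hint names (multi_step, latency_sensitive) are illustrative assumptions, not part of the skill interface; only the model names come from this document.

```python
def pick_model(task, snapshot=None):
    """Choose a model family per the operational guidance.

    `task` is a dict of boolean hints (hypothetical names); passing an
    explicit snapshot ID wins, for reproducibility.
    """
    if snapshot:  # e.g. a dated snapshot ID when reproducibility matters
        return snapshot
    if task.get("multi_step"):
        return "qwen3-max"        # harder multi-step tasks
    if task.get("latency_sensitive"):
        return "qwen3.5-flash"    # lower-latency simple tasks
    return "qwen3.5-plus"         # balanced default of this skill
```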
Output location
- Default output: output/aliyun-qwen-generation/requests/
- Override the base directory with OUTPUT_DIR.
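Resolving the output directory with the OUTPUT_DIR override can be sketched as below. It assumes OUTPUT_DIR replaces only the base directory (the `output/` root), which is one reading of "override base dir".

```python
import os
from pathlib import Path


def request_output_dir(base=None):
    """Return (and create) the directory for saved request payloads,
    honoring the OUTPUT_DIR base-directory override."""
    root = Path(base or os.environ.get("OUTPUT_DIR", "output"))
    target = root / "aliyun-qwen-generation" / "requests"
    target.mkdir(parents=True, exist_ok=True)
    return target
```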
References
references/sources.md
Repository
cinience/alicloud-skills