alicloud-ai-multimodal-qwen-ocr
Category: provider
Model Studio Qwen OCR
Validation
mkdir -p output/alicloud-ai-multimodal-qwen-ocr
python -m py_compile skills/ai/multimodal/alicloud-ai-multimodal-qwen-ocr/scripts/prepare_ocr_request.py && echo "py_compile_ok" > output/alicloud-ai-multimodal-qwen-ocr/validate.txt
Pass criteria: command exits 0 and output/alicloud-ai-multimodal-qwen-ocr/validate.txt is generated.
Output And Evidence
- Save request payloads, the selected OCR task name, and normalized output expectations under output/alicloud-ai-multimodal-qwen-ocr/.
- Keep the exact model, image source, and task configuration with each saved run.
Use Qwen OCR when the task is primarily text extraction or document structure parsing rather than broad visual reasoning.
Critical model names
Use one of these exact model strings:
- qwen-vl-ocr
- qwen-vl-ocr-latest
- qwen-vl-ocr-2025-11-20
- qwen-vl-ocr-2025-08-28
- qwen-vl-ocr-2025-04-13
- qwen-vl-ocr-2024-10-28
Selection guidance:
- Use qwen-vl-ocr for the stable channel.
- Use qwen-vl-ocr-latest only when you explicitly want the newest OCR behavior.
- Pin qwen-vl-ocr-2025-11-20 when you need reproducible document parsing based on the Qwen3-VL OCR upgrade.
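The guidance above can be sketched as a small lookup helper. The channel names ("stable", "latest", "pinned") are illustrative, not part of the skill's interface; only the model strings come from the list above.

```python
# Hypothetical helper: map a release channel to one of the documented
# model strings. Channel names are illustrative assumptions.
MODEL_CHANNELS = {
    "stable": "qwen-vl-ocr",
    "latest": "qwen-vl-ocr-latest",
    "pinned": "qwen-vl-ocr-2025-11-20",
}

def resolve_model(channel: str = "stable") -> str:
    """Return the exact model string for a channel, failing loudly on typos."""
    try:
        return MODEL_CHANNELS[channel]
    except KeyError:
        raise ValueError(f"unknown channel: {channel!r}") from None
```

Failing loudly on an unknown channel avoids silently sending a misspelled model string to the API.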
Prerequisites
- Install dependencies (recommended in a venv):
python3 -m venv .venv
. .venv/bin/activate
python -m pip install requests
- Set DASHSCOPE_API_KEY in the environment, or add dashscope_api_key to ~/.alibabacloud/credentials.
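A minimal sketch of that lookup order, assuming the credentials file is INI-style with a dashscope_api_key entry (adjust the parsing if your file uses a different layout):

```python
import configparser
import os
from pathlib import Path

def load_api_key() -> str:
    """Return the DashScope API key: environment first, then the
    ~/.alibabacloud/credentials file. The INI layout is an assumption."""
    key = os.environ.get("DASHSCOPE_API_KEY")
    if key:
        return key
    creds = Path.home() / ".alibabacloud" / "credentials"
    if creds.exists():
        parser = configparser.ConfigParser()
        parser.read(creds)
        for section in parser.sections():
            if parser.has_option(section, "dashscope_api_key"):
                return parser.get(section, "dashscope_api_key")
    raise RuntimeError("DASHSCOPE_API_KEY not set and no credentials file found")
```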
Normalized interface (ocr.extract)
Request
- image (string, required): HTTPS URL, local path, or data: URL.
- model (string, optional): default qwen-vl-ocr.
- prompt (string, optional): use when you want custom extraction instructions.
- task (string, optional): built-in OCR task.
- task_config (object, optional): configuration for a built-in task, such as extraction fields.
- enable_rotate (bool, optional): default false.
- min_pixels (int, optional)
- max_pixels (int, optional)
- max_tokens (int, optional)
- temperature (float, optional): recommended to keep near default/low values.
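The request fields above can be assembled and sanity-checked with a small builder. Treating prompt and task as mutually exclusive is an assumption here (built-in tasks carry their own official prompts); relax it if the API allows both.

```python
# Sketch: assemble a normalized ocr.extract request dict.
# Field names follow the interface above; the prompt/task exclusivity
# check is an assumption.
def build_request(image, model="qwen-vl-ocr", prompt=None, task=None,
                  task_config=None, enable_rotate=False, **limits):
    if not image:
        raise ValueError("image is required")
    if prompt and task:
        raise ValueError("use either a custom prompt or a built-in task, not both")
    req = {"image": image, "model": model, "enable_rotate": enable_rotate}
    if prompt:
        req["prompt"] = prompt
    if task:
        req["task"] = task
        if task_config:
            req["task_config"] = task_config
    # Optional numeric limits: min_pixels, max_pixels, max_tokens, temperature.
    req.update({k: v for k, v in limits.items() if v is not None})
    return req
```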
Response
- text (string): extracted text or structured markdown/HTML-style output.
- model (string)
- usage (object, optional)
Built-in OCR tasks
Use one of these values in task:
- text_recognition
- key_information_extraction
- document_parsing
- table_parsing
- formula_recognition
- multi_lan
- advanced_recognition
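Since an unrecognized task name would otherwise surface only as an API error, a guard over the enumerated values above is cheap insurance:

```python
# Allowed task values, taken from the built-in OCR task list above.
BUILT_IN_TASKS = {
    "text_recognition", "key_information_extraction", "document_parsing",
    "table_parsing", "formula_recognition", "multi_lan", "advanced_recognition",
}

def check_task(task: str) -> str:
    """Validate a task name locally before building the request."""
    if task not in BUILT_IN_TASKS:
        raise ValueError(
            f"unknown OCR task {task!r}; expected one of {sorted(BUILT_IN_TASKS)}")
    return task
```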
Quick start
Custom prompt:
python skills/ai/multimodal/alicloud-ai-multimodal-qwen-ocr/scripts/prepare_ocr_request.py \
--image "https://example.com/invoice.png" \
--prompt "Extract seller name, invoice date, amount, and tax number in JSON."
Built-in task:
python skills/ai/multimodal/alicloud-ai-multimodal-qwen-ocr/scripts/prepare_ocr_request.py \
--image "https://example.com/table.png" \
--task table_parsing \
--model qwen-vl-ocr-2025-11-20
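Once a request is prepared, it can be sent through DashScope's OpenAI-compatible chat endpoint. The URL and message shape below are assumptions based on the compatible-mode API; confirm the exact contract in references/api_reference.md before relying on this sketch.

```python
import json
import os

# Assumed DashScope OpenAI-compatible endpoint; verify against the API reference.
URL = "https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions"

def to_chat_body(req: dict) -> dict:
    """Translate a normalized ocr.extract request into a chat-completions body."""
    content = [{"type": "image_url", "image_url": {"url": req["image"]}}]
    if "prompt" in req:
        content.append({"type": "text", "text": req["prompt"]})
    return {
        "model": req.get("model", "qwen-vl-ocr"),
        "messages": [{"role": "user", "content": content}],
    }

def send(req: dict, timeout: int = 60) -> str:
    import requests  # installed in the prerequisites step
    headers = {
        "Authorization": f"Bearer {os.environ['DASHSCOPE_API_KEY']}",
        "Content-Type": "application/json",
    }
    resp = requests.post(URL, headers=headers,
                         data=json.dumps(to_chat_body(req)), timeout=timeout)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```

Keeping body construction separate from the network call lets you inspect and save the exact payload under output/ before sending it.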
Operational guidance
- Prefer built-in OCR tasks for standard parsing jobs because they use official task prompts.
- For critical business fields, add downstream validation rules after OCR.
- qwen-vl-ocr and older snapshots default to 4096 max output tokens unless higher limits are approved by Alibaba Cloud; qwen-vl-ocr-2025-11-20 follows the model maximum.
- Increase max_pixels only when small text is missed; this raises token cost.
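The downstream-validation point can be made concrete. This sketch assumes the model was prompted to return JSON (as in the quick-start invoice prompt); the field names and regex rules are illustrative, not part of the skill:

```python
import json
import re

# Illustrative post-OCR validation rules for critical business fields.
RULES = {
    "invoice_date": re.compile(r"^\d{4}-\d{2}-\d{2}$"),
    "tax_number": re.compile(r"^[0-9A-Z]{15,20}$"),
}

def validate_fields(raw: str) -> list[str]:
    """Return a list of problems found in the OCR output; empty means pass."""
    data = json.loads(raw)
    problems = []
    for field, pattern in RULES.items():
        value = str(data.get(field, ""))
        if not pattern.fullmatch(value):
            problems.append(f"{field}: {value!r} failed validation")
    return problems
```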
Output location
- Default output: output/alicloud-ai-multimodal-qwen-ocr/request.json
- Override the base directory with OUTPUT_DIR.
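A minimal sketch of how the default path and the OUTPUT_DIR override could be resolved (the variable name comes from this section; the helper itself is hypothetical):

```python
import os
from pathlib import Path

def output_path(filename: str = "request.json") -> Path:
    """Resolve the save location, honoring the OUTPUT_DIR override."""
    base = Path(os.environ.get("OUTPUT_DIR", "output"))
    return base / "alicloud-ai-multimodal-qwen-ocr" / filename
```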
References
- references/api_reference.md
- references/sources.md
Repository: cinience/alicloud-skills