# ascend-docker

Ascend Docker Container: create Docker containers configured for Huawei Ascend NPU development.
## Quick Start

```shell
# Privileged mode (default, auto-detect all devices)
./scripts/run-ascend-container.sh <image> <container_name>

# Basic mode with specific devices
./scripts/run-ascend-container.sh <image> <container_name> --mode basic

# Full mode with selected devices
./scripts/run-ascend-container.sh <image> <container_name> --mode full --device-list "0,1,2,3"
```
## Device Selection

The script auto-detects available NPU devices from `/dev/davinci*`. Use `--device-list` to select specific devices:

```shell
# Use all detected devices (default)
./scripts/run-ascend-container.sh <image> <container_name>

# Use specific devices
./scripts/run-ascend-container.sh <image> <container_name> --device-list "0,1,2,3"

# Use a device range
./scripts/run-ascend-container.sh <image> <container_name> --device-list "0-3"

# Combine ranges and individual devices
./scripts/run-ascend-container.sh <image> <container_name> --device-list "0-3,7,10-11"
```

Check available devices:

```shell
ls /dev/davinci* | grep -oE 'davinci[0-9]+$'
```
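The range syntax accepted by `--device-list` can be expanded with a few lines of shell. The sketch below (the `expand_device_list` helper is illustrative, not part of the script) shows how `0-3,7,10-11` maps to individual `--device` flags:

```shell
#!/bin/sh
# Illustrative helper: expand a list like "0-3,7,10-11" into device IDs.
expand_device_list() {
    for part in $(printf '%s' "$1" | tr ',' ' '); do
        case "$part" in
            *-*) seq "${part%-*}" "${part#*-}" ;;  # range N-M
            *)   printf '%s\n' "$part" ;;          # single ID
        esac
    done
}

# Each ID becomes one --device flag on the docker run command line.
for id in $(expand_device_list "0-3,7,10-11"); do
    printf -- '--device=/dev/davinci%s\n' "$id"
done
```

The same expansion logic explains why `0-3,7,10-11` and `0,1,2,3,7,10,11` are equivalent inputs.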
## Container Modes

### 1. Privileged Mode (Default)

Maximum permissions; suitable when there are no specific isolation requirements.

```shell
docker run -itd --privileged --name=<CONTAINER_NAME> --ipc=host --net=host \
    --device=/dev/davinci_manager \
    --device=/dev/devmm_svm \
    --device=/dev/hisi_hdc \
    -v /usr/local/sbin:/usr/local/sbin:ro \
    -v /usr/local/Ascend/driver:/usr/local/Ascend/driver:ro \
    -v /home:/home \
    -w /home \
    <IMAGE> \
    /bin/bash
```
### 2. Basic Mode

Maps specific devices with host networking; suited to inference workloads.

```shell
docker run -itd --net=host \
    --name=<CONTAINER_NAME> \
    --device=/dev/davinci_manager \
    --device=/dev/hisi_hdc \
    --device=/dev/devmm_svm \
    --device=/dev/davinci0 \
    --device=/dev/davinci1 \
    ... \
    -v /usr/local/Ascend/driver:/usr/local/Ascend/driver:ro \
    -v /usr/local/sbin:/usr/local/sbin:ro \
    -v /etc/localtime:/etc/localtime \
    -v /home:/home \
    <IMAGE> \
    /bin/bash
```
### 3. Full Mode

With profiling, logging, dump, and add-ons support.

```shell
docker run -itd --ipc=host \
    --name=<CONTAINER_NAME> \
    --device=/dev/davinci_manager \
    --device=/dev/devmm_svm \
    --device=/dev/hisi_hdc \
    --device=/dev/davinci0 \
    --device=/dev/davinci1 \
    ... \
    -v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
    -v /usr/local/Ascend/add-ons/:/usr/local/Ascend/add-ons/ \
    -v /usr/local/sbin/npu-smi:/usr/local/sbin/npu-smi \
    -v /usr/local/sbin/:/usr/local/sbin/ \
    -v /var/log/npu/conf/slog/slog.conf:/var/log/npu/conf/slog/slog.conf \
    -v /var/log/npu/slog/:/var/log/npu/slog \
    -v /var/log/npu/profiling/:/var/log/npu/profiling \
    -v /var/log/npu/dump/:/var/log/npu/dump \
    -v /var/log/npu/:/usr/slog \
    -v /etc/localtime:/etc/localtime \
    -v /home:/home \
    <IMAGE> \
    /bin/bash
```
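Full mode bind-mounts several host log directories; if a bind-mount source path is missing, Docker creates it owned by root, which can break later non-root writes. A small pre-flight sketch (the `ensure_dirs` helper and the temporary example paths are illustrative, not part of the script):

```shell
#!/bin/sh
# Illustrative pre-flight: create the host-side log directories before
# `docker run` so they exist with the invoking user's ownership.
ensure_dirs() {
    for d in "$@"; do
        mkdir -p "$d" || return 1
    done
}

# Example paths (replace with the real /var/log/npu/... mount sources).
base=$(mktemp -d)
ensure_dirs "$base/slog" "$base/profiling" "$base/dump"
```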
## Mode Comparison
| Feature | Privileged | Basic | Full |
|---|---|---|---|
| Network mode | host | host | - |
| IPC mode | host | - | host |
| Device access | All (via privileged) | Selected devices | Selected devices |
| Profiling support | ✓ | ✗ | ✓ |
| Dump support | ✓ | ✗ | ✓ |
| Logging (slog) | ✓ | ✗ | ✓ |
| Security | Lowest | Higher | Higher |
## Device Parameters

| Device | Purpose |
|---|---|
| `/dev/davinci_manager` | NPU device manager |
| `/dev/devmm_svm` | Device memory management |
| `/dev/hisi_hdc` | HDC communication device |
| `/dev/davinci<N>` | Individual NPU devices (0, 1, 2, ...) |
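Whichever mode is used, the three shared management nodes in the table above must exist on the host. A minimal pre-flight check (the `check_nodes` helper is illustrative, not part of the script):

```shell
#!/bin/sh
# Illustrative check: verify that required device nodes exist on the host.
check_nodes() {
    for dev in "$@"; do
        if [ ! -e "$dev" ]; then
            echo "missing $dev (is the Ascend driver installed?)" >&2
            return 1
        fi
    done
}

# Typical invocation before launching a basic/full-mode container:
check_nodes /dev/davinci_manager /dev/devmm_svm /dev/hisi_hdc \
    || echo "device check failed"
```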
## Volume Parameters

| Volume | Purpose |
|---|---|
| `/usr/local/Ascend/driver` | Ascend driver libraries |
| `/usr/local/sbin` | NPU management tools (npu-smi) |
| `/usr/local/Ascend/add-ons` | Additional Ascend components |
| `/var/log/npu/slog` | System logs |
| `/var/log/npu/profiling` | Profiling data |
| `/var/log/npu/dump` | Dump data |
| `/etc/localtime` | Timezone sync |
| `/home` | User workspace |
## Common Images

```
ascendhub.huawei.com/public-ascendhub/ascend-pytorch:24.0.RC1
ascendhub.huawei.com/public-ascendhub/ascend-mindspore:24.0.RC1
ascendhub.huawei.com/public-ascendhub/ascend-toolkit:24.0.RC1
```
## Container Management

```shell
docker exec -it <container_name> bash   # open a shell in a running container
docker stop <container_name>            # stop the container
docker start <container_name>           # restart a stopped container
docker rm -f <container_name>           # force-remove the container
```
## Post-Setup

For self-built images, configure environment variables:

```shell
echo 'source /usr/local/Ascend/ascend-toolkit/set_env.sh' >> ~/.bashrc
source ~/.bashrc
```
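After sourcing `set_env.sh`, the CANN command-line tools should resolve on `PATH`. A quick sanity check (the `check_tool` helper is illustrative; `atc` is assumed here as a representative toolkit binary):

```shell
#!/bin/sh
# Illustrative sanity check: is a given tool resolvable on PATH?
check_tool() {
    command -v "$1" >/dev/null 2>&1
}

if check_tool atc; then
    echo "CANN toolkit found on PATH"
else
    echo "atc not found; source set_env.sh first" >&2
fi
```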
## Official References