cuopt-installation-developer
cuOpt Installation — Developer
Set up an environment to build cuOpt from source and run tests. For contribution behavior and PRs, see the developer skill once the build works.
When to use this skill
- User wants to build cuOpt (clone, build deps, build, tests).
- Not for using cuOpt (pip/conda) — use the user installation skill instead.
Required questions (environment)
Ask these if not already clear:
- OS and GPU — Linux? Which CUDA version (e.g. 12.x)?
- Goal — Contributing upstream, or local fork/modification?
- Component — C++/CUDA core, Python bindings, server, docs, or CI?
Validate CUDA/driver compatibility before building
Before creating the conda env or running ./build.sh, check that the conda env's
CUDA toolkit major version matches what the installed driver supports. CUDA
guarantees minor-version compatibility within a major (e.g. CUDA 12.9 runtime
works on a driver that tops out at CUDA 12.8), but a major-version jump does
not (e.g. CUDA 13.x runtime on a CUDA-12-only driver). A major mismatch builds
successfully but fails at runtime inside RMM with:
RMM failure ... cudaMallocAsync not supported with this CUDA driver/runtime version
Steps:
- Query the driver's max CUDA: run nvidia-smi and read the top-right "CUDA Version:" field. Note the major version (e.g. 12.8 → major 12).
- List available env files: ls conda/environments/all_cuda-*_arch-$(uname -m).yaml. Each filename encodes the CUDA version (e.g. all_cuda-129_... = CUDA 12.9, all_cuda-131_... = CUDA 13.1).
- Pick an env whose CUDA major is ≤ the driver's max CUDA major. The env's minor version may exceed the driver's minor version — that's supported.
- If a .cuopt_env* was already built against an incompatible CUDA major version, create a new env against a compatible toolkit and run ./build.sh clean before rebuilding — do not reuse cached build artifacts across CUDA major versions.
Do this check before starting the build — a full build takes tens of minutes and the failure only appears when tests run.
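The version check above can be sketched in shell. The driver version and env filename below are placeholder samples (in practice the version comes from nvidia-smi and the filename from conda/environments/), and the parsing assumes the filename encodes a single-digit minor version (e.g. 129 → CUDA 12.9):

```shell
# Sample values — replace with real ones:
#   driver_cuda: from `nvidia-smi`, the top-right "CUDA Version:" field
#   env_file:    a filename from conda/environments/
driver_cuda="12.8"
env_file="all_cuda-129_arch-x86_64.yaml"

driver_major="${driver_cuda%%.*}"            # "12.8" -> "12"
env_digits="${env_file#all_cuda-}"           # strip prefix -> "129_arch-x86_64.yaml"
env_digits="${env_digits%%_*}"               # keep version digits -> "129"
env_major="${env_digits%?}"                  # drop the minor digit -> "12"

if [ "$env_major" -le "$driver_major" ]; then
  echo "OK: env CUDA major $env_major <= driver CUDA major $driver_major"
else
  echo "MISMATCH: env CUDA major $env_major > driver CUDA major $driver_major"
fi
```

With the sample values this prints the OK line; swapping in an all_cuda-131_... filename against a 12.x driver would print MISMATCH, which is the case to avoid.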
Typical setup (conceptual)
- Clone the cuOpt repo (and submodules if any).
- Build dependencies — CUDA toolkit, compiler, CMake; see repo docs for the canonical list.
- Configure and build — e.g. top-level build.sh or CMake; Debug/Release.
- Run tests — e.g. pytest for Python, ctest or the project test runner for C++.
- Optional — Python env for bindings; pre-commit or style checks.
Use the repository’s own documentation (README, CONTRIBUTING, or docs/) for exact commands and versions.
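The conceptual steps above might look like the following sketch, assuming the layout hinted at earlier (top-level build.sh, env files under conda/environments/). The repo URL, env name, build flags, and test paths are assumptions to verify against the repo's README:

```shell
# Hypothetical end-to-end setup sketch — confirm every path and flag
# against the cuOpt repo's own README/CONTRIBUTING before running.
setup_cuopt_dev() {
  git clone --recurse-submodules https://github.com/NVIDIA/cuopt.git
  cd cuopt || return 1

  # Env file chosen per the CUDA major-version check above.
  conda env create -n cuopt_dev \
    -f "conda/environments/all_cuda-129_arch-$(uname -m).yaml"

  # In a non-interactive script, `conda activate` needs the shell hook first:
  eval "$(conda shell.bash hook)"
  conda activate cuopt_dev

  ./build.sh                     # top-level build; see ./build.sh --help for targets
  pytest python/                 # Python tests (path is an assumption)
  ctest --test-dir cpp/build     # C++ tests (path is an assumption)
}
```

The function is only defined here, not invoked, so each step can be run and inspected interactively the first time through.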
After setup
Once the developer can build and run tests, use cuopt-developer for behavior rules, code patterns, and contribution workflow (DCO, PRs).