NemoClaw Configure Inference

Lists all inference providers offered during NemoClaw onboarding. Use when explaining which providers are available, what the onboard wizard presents, or how inference routing works.

Context

NemoClaw supports multiple inference providers. During onboarding, the nemoclaw onboard wizard presents a numbered list of providers to choose from. Your selection determines where the agent's inference traffic is routed.

How Inference Routing Works

The agent inside the sandbox sends all inference requests to inference.local; it never connects to a provider directly. OpenShell intercepts this traffic on the host and forwards it to the provider you selected during onboarding.

Provider credentials stay on the host. The sandbox does not receive your API key.
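The routing model above can be sketched as a small host-side proxy. This is a minimal illustration of the pattern, not NemoClaw's or OpenShell's actual implementation: the provider name, base URL, port, and the PROVIDER_API_KEY environment variable are all assumptions made for the sketch.

```python
import os
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical provider table -- names and URLs are illustrative only.
PROVIDERS = {
    "example-provider": "https://api.example-provider.invalid/v1",
}
SELECTED = "example-provider"  # the choice made in the onboard wizard

def upstream_url(path: str) -> str:
    """Map a request path seen on inference.local to the selected provider."""
    return PROVIDERS[SELECTED] + path

class InferenceProxy(BaseHTTPRequestHandler):
    """Host-side handler: the sandboxed agent addresses inference.local;
    the host forwards each request to the selected provider and injects
    the credential, so the API key never enters the sandbox."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        req = urllib.request.Request(upstream_url(self.path), data=body, method="POST")
        # Credential is read from the host environment, never from the sandbox.
        req.add_header("Authorization", "Bearer " + os.environ["PROVIDER_API_KEY"])
        req.add_header("Content-Type", "application/json")
        with urllib.request.urlopen(req) as resp:
            self.send_response(resp.status)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(resp.read())

if __name__ == "__main__":
    # Bind only the host interface that the sandbox resolves as inference.local.
    HTTPServer(("127.0.0.1", 8080), InferenceProxy).serve_forever()
```

Because the key is injected on the host side, rotating or revoking a credential requires no change inside the sandbox.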
