NemoClaw does not run on Windows natively. It relies on Linux kernel features like Landlock, seccomp, and network namespaces — none of which exist on Windows. Even experienced developers have struggled to get it working, and the official documentation only supports Ubuntu 22.04 or later.
But there is a way. By using WSL2 (Windows Subsystem for Linux), you can run a full Ubuntu environment inside Windows and install NemoClaw inside that. This guide walks you through every step — from installing WSL2 and Docker Desktop, to setting up NVIDIA GPU passthrough (the step most guides skip), to running the NemoClaw installer and getting your first agent response.
We also cover the common apt repo errors you’ll likely hit along the way and how to fix them. All commands are single-line and copy-paste friendly — no backslashes, no multi-line pipes. If you prefer to watch instead of read, the full video walkthrough is linked below.
Step 1: Install WSL2 with Ubuntu
PowerShell (Admin):
wsl --install -d Ubuntu-22.04
After restart, in Ubuntu:
sudo apt update && sudo apt upgrade -y
Step 2: Enable systemd
sudo nano /etc/wsl.conf
Add:
[boot]
systemd=true
PowerShell:
wsl --shutdown
Reopen Ubuntu, verify:
systemctl is-system-running
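Note that `systemctl is-system-running` may print `degraded` rather than `running` under WSL2; that still means systemd is active. As an extra pre-restart sanity check, you can grep the config for the flag directly. A minimal sketch (the function name is ours; `/etc/wsl.conf` is the standard location):

```shell
# Returns 0 iff the given wsl.conf enables systemd.
# Coarse sketch: just looks for the key=value line, ignoring sections.
check_systemd_conf() {
    grep -qE '^[[:space:]]*systemd[[:space:]]*=[[:space:]]*true[[:space:]]*$' "$1"
}

# Usage (inside Ubuntu): check_systemd_conf /etc/wsl.conf && echo "systemd enabled"
```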
Step 3: Docker Desktop
- Install Docker Desktop for Windows: https://www.docker.com/products/docker-desktop/
- Settings → Resources → WSL Integration → toggle on Ubuntu → Apply & Restart
Verify in Ubuntu:
docker run hello-world
Step 4: NVIDIA GPU passthrough
- Install latest Windows NVIDIA driver: https://www.nvidia.com/Download/index.aspx
- Do NOT install a Linux NVIDIA driver inside WSL2
In Ubuntu — add signing key:
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
Add repo (use the .list file URL, not the bare directory):
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
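That `sed` rewrites each `deb` line to pin apt to the keyring imported in the previous command. Applied to a representative line (the exact contents of the upstream .list file may differ slightly), the transformation looks like this:

```shell
# Show what the sed from the repo-setup command does to a deb line.
line='deb https://nvidia.github.io/libnvidia-container/stable/deb/amd64 /'
echo "$line" | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g'
# Output: deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://nvidia.github.io/libnvidia-container/stable/deb/amd64 /
```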
Install:
sudo apt update
sudo apt install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
Restart Docker Desktop, then:
sudo apt install -y nvidia-cuda-toolkit
Verify (both must work):
nvidia-smi
nvcc --version
If nvidia-smi fails → update Windows NVIDIA driver → wsl --shutdown → retry.
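The failure modes split cleanly: `nvidia-smi` is exposed into WSL2 by the Windows driver, while `nvcc` comes from the apt package. A small triage helper, sketched under that assumption (the function name and messages are ours):

```shell
# Report which half of the GPU toolchain is missing.
gpu_check() {
    ok=0
    command -v nvidia-smi >/dev/null 2>&1 || { echo "nvidia-smi missing: update the Windows driver, then 'wsl --shutdown'"; ok=1; }
    command -v nvcc >/dev/null 2>&1 || { echo "nvcc missing: rerun 'sudo apt install -y nvidia-cuda-toolkit'"; ok=1; }
    [ "$ok" -eq 0 ] && echo "GPU toolchain looks complete"
    return "$ok"
}
```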
Step 5: Node.js 20+
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt install -y nodejs
node -v && npm -v
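If you already have an older Node installed, the check above only prints the version; a small sketch that actually enforces the 20+ requirement (the helper name is ours):

```shell
# Extract the major version from `node -v` output, e.g. "v20.11.1" -> 20.
node_major() {
    printf '%s\n' "$1" | sed 's/^v//' | cut -d. -f1
}

# Fail loudly if the installed Node is too old for NemoClaw.
v=$(node -v 2>/dev/null || echo v0.0.0)
if [ "$(node_major "$v")" -ge 20 ]; then
    echo "Node $v is new enough"
else
    echo "Node $v is too old; rerun the nodesource setup above" >&2
fi
```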
Step 6: Install NemoClaw (CLI + onboard wizard)
curl -fsSL https://nvidia.com/nemoclaw.sh | bash
The wizard runs automatically through [1/7] to [7/7]:
- [4/7] — Enter your NVIDIA API key from https://build.nvidia.com
- [5/7] — Choose cloud model (default: nemotron-3-super-120b-a12b)
- [7/7] — Accept suggested policy presets (pypi, npm) by pressing Y
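Before pasting the key at [4/7], a shallow format check can save a failed wizard run. Keys from build.nvidia.com begin with `nvapi-` (an assumption based on the `nvapi-YOUR_KEY_HERE` placeholder used in the manual workaround later in this guide); a sketch:

```shell
# Shallow sanity check: expects the key to start with "nvapi-"
# followed by at least one character (assumed format).
key_looks_valid() {
    case "$1" in
        nvapi-?*) return 0 ;;
        *)        return 1 ;;
    esac
}
```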
After it finishes:
source ~/.bashrc
nemoclaw --version
openshell --version
Step 7: Connect and test
Check sandbox status:
nemoclaw <sandbox-name> status
Connect:
nemoclaw <sandbox-name> connect
Inside the sandbox, launch chat:
openclaw tui
Or test via CLI:
openclaw agent --agent main --local -m "hello" --session-id test
Exit sandbox:
exit
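A freshly connected sandbox can take a moment before the agent answers; a small retry wrapper around the CLI test saves re-typing. This is a sketch (the wrapper is ours; the `openclaw` invocation is the one above):

```shell
# Retry a command until it produces non-empty output, up to N attempts.
smoke_test() {
    cmd=$1
    tries=${2:-3}
    i=1
    while [ "$i" -le "$tries" ]; do
        out=$(sh -c "$cmd" 2>/dev/null)
        if [ -n "$out" ]; then
            printf '%s\n' "$out"
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    echo "no response after $tries attempts" >&2
    return 1
}

# Usage inside the sandbox:
# smoke_test 'openclaw agent --agent main --local -m "hello" --session-id test'
```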
Check logs if anything feels off:
nemoclaw boxplant logs --follow
Step 8: Harden WSL2
sudo nano /etc/wsl.conf
Full config:
[boot]
systemd=true
[interop]
enabled=false
appendWindowsPath=false
[automount]
enabled=false
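Before running `wsl --shutdown`, you can confirm the lockdown flags were saved. This sketch (helper name is ours) only greps for key=value lines; it does not distinguish the `[interop]` and `[automount]` sections, both of which set `enabled=false`:

```shell
# Check that the hardening flags from the config above are present.
harden_check() {
    conf=$1
    fail=0
    for kv in 'systemd=true' 'enabled=false' 'appendWindowsPath=false'; do
        grep -q "^$kv" "$conf" || { echo "missing: $kv"; fail=1; }
    done
    return "$fail"
}

# Usage: harden_check /etc/wsl.conf && echo "hardened"
```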
PowerShell:
wsl --shutdown
Optional — memory limit (create %UserProfile%\.wslconfig):
[wsl2]
memory=12GB
swap=8GB
Daily Use
nemoclaw <sandbox-name> connect
openclaw tui
Nuclear Reset (if things break)
openshell sandbox delete <sandbox-name>
openshell gateway destroy --name nemoclaw
docker volume rm openshell-cluster-nemoclaw
Then rerun curl -fsSL https://nvidia.com/nemoclaw.sh | bash from Step 6.
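Since this step is destructive, it can help to preview exactly what will be torn down first. A sketch of a dry-run wrapper (the function is ours; `my-sandbox` is the placeholder name used in the manual workaround below, so substitute your own):

```shell
# Wrap the three teardown commands with a --dry-run preview.
reset_nemoclaw() {
    for cmd in \
        'openshell sandbox delete my-sandbox' \
        'openshell gateway destroy --name nemoclaw' \
        'docker volume rm openshell-cluster-nemoclaw'; do
        if [ "$1" = "--dry-run" ]; then
            echo "would run: $cmd"
        else
            sh -c "$cmd" 2>/dev/null || true
        fi
    done
}

# Preview first, then run for real:
# reset_nemoclaw --dry-run
# reset_nemoclaw
```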
Manual Workaround (only if Step 6 wizard fails with sandbox errors)
openshell sandbox delete my-sandbox 2>/dev/null
openshell gateway destroy --name nemoclaw 2>/dev/null
docker volume rm openshell-cluster-nemoclaw 2>/dev/null
openshell gateway start --name nemoclaw
openshell status
openshell provider create --name nvidia-nim --type nvidia --credential NVIDIA_API_KEY=nvapi-YOUR_KEY_HERE
openshell inference set --provider nvidia-nim --model nvidia/nemotron-3-super-120b-a12b
openshell sandbox create --name my-sandbox --from openclaw
openshell sandbox ssh my-sandbox
openclaw onboard
When prompted for provider → select Custom Provider → enter https://inference.local/v1
If Anthropic key is set: unset ANTHROPIC_API_KEY
