2 changes: 1 addition & 1 deletion .agents/skills/nemoclaw-overview/SKILL.md
@@ -94,7 +94,7 @@ NemoClaw provides the following capabilities on top of the OpenShell runtime.
| Hardened blueprint | A security-first Dockerfile with capability drops, least-privilege network rules, and declarative policy. |
| State management | Safe migration of agent state across machines with credential stripping and integrity verification. |
| Messaging bridges | Host-side processes that connect Telegram, Discord, and Slack to the sandboxed agent. |
| Routed inference | Provider-routed model calls through the OpenShell gateway, transparent to the agent. Supports NVIDIA Endpoints, OpenAI, Anthropic, Google Gemini, and local Ollama. |
| Routed inference | Provider-routed model calls through the OpenShell gateway, transparent to the agent. Supports NVIDIA Endpoints, OpenAI, Anthropic, Google Gemini, compatible OpenAI or Anthropic endpoints, and the caveated Local Ollama path. |
| Layered protection | Network, filesystem, process, and inference controls that can be hot-reloaded or locked at creation. |

## Challenge
2 changes: 1 addition & 1 deletion .agents/skills/nemoclaw-overview/references/overview.md
@@ -23,7 +23,7 @@ NemoClaw provides the following capabilities on top of the OpenShell runtime.
| Hardened blueprint | A security-first Dockerfile with capability drops, least-privilege network rules, and declarative policy. |
| State management | Safe migration of agent state across machines with credential stripping and integrity verification. |
| Messaging bridges | Host-side processes that connect Telegram, Discord, and Slack to the sandboxed agent. |
| Routed inference | Provider-routed model calls through the OpenShell gateway, transparent to the agent. Supports NVIDIA Endpoints, OpenAI, Anthropic, Google Gemini, and local Ollama. |
| Routed inference | Provider-routed model calls through the OpenShell gateway, transparent to the agent. Supports NVIDIA Endpoints, OpenAI, Anthropic, Google Gemini, compatible OpenAI or Anthropic endpoints, and the caveated Local Ollama path. |
| Layered protection | Network, filesystem, process, and inference controls that can be hot-reloaded or locked at creation. |

## Challenge
3 changes: 2 additions & 1 deletion .agents/skills/nemoclaw-reference/references/commands.md
@@ -45,7 +45,8 @@ $ nemoclaw onboard
```

The wizard prompts for a provider first, then collects the provider credential if needed.
Supported non-experimental choices include NVIDIA Endpoints, OpenAI, Anthropic, Google Gemini, and compatible OpenAI or Anthropic endpoints.
Supported provider choices include NVIDIA Endpoints, OpenAI, Anthropic, Google Gemini, and compatible OpenAI or Anthropic endpoints.
Local Ollama is also available in the standard onboarding flow as a caveated provider path.
Credentials are stored in `~/.nemoclaw/credentials.json`.
The legacy `nemoclaw setup` command is deprecated; use `nemoclaw onboard` instead.

31 changes: 17 additions & 14 deletions .agents/skills/nemoclaw-reference/references/inference-profiles.md
@@ -16,18 +16,21 @@ At onboard time, NemoClaw configures:

That means the sandbox knows which model family to use, while OpenShell owns the actual provider credential and upstream endpoint.

## Supported Providers

The following non-experimental provider paths are available through `nemoclaw onboard`.

| Provider | Endpoint Type | Notes |
|---|---|---|
| NVIDIA Endpoints | OpenAI-compatible | Hosted models on `integrate.api.nvidia.com` |
| OpenAI | Native OpenAI-compatible | Uses OpenAI model IDs |
| Other OpenAI-compatible endpoint | Custom OpenAI-compatible | For compatible proxies and gateways |
| Anthropic | Native Anthropic | Uses `anthropic-messages` |
| Other Anthropic-compatible endpoint | Custom Anthropic-compatible | For Claude proxies and compatible gateways |
| Google Gemini | OpenAI-compatible | Uses Google's OpenAI-compatible endpoint |
## Provider Status

The following provider paths are available through `nemoclaw onboard`.

| Provider | Status | Endpoint Type | Notes |
|---|---|---|---|
| NVIDIA Endpoints | Supported | OpenAI-compatible | Hosted models on `integrate.api.nvidia.com` |
| OpenAI | Supported | Native OpenAI-compatible | Uses OpenAI model IDs |
| Other OpenAI-compatible endpoint | Supported | Custom OpenAI-compatible | For compatible proxies and gateways |
| Anthropic | Supported | Native Anthropic | Uses `anthropic-messages` |
| Other Anthropic-compatible endpoint | Supported | Custom Anthropic-compatible | For Claude proxies and compatible gateways |
| Google Gemini | Supported | OpenAI-compatible | Uses Google's OpenAI-compatible endpoint |
| Local Ollama | Caveated | Local Ollama API | Available in the standard onboarding flow when Ollama is installed or running on the host |
| Local NVIDIA NIM | Experimental | Local OpenAI-compatible | Requires `NEMOCLAW_EXPERIMENTAL=1` and a NIM-capable GPU |
| Local vLLM | Experimental | Local OpenAI-compatible | Requires `NEMOCLAW_EXPERIMENTAL=1` and a server already running on `localhost:8000` |

## Validation During Onboarding

@@ -46,7 +49,7 @@ If validation fails, the wizard does not continue to sandbox creation.

## Local Ollama

Local Ollama is available in the standard onboarding flow when Ollama is installed or running on the host.
Local Ollama is a caveated provider path available in the standard onboarding flow when Ollama is installed or running on the host.
It uses the same routed `inference.local` pattern, but the upstream runtime runs locally instead of in the cloud.

Ollama gets additional onboarding help:
@@ -58,7 +61,7 @@

## Experimental Local Providers

The following local providers require `NEMOCLAW_EXPERIMENTAL=1`:
The following local providers remain experimental and require `NEMOCLAW_EXPERIMENTAL=1`:

- Local NVIDIA NIM (requires a NIM-capable GPU)
- Local vLLM (must already be running on `localhost:8000`)
16 changes: 9 additions & 7 deletions README.md
@@ -54,13 +54,15 @@ The sandbox image is approximately 2.4 GB compressed. During image push, the Doc

#### Container Runtimes

| Platform | Supported runtimes | Notes |
|----------|--------------------|-------|
| Linux | Docker | Primary supported path. |
| macOS (Apple Silicon) | Colima, Docker Desktop | Install Xcode Command Line Tools (`xcode-select --install`) and start the runtime before running the installer. |
| macOS (Intel) | Podman | Not supported yet. Depends on OpenShell support for Podman on macOS. |
| Windows WSL | Docker Desktop (WSL backend) | Supported target path. |
| DGX Spark | Docker | Refer to the [DGX Spark setup guide](https://github.com/NVIDIA/NemoClaw/blob/main/spark-install.md) for cgroup v2 and Docker configuration. |
| Platform | Supported runtimes | Status | Notes |
|----------|--------------------|--------|-------|
| Linux | Docker | Supported | Primary supported path. |
| macOS (Apple Silicon) | Colima, Docker Desktop | Caveated | Install Xcode Command Line Tools (`xcode-select --install`) and start the runtime before running the installer. |
| macOS (Intel) | Podman | Caveated | Depends on future OpenShell support for Podman on macOS. |
| Windows WSL2 | Docker Desktop (WSL backend) | Out of scope | The CLI may run in WSL2, but WSL2 is not part of the supported NemoClaw target matrix. |
| Windows native | N/A | Out of scope | Native Windows hosts are not part of the supported NemoClaw target matrix. |
| Jetson | N/A | Out of scope | Jetson hosts are not part of the supported NemoClaw target matrix. |
| DGX Spark | Docker | Supported | Refer to the [DGX Spark setup guide](docs/reference/commands.md#nemoclaw-setup-spark) for cgroup v2 and Docker configuration. |

### Install NemoClaw and Onboard OpenClaw Agent

2 changes: 1 addition & 1 deletion docs/about/overview.md
@@ -45,7 +45,7 @@ NemoClaw provides the following capabilities on top of the OpenShell runtime.
| Hardened blueprint | A security-first Dockerfile with capability drops, least-privilege network rules, and declarative policy. |
| State management | Safe migration of agent state across machines with credential stripping and integrity verification. |
| Messaging bridges | Host-side processes that connect Telegram, Discord, and Slack to the sandboxed agent. |
| Routed inference | Provider-routed model calls through the OpenShell gateway, transparent to the agent. Supports NVIDIA Endpoints, OpenAI, Anthropic, Google Gemini, and local Ollama. |
| Routed inference | Provider-routed model calls through the OpenShell gateway, transparent to the agent. Supports NVIDIA Endpoints, OpenAI, Anthropic, Google Gemini, compatible OpenAI or Anthropic endpoints, and the caveated Local Ollama path. |
| Layered protection | Network, filesystem, process, and inference controls that can be hot-reloaded or locked at creation. |

## Challenge
16 changes: 9 additions & 7 deletions docs/get-started/quickstart.md
@@ -57,13 +57,15 @@ The sandbox image is approximately 2.4 GB compressed. During image push, the Doc

### Container Runtimes

| Platform | Supported runtimes | Notes |
|----------|--------------------|-------|
| Linux | Docker | Primary supported path. |
| macOS (Apple Silicon) | Colima, Docker Desktop | Install Xcode Command Line Tools (`xcode-select --install`) and start the runtime before running the installer. |
| macOS (Intel) | Podman | Not supported yet. Depends on OpenShell support for Podman on macOS. |
| Windows WSL | Docker Desktop (WSL backend) | Supported target path. |
| DGX Spark | Docker | Refer to the [DGX Spark setup guide](https://github.com/NVIDIA/NemoClaw/blob/main/spark-install.md) for cgroup v2 and Docker configuration. |
| Platform | Supported runtimes | Status | Notes |
|----------|--------------------|--------|-------|
| Linux | Docker | Supported | Primary supported path. |
| macOS (Apple Silicon) | Colima, Docker Desktop | Caveated | Install Xcode Command Line Tools (`xcode-select --install`) and start the runtime before running the installer. |
| macOS (Intel) | Podman | Caveated | Depends on future OpenShell support for Podman on macOS. |
| Windows WSL2 | Docker Desktop (WSL backend) | Out of scope | The CLI may run in WSL2, but WSL2 is not part of the supported NemoClaw target matrix. |
| Windows native | N/A | Out of scope | Native Windows hosts are not part of the supported NemoClaw target matrix. |
| Jetson | N/A | Out of scope | Jetson hosts are not part of the supported NemoClaw target matrix. |
| DGX Spark | Docker | Supported | Refer to the [DGX Spark setup guide](../reference/commands.md#nemoclaw-setup-spark) for cgroup v2 and Docker configuration. |

## Install NemoClaw and Onboard OpenClaw Agent

3 changes: 2 additions & 1 deletion docs/reference/commands.md
@@ -67,7 +67,8 @@ $ nemoclaw onboard
```

The wizard prompts for a provider first, then collects the provider credential if needed.
Supported non-experimental choices include NVIDIA Endpoints, OpenAI, Anthropic, Google Gemini, and compatible OpenAI or Anthropic endpoints.
Supported provider choices include NVIDIA Endpoints, OpenAI, Anthropic, Google Gemini, and compatible OpenAI or Anthropic endpoints.
Local Ollama is also available in the standard onboarding flow as a caveated provider path.
Credentials are stored in `~/.nemoclaw/credentials.json`.
The legacy `nemoclaw setup` command is deprecated; use `nemoclaw onboard` instead.

31 changes: 17 additions & 14 deletions docs/reference/inference-profiles.md
@@ -38,18 +38,21 @@ At onboard time, NemoClaw configures:

That means the sandbox knows which model family to use, while OpenShell owns the actual provider credential and upstream endpoint.

## Supported Providers

The following non-experimental provider paths are available through `nemoclaw onboard`.

| Provider | Endpoint Type | Notes |
|---|---|---|
| NVIDIA Endpoints | OpenAI-compatible | Hosted models on `integrate.api.nvidia.com` |
| OpenAI | Native OpenAI-compatible | Uses OpenAI model IDs |
| Other OpenAI-compatible endpoint | Custom OpenAI-compatible | For compatible proxies and gateways |
| Anthropic | Native Anthropic | Uses `anthropic-messages` |
| Other Anthropic-compatible endpoint | Custom Anthropic-compatible | For Claude proxies and compatible gateways |
| Google Gemini | OpenAI-compatible | Uses Google's OpenAI-compatible endpoint |
## Provider Status

The following provider paths are available through `nemoclaw onboard`.

| Provider | Status | Endpoint Type | Notes |
|---|---|---|---|
| NVIDIA Endpoints | Supported | OpenAI-compatible | Hosted models on `integrate.api.nvidia.com` |
| OpenAI | Supported | Native OpenAI-compatible | Uses OpenAI model IDs |
| Other OpenAI-compatible endpoint | Supported | Custom OpenAI-compatible | For compatible proxies and gateways |
| Anthropic | Supported | Native Anthropic | Uses `anthropic-messages` |
| Other Anthropic-compatible endpoint | Supported | Custom Anthropic-compatible | For Claude proxies and compatible gateways |
| Google Gemini | Supported | OpenAI-compatible | Uses Google's OpenAI-compatible endpoint |
| Local Ollama | Caveated | Local Ollama API | Available in the standard onboarding flow when Ollama is installed or running on the host |
| Local NVIDIA NIM | Experimental | Local OpenAI-compatible | Requires `NEMOCLAW_EXPERIMENTAL=1` and a NIM-capable GPU |
| Local vLLM | Experimental | Local OpenAI-compatible | Requires `NEMOCLAW_EXPERIMENTAL=1` and a server already running on `localhost:8000` |

## Validation During Onboarding

@@ -68,7 +71,7 @@ If validation fails, the wizard does not continue to sandbox creation.

## Local Ollama

Local Ollama is available in the standard onboarding flow when Ollama is installed or running on the host.
Local Ollama is a caveated provider path available in the standard onboarding flow when Ollama is installed or running on the host.
It uses the same routed `inference.local` pattern, but the upstream runtime runs locally instead of in the cloud.

Ollama gets additional onboarding help:
@@ -80,7 +83,7 @@

## Experimental Local Providers

The following local providers require `NEMOCLAW_EXPERIMENTAL=1`:
The following local providers remain experimental and require `NEMOCLAW_EXPERIMENTAL=1`:

- Local NVIDIA NIM (requires a NIM-capable GPU)
- Local vLLM (must already be running on `localhost:8000`)