diff --git a/.agents/skills/nemoclaw-overview/SKILL.md b/.agents/skills/nemoclaw-overview/SKILL.md
index 247fe07f6..5e450d087 100644
--- a/.agents/skills/nemoclaw-overview/SKILL.md
+++ b/.agents/skills/nemoclaw-overview/SKILL.md
@@ -94,7 +94,7 @@ NemoClaw provides the following capabilities on top of the OpenShell runtime.
 | Hardened blueprint | A security-first Dockerfile with capability drops, least-privilege network rules, and declarative policy. |
 | State management | Safe migration of agent state across machines with credential stripping and integrity verification. |
 | Messaging bridges | Host-side processes that connect Telegram, Discord, and Slack to the sandboxed agent. |
-| Routed inference | Provider-routed model calls through the OpenShell gateway, transparent to the agent. Supports NVIDIA Endpoints, OpenAI, Anthropic, Google Gemini, and local Ollama. |
+| Routed inference | Provider-routed model calls through the OpenShell gateway, transparent to the agent. Supports NVIDIA Endpoints, OpenAI, Anthropic, Google Gemini, compatible OpenAI or Anthropic endpoints, and the caveated Local Ollama path. |
 | Layered protection | Network, filesystem, process, and inference controls that can be hot-reloaded or locked at creation. |
 
 ## Challenge
diff --git a/.agents/skills/nemoclaw-overview/references/overview.md b/.agents/skills/nemoclaw-overview/references/overview.md
index 0d9a7c36c..cedf21fd0 100644
--- a/.agents/skills/nemoclaw-overview/references/overview.md
+++ b/.agents/skills/nemoclaw-overview/references/overview.md
@@ -23,7 +23,7 @@ NemoClaw provides the following capabilities on top of the OpenShell runtime.
 | Hardened blueprint | A security-first Dockerfile with capability drops, least-privilege network rules, and declarative policy. |
 | State management | Safe migration of agent state across machines with credential stripping and integrity verification. |
 | Messaging bridges | Host-side processes that connect Telegram, Discord, and Slack to the sandboxed agent. |
-| Routed inference | Provider-routed model calls through the OpenShell gateway, transparent to the agent. Supports NVIDIA Endpoints, OpenAI, Anthropic, Google Gemini, and local Ollama. |
+| Routed inference | Provider-routed model calls through the OpenShell gateway, transparent to the agent. Supports NVIDIA Endpoints, OpenAI, Anthropic, Google Gemini, compatible OpenAI or Anthropic endpoints, and the caveated Local Ollama path. |
 | Layered protection | Network, filesystem, process, and inference controls that can be hot-reloaded or locked at creation. |
 
 ## Challenge
diff --git a/.agents/skills/nemoclaw-reference/references/commands.md b/.agents/skills/nemoclaw-reference/references/commands.md
index 15dbe9068..830fec08f 100644
--- a/.agents/skills/nemoclaw-reference/references/commands.md
+++ b/.agents/skills/nemoclaw-reference/references/commands.md
@@ -45,7 +45,8 @@ $ nemoclaw onboard
 ```
 
 The wizard prompts for a provider first, then collects the provider credential if needed.
-Supported non-experimental choices include NVIDIA Endpoints, OpenAI, Anthropic, Google Gemini, and compatible OpenAI or Anthropic endpoints.
+Supported provider choices include NVIDIA Endpoints, OpenAI, Anthropic, Google Gemini, and compatible OpenAI or Anthropic endpoints.
+Local Ollama is also available in the standard onboarding flow as a caveated provider path.
 Credentials are stored in `~/.nemoclaw/credentials.json`.
 The legacy `nemoclaw setup` command is deprecated; use `nemoclaw onboard` instead.
 
diff --git a/.agents/skills/nemoclaw-reference/references/inference-profiles.md b/.agents/skills/nemoclaw-reference/references/inference-profiles.md
index ba8a99471..cc4df281b 100644
--- a/.agents/skills/nemoclaw-reference/references/inference-profiles.md
+++ b/.agents/skills/nemoclaw-reference/references/inference-profiles.md
@@ -16,18 +16,21 @@ At onboard time, NemoClaw configures:
 
 That means the sandbox knows which model family to use, while OpenShell owns the actual provider credential and upstream endpoint.
 
-## Supported Providers
-
-The following non-experimental provider paths are available through `nemoclaw onboard`.
-
-| Provider | Endpoint Type | Notes |
-|---|---|---|
-| NVIDIA Endpoints | OpenAI-compatible | Hosted models on `integrate.api.nvidia.com` |
-| OpenAI | Native OpenAI-compatible | Uses OpenAI model IDs |
-| Other OpenAI-compatible endpoint | Custom OpenAI-compatible | For compatible proxies and gateways |
-| Anthropic | Native Anthropic | Uses `anthropic-messages` |
-| Other Anthropic-compatible endpoint | Custom Anthropic-compatible | For Claude proxies and compatible gateways |
-| Google Gemini | OpenAI-compatible | Uses Google's OpenAI-compatible endpoint |
+## Provider Status
+
+The following provider paths are available through `nemoclaw onboard`.
+
+| Provider | Status | Endpoint Type | Notes |
+|---|---|---|---|
+| NVIDIA Endpoints | Supported | OpenAI-compatible | Hosted models on `integrate.api.nvidia.com` |
+| OpenAI | Supported | Native OpenAI-compatible | Uses OpenAI model IDs |
+| Other OpenAI-compatible endpoint | Supported | Custom OpenAI-compatible | For compatible proxies and gateways |
+| Anthropic | Supported | Native Anthropic | Uses `anthropic-messages` |
+| Other Anthropic-compatible endpoint | Supported | Custom Anthropic-compatible | For Claude proxies and compatible gateways |
+| Google Gemini | Supported | OpenAI-compatible | Uses Google's OpenAI-compatible endpoint |
+| Local Ollama | Caveated | Local Ollama API | Available in the standard onboarding flow when Ollama is installed or running on the host |
+| Local NVIDIA NIM | Experimental | Local OpenAI-compatible | Requires `NEMOCLAW_EXPERIMENTAL=1` and a NIM-capable GPU |
+| Local vLLM | Experimental | Local OpenAI-compatible | Requires `NEMOCLAW_EXPERIMENTAL=1` and a server already running on `localhost:8000` |
 
 ## Validation During Onboarding
 
@@ -46,7 +49,7 @@ If validation fails, the wizard does not continue to sandbox creation.
 
 ## Local Ollama
 
-Local Ollama is available in the standard onboarding flow when Ollama is installed or running on the host.
+Local Ollama is a caveated provider path available in the standard onboarding flow when Ollama is installed or running on the host.
 It uses the same routed `inference.local` pattern, but the upstream runtime runs locally instead of in the cloud.
 
 Ollama gets additional onboarding help:
@@ -58,7 +61,7 @@
 
 ## Experimental Local Providers
 
-The following local providers require `NEMOCLAW_EXPERIMENTAL=1`:
+The following local providers remain experimental and require `NEMOCLAW_EXPERIMENTAL=1`:
 
 - Local NVIDIA NIM (requires a NIM-capable GPU)
 - Local vLLM (must already be running on `localhost:8000`)
diff --git a/README.md b/README.md
index fedb3b31f..73714edcb 100644
--- a/README.md
+++ b/README.md
@@ -54,13 +54,15 @@ The sandbox image is approximately 2.4 GB compressed. During image push, the Doc
 
 #### Container Runtimes
 
-| Platform | Supported runtimes | Notes |
-|----------|--------------------|-------|
-| Linux | Docker | Primary supported path. |
-| macOS (Apple Silicon) | Colima, Docker Desktop | Install Xcode Command Line Tools (`xcode-select --install`) and start the runtime before running the installer. |
-| macOS (Intel) | Podman | Not supported yet. Depends on OpenShell support for Podman on macOS. |
-| Windows WSL | Docker Desktop (WSL backend) | Supported target path. |
-| DGX Spark | Docker | Refer to the [DGX Spark setup guide](https://github.com/NVIDIA/NemoClaw/blob/main/spark-install.md) for cgroup v2 and Docker configuration. |
+| Platform | Supported runtimes | Status | Notes |
+|----------|--------------------|--------|-------|
+| Linux | Docker | Supported | Primary supported path. |
+| macOS (Apple Silicon) | Colima, Docker Desktop | Caveated | Install Xcode Command Line Tools (`xcode-select --install`) and start the runtime before running the installer. |
+| macOS (Intel) | Podman | Caveated | Depends on future OpenShell support for Podman on macOS. |
+| Windows WSL2 | Docker Desktop (WSL backend) | Out of scope | The CLI may run in WSL2, but WSL2 is not part of the supported NemoClaw target matrix. |
+| Windows native | N/A | Out of scope | Native Windows hosts are not part of the supported NemoClaw target matrix. |
+| Jetson | N/A | Out of scope | Jetson hosts are not part of the supported NemoClaw target matrix. |
+| DGX Spark | Docker | Supported | Refer to the [DGX Spark setup guide](docs/reference/commands.md#nemoclaw-setup-spark) for cgroup v2 and Docker configuration. |
 
 ### Install NemoClaw and Onboard OpenClaw Agent
 
diff --git a/docs/about/overview.md b/docs/about/overview.md
index 6f4f253d4..cafe60c21 100644
--- a/docs/about/overview.md
+++ b/docs/about/overview.md
@@ -45,7 +45,7 @@ NemoClaw provides the following capabilities on top of the OpenShell runtime.
 | Hardened blueprint | A security-first Dockerfile with capability drops, least-privilege network rules, and declarative policy. |
 | State management | Safe migration of agent state across machines with credential stripping and integrity verification. |
 | Messaging bridges | Host-side processes that connect Telegram, Discord, and Slack to the sandboxed agent. |
-| Routed inference | Provider-routed model calls through the OpenShell gateway, transparent to the agent. Supports NVIDIA Endpoints, OpenAI, Anthropic, Google Gemini, and local Ollama. |
+| Routed inference | Provider-routed model calls through the OpenShell gateway, transparent to the agent. Supports NVIDIA Endpoints, OpenAI, Anthropic, Google Gemini, compatible OpenAI or Anthropic endpoints, and the caveated Local Ollama path. |
 | Layered protection | Network, filesystem, process, and inference controls that can be hot-reloaded or locked at creation. |
 
 ## Challenge
diff --git a/docs/get-started/quickstart.md b/docs/get-started/quickstart.md
index 182120b73..44319be08 100644
--- a/docs/get-started/quickstart.md
+++ b/docs/get-started/quickstart.md
@@ -57,13 +57,15 @@ The sandbox image is approximately 2.4 GB compressed. During image push, the Doc
 
 ### Container Runtimes
 
-| Platform | Supported runtimes | Notes |
-|----------|--------------------|-------|
-| Linux | Docker | Primary supported path. |
-| macOS (Apple Silicon) | Colima, Docker Desktop | Install Xcode Command Line Tools (`xcode-select --install`) and start the runtime before running the installer. |
-| macOS (Intel) | Podman | Not supported yet. Depends on OpenShell support for Podman on macOS. |
-| Windows WSL | Docker Desktop (WSL backend) | Supported target path. |
-| DGX Spark | Docker | Refer to the [DGX Spark setup guide](https://github.com/NVIDIA/NemoClaw/blob/main/spark-install.md) for cgroup v2 and Docker configuration. |
+| Platform | Supported runtimes | Status | Notes |
+|----------|--------------------|--------|-------|
+| Linux | Docker | Supported | Primary supported path. |
+| macOS (Apple Silicon) | Colima, Docker Desktop | Caveated | Install Xcode Command Line Tools (`xcode-select --install`) and start the runtime before running the installer. |
+| macOS (Intel) | Podman | Caveated | Depends on future OpenShell support for Podman on macOS. |
+| Windows WSL2 | Docker Desktop (WSL backend) | Out of scope | The CLI may run in WSL2, but WSL2 is not part of the supported NemoClaw target matrix. |
+| Windows native | N/A | Out of scope | Native Windows hosts are not part of the supported NemoClaw target matrix. |
+| Jetson | N/A | Out of scope | Jetson hosts are not part of the supported NemoClaw target matrix. |
+| DGX Spark | Docker | Supported | Refer to the [DGX Spark setup guide](../reference/commands.md#nemoclaw-setup-spark) for cgroup v2 and Docker configuration. |
 
 ## Install NemoClaw and Onboard OpenClaw Agent
 
diff --git a/docs/reference/commands.md b/docs/reference/commands.md
index dae816471..427222ac5 100644
--- a/docs/reference/commands.md
+++ b/docs/reference/commands.md
@@ -67,7 +67,8 @@ $ nemoclaw onboard
 ```
 
 The wizard prompts for a provider first, then collects the provider credential if needed.
-Supported non-experimental choices include NVIDIA Endpoints, OpenAI, Anthropic, Google Gemini, and compatible OpenAI or Anthropic endpoints.
+Supported provider choices include NVIDIA Endpoints, OpenAI, Anthropic, Google Gemini, and compatible OpenAI or Anthropic endpoints.
+Local Ollama is also available in the standard onboarding flow as a caveated provider path.
 Credentials are stored in `~/.nemoclaw/credentials.json`.
 The legacy `nemoclaw setup` command is deprecated; use `nemoclaw onboard` instead.
 
diff --git a/docs/reference/inference-profiles.md b/docs/reference/inference-profiles.md
index 40c1f4572..55f7856e1 100644
--- a/docs/reference/inference-profiles.md
+++ b/docs/reference/inference-profiles.md
@@ -38,18 +38,21 @@ At onboard time, NemoClaw configures:
 
 That means the sandbox knows which model family to use, while OpenShell owns the actual provider credential and upstream endpoint.
 
-## Supported Providers
-
-The following non-experimental provider paths are available through `nemoclaw onboard`.
-
-| Provider | Endpoint Type | Notes |
-|---|---|---|
-| NVIDIA Endpoints | OpenAI-compatible | Hosted models on `integrate.api.nvidia.com` |
-| OpenAI | Native OpenAI-compatible | Uses OpenAI model IDs |
-| Other OpenAI-compatible endpoint | Custom OpenAI-compatible | For compatible proxies and gateways |
-| Anthropic | Native Anthropic | Uses `anthropic-messages` |
-| Other Anthropic-compatible endpoint | Custom Anthropic-compatible | For Claude proxies and compatible gateways |
-| Google Gemini | OpenAI-compatible | Uses Google's OpenAI-compatible endpoint |
+## Provider Status
+
+The following provider paths are available through `nemoclaw onboard`.
+
+| Provider | Status | Endpoint Type | Notes |
+|---|---|---|---|
+| NVIDIA Endpoints | Supported | OpenAI-compatible | Hosted models on `integrate.api.nvidia.com` |
+| OpenAI | Supported | Native OpenAI-compatible | Uses OpenAI model IDs |
+| Other OpenAI-compatible endpoint | Supported | Custom OpenAI-compatible | For compatible proxies and gateways |
+| Anthropic | Supported | Native Anthropic | Uses `anthropic-messages` |
+| Other Anthropic-compatible endpoint | Supported | Custom Anthropic-compatible | For Claude proxies and compatible gateways |
+| Google Gemini | Supported | OpenAI-compatible | Uses Google's OpenAI-compatible endpoint |
+| Local Ollama | Caveated | Local Ollama API | Available in the standard onboarding flow when Ollama is installed or running on the host |
+| Local NVIDIA NIM | Experimental | Local OpenAI-compatible | Requires `NEMOCLAW_EXPERIMENTAL=1` and a NIM-capable GPU |
+| Local vLLM | Experimental | Local OpenAI-compatible | Requires `NEMOCLAW_EXPERIMENTAL=1` and a server already running on `localhost:8000` |
 
 ## Validation During Onboarding
 
@@ -68,7 +71,7 @@ If validation fails, the wizard does not continue to sandbox creation.
 
 ## Local Ollama
 
-Local Ollama is available in the standard onboarding flow when Ollama is installed or running on the host.
+Local Ollama is a caveated provider path available in the standard onboarding flow when Ollama is installed or running on the host.
 It uses the same routed `inference.local` pattern, but the upstream runtime runs locally instead of in the cloud.
 
 Ollama gets additional onboarding help:
@@ -80,7 +83,7 @@
 
 ## Experimental Local Providers
 
-The following local providers require `NEMOCLAW_EXPERIMENTAL=1`:
+The following local providers remain experimental and require `NEMOCLAW_EXPERIMENTAL=1`:
 
 - Local NVIDIA NIM (requires a NIM-capable GPU)
 - Local vLLM (must already be running on `localhost:8000`)