Commit e3c6deb

Deploy preview for PR 495 🛫
1 parent 89dfe4c commit e3c6deb

23 files changed: +23 −23 lines changed
Lines changed: 1 addition & 1 deletion
@@ -1 +1 @@
1-
{"id":"about/architecture","title":"How OpenShell Works","url":"about/architecture.html","last_modified":"2026-03-20T08:38:21.076863+00:00","book":{"title":"NVIDIA OpenShell Developer Guide","version":"latest"},"product":{"name":"OpenShell","version":"latest"},"description":"OpenShell architecture overview covering the gateway, sandbox, policy engine, and privacy router.","tags":["AI Agents","Sandboxing","Security","Architecture"],"topics":["Generative AI","Cybersecurity"],"content_type":"concept","learning_level":"technical_beginner","audience":["engineer","data_scientist"],"content":"How OpenShell Works OpenShell runs inside a Docker container. Each sandbox is an isolated environment managed through the gateway. Four components work together to keep agents secure. Components The following table describes each component and its role in the system: Component Role Gateway Control-plane API that coordinates sandbox lifecycle and state, acts as the auth boundary, and brokers requests across the platform. Sandbox Isolated runtime that includes container supervision and policy-enforced egress routing. Policy Engine Policy definition and enforcement layer for filesystem, network, and process constraints. Defense in depth enforces policies from the application layer down to infrastructure and kernel layers. Privacy Router Privacy-aware LLM routing layer that keeps sensitive context on sandbox compute and routes based on cost and privacy policy. How a Request Flows Every outbound connection from agent code passes through the same decision path: The agent process opens an outbound connection (API call, package install, git clone, and so on). The proxy inside the sandbox intercepts the connection and identifies which binary opened it. If the target is https://inference.local , the proxy handles it as managed inference before policy evaluation. 
OpenShell strips sandbox-supplied credentials, injects the configured backend credentials, and forwards the request to the managed model endpoint. For every other destination, the proxy queries the policy engine with the destination, port, and calling binary. The policy engine returns one of two decisions: Allow - the destination and binary match a policy block. Traffic flows directly to the external service. Deny - no policy block matched. The connection is blocked and logged. For REST endpoints with TLS termination enabled, the proxy also decrypts TLS and checks each HTTP request against per-method, per-path rules before allowing it through. Deployment Modes OpenShell can run locally, on a remote host, or behind a cloud proxy. The architecture is identical in all cases — only the Docker container location and authentication mode change. Mode Description Command Local The gateway runs inside Docker on your workstation. The CLI provisions it automatically on first use. openshell gateway start Remote The gateway runs on a remote host via SSH. Only Docker is required on the remote machine. openshell gateway start --remote user@host Cloud A gateway already running behind a reverse proxy (e.g. Cloudflare Access). Register and authenticate via browser. openshell gateway add https://gateway.example.com You can register multiple gateways and switch between them with openshell gateway select . For the full deployment and management workflow, refer to the Gateways section. Next Steps Continue with one of the following: To deploy or register a gateway, refer to Gateways . To create your first sandbox, refer to the Quickstart . To learn how OpenShell enforces isolation across all protection layers, refer to Sandboxes","format":"text","summary":"OpenShell runs inside a Docker container. Each sandbox is an isolated environment managed through the gateway. 
Four components work together to keep agents secure.","headings":[{"text":"How OpenShell Works","level":1},{"text":"Components","level":2},{"text":"How a Request Flows","level":2},{"text":"Deployment Modes","level":2},{"text":"Next Steps","level":2}]}
1+
{"id":"about/architecture","title":"How OpenShell Works","url":"about/architecture.html","last_modified":"2026-03-20T08:50:37.808356+00:00","book":{"title":"NVIDIA OpenShell Developer Guide","version":"latest"},"product":{"name":"OpenShell","version":"latest"},"description":"OpenShell architecture overview covering the gateway, sandbox, policy engine, and privacy router.","tags":["AI Agents","Sandboxing","Security","Architecture"],"topics":["Generative AI","Cybersecurity"],"content_type":"concept","learning_level":"technical_beginner","audience":["engineer","data_scientist"],"content":"How OpenShell Works OpenShell runs inside a Docker container. Each sandbox is an isolated environment managed through the gateway. Four components work together to keep agents secure. Components The following table describes each component and its role in the system: Component Role Gateway Control-plane API that coordinates sandbox lifecycle and state, acts as the auth boundary, and brokers requests across the platform. Sandbox Isolated runtime that includes container supervision and policy-enforced egress routing. Policy Engine Policy definition and enforcement layer for filesystem, network, and process constraints. Defense in depth enforces policies from the application layer down to infrastructure and kernel layers. Privacy Router Privacy-aware LLM routing layer that keeps sensitive context on sandbox compute and routes based on cost and privacy policy. How a Request Flows Every outbound connection from agent code passes through the same decision path: The agent process opens an outbound connection (API call, package install, git clone, and so on). The proxy inside the sandbox intercepts the connection and identifies which binary opened it. If the target is https://inference.local , the proxy handles it as managed inference before policy evaluation. 
OpenShell strips sandbox-supplied credentials, injects the configured backend credentials, and forwards the request to the managed model endpoint. For every other destination, the proxy queries the policy engine with the destination, port, and calling binary. The policy engine returns one of two decisions: Allow - the destination and binary match a policy block. Traffic flows directly to the external service. Deny - no policy block matched. The connection is blocked and logged. For REST endpoints with TLS termination enabled, the proxy also decrypts TLS and checks each HTTP request against per-method, per-path rules before allowing it through. Deployment Modes OpenShell can run locally, on a remote host, or behind a cloud proxy. The architecture is identical in all cases — only the Docker container location and authentication mode change. Mode Description Command Local The gateway runs inside Docker on your workstation. The CLI provisions it automatically on first use. openshell gateway start Remote The gateway runs on a remote host via SSH. Only Docker is required on the remote machine. openshell gateway start --remote user@host Cloud A gateway already running behind a reverse proxy (e.g. Cloudflare Access). Register and authenticate via browser. openshell gateway add https://gateway.example.com You can register multiple gateways and switch between them with openshell gateway select . For the full deployment and management workflow, refer to the Gateways section. Next Steps Continue with one of the following: To deploy or register a gateway, refer to Gateways . To create your first sandbox, refer to the Quickstart . To learn how OpenShell enforces isolation across all protection layers, refer to Sandboxes","format":"text","summary":"OpenShell runs inside a Docker container. Each sandbox is an isolated environment managed through the gateway. 
Four components work together to keep agents secure.","headings":[{"text":"How OpenShell Works","level":1},{"text":"Components","level":2},{"text":"How a Request Flows","level":2},{"text":"Deployment Modes","level":2},{"text":"Next Steps","level":2}]}
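The request flow described in the record above (proxy intercepts the connection, short-circuits `https://inference.local` to managed inference, then asks the policy engine to match destination, port, and calling binary against policy blocks) can be sketched as follows. This is an illustrative sketch only: the names `PolicyBlock` and `decide` are hypothetical and do not reflect the real OpenShell API.

```python
# Hypothetical sketch of the egress decision path; not the OpenShell implementation.
from dataclasses import dataclass

@dataclass
class PolicyBlock:
    destination: str   # hostname this block allows
    port: int          # allowed port
    binary: str        # calling binary this block applies to

def decide(destination: str, port: int, binary: str,
           policy: list[PolicyBlock]) -> str:
    # Managed inference is handled before policy evaluation.
    if destination == "inference.local":
        return "managed-inference"
    # Allow only when destination, port, and calling binary all match a block.
    for block in policy:
        if (block.destination == destination
                and block.port == port
                and block.binary == binary):
            return "allow"
    # No block matched: the connection is blocked and logged.
    return "deny"

policy = [PolicyBlock("pypi.org", 443, "pip")]
print(decide("pypi.org", 443, "pip", policy))      # allow
print(decide("example.com", 443, "curl", policy))  # deny
```

Note the default is deny: any destination not explicitly matched by a policy block is refused, which is the behavior the doc attributes to the policy engine.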
Lines changed: 1 addition & 1 deletion
@@ -1 +1 @@
1-
{"id":"about/overview","title":"Overview of NVIDIA OpenShell","url":"about/overview.html","last_modified":"2026-03-20T08:38:21.082460+00:00","book":{"title":"NVIDIA OpenShell Developer Guide","version":"latest"},"product":{"name":"OpenShell","version":"latest"},"description":"OpenShell is the safe, private runtime for autonomous AI agents. Run agents in sandboxed environments that protect your data, credentials, and infrastructure.","tags":["AI Agents","Sandboxing","Security","Privacy","Inference Routing"],"topics":["Generative AI","Cybersecurity"],"content_type":"concept","learning_level":"technical_beginner","audience":["engineer","data_scientist"],"content":"Overview of NVIDIA OpenShell NVIDIA OpenShell is an open-source runtime for executing autonomous AI agents in sandboxed environments with kernel-level isolation. It combines sandbox runtime controls and a declarative YAML policy so teams can run agents without giving them unrestricted access to local files, credentials, and external networks. Why OpenShell Exists AI agents are most useful when they can read files, install packages, call APIs, and use credentials. That same access can create material risk. OpenShell is designed for this tradeoff: preserve agent capability while enforcing explicit controls over what the agent can access. Common Risks and Controls The table below summarizes common failure modes and how OpenShell mitigates them. Threat Without controls With OpenShell Data exfiltration Agent uploads source code or internal files to unauthorized endpoints. Network policies allow only approved destinations; other outbound traffic is denied. Credential theft Agent reads local secrets such as SSH keys or cloud credentials. Filesystem restrictions (Landlock) confine access to declared paths only. Unauthorized API usage Agent sends prompts or data to unapproved model providers. Privacy routing and network policies control where inference traffic can go. 
Privilege escalation Agent attempts sudo , setuid paths, or dangerous syscall behavior. Unprivileged process identity and seccomp restrictions block escalation paths. Protection Layers at a Glance OpenShell applies defense in depth across the following policy domains. Layer What it protects When it applies Filesystem Prevents reads/writes outside allowed paths. Locked at sandbox creation. Network Blocks unauthorized outbound connections. Hot-reloadable at runtime. Process Blocks privilege escalation and dangerous syscalls. Locked at sandbox creation. Inference Reroutes model API calls to controlled backends. Hot-reloadable at runtime. For details, refer to Sandbox Policies and Customize Sandbox Policies . Common Use Cases OpenShell supports a range of agent deployment patterns. Use Case Description Secure coding agents Run Claude Code, OpenCode, or OpenClaw with constrained file and network access. Private enterprise development Route inference to self-hosted or private backends while keeping sensitive context under your control. Compliance and audit Treat policy YAML as version-controlled security controls that can be reviewed and audited. Reusable environments Use community sandbox images or bring your own containerized runtime. Next Steps Explore these topics to go deeper: To understand the components that make up the OpenShell runtime, refer to the Architecture Overview . To install the CLI and create your first sandbox, refer to the Quickstart . To learn how OpenShell enforces isolation across all protection layers, refer to Sandboxes","format":"text","summary":"NVIDIA OpenShell is an open-source runtime for executing autonomous AI agents in sandboxed environments with kernel-level isolation. 
It combines sandbox runtime controls and a declarative YAML policy so teams can run agents without giving them unrestricted access to local files, credentials, and ...","headings":[{"text":"Overview of NVIDIA OpenShell","level":1},{"text":"Why OpenShell Exists","level":2},{"text":"Common Risks and Controls","level":2},{"text":"Protection Layers at a Glance","level":2},{"text":"Common Use Cases","level":2},{"text":"Next Steps","level":2}]}
1+
{"id":"about/overview","title":"Overview of NVIDIA OpenShell","url":"about/overview.html","last_modified":"2026-03-20T08:50:37.814396+00:00","book":{"title":"NVIDIA OpenShell Developer Guide","version":"latest"},"product":{"name":"OpenShell","version":"latest"},"description":"OpenShell is the safe, private runtime for autonomous AI agents. Run agents in sandboxed environments that protect your data, credentials, and infrastructure.","tags":["AI Agents","Sandboxing","Security","Privacy","Inference Routing"],"topics":["Generative AI","Cybersecurity"],"content_type":"concept","learning_level":"technical_beginner","audience":["engineer","data_scientist"],"content":"Overview of NVIDIA OpenShell NVIDIA OpenShell is an open-source runtime for executing autonomous AI agents in sandboxed environments with kernel-level isolation. It combines sandbox runtime controls and a declarative YAML policy so teams can run agents without giving them unrestricted access to local files, credentials, and external networks. Why OpenShell Exists AI agents are most useful when they can read files, install packages, call APIs, and use credentials. That same access can create material risk. OpenShell is designed for this tradeoff: preserve agent capability while enforcing explicit controls over what the agent can access. Common Risks and Controls The table below summarizes common failure modes and how OpenShell mitigates them. Threat Without controls With OpenShell Data exfiltration Agent uploads source code or internal files to unauthorized endpoints. Network policies allow only approved destinations; other outbound traffic is denied. Credential theft Agent reads local secrets such as SSH keys or cloud credentials. Filesystem restrictions (Landlock) confine access to declared paths only. Unauthorized API usage Agent sends prompts or data to unapproved model providers. Privacy routing and network policies control where inference traffic can go. 
Privilege escalation Agent attempts sudo , setuid paths, or dangerous syscall behavior. Unprivileged process identity and seccomp restrictions block escalation paths. Protection Layers at a Glance OpenShell applies defense in depth across the following policy domains. Layer What it protects When it applies Filesystem Prevents reads/writes outside allowed paths. Locked at sandbox creation. Network Blocks unauthorized outbound connections. Hot-reloadable at runtime. Process Blocks privilege escalation and dangerous syscalls. Locked at sandbox creation. Inference Reroutes model API calls to controlled backends. Hot-reloadable at runtime. For details, refer to Sandbox Policies and Customize Sandbox Policies . Common Use Cases OpenShell supports a range of agent deployment patterns. Use Case Description Secure coding agents Run Claude Code, OpenCode, or OpenClaw with constrained file and network access. Private enterprise development Route inference to self-hosted or private backends while keeping sensitive context under your control. Compliance and audit Treat policy YAML as version-controlled security controls that can be reviewed and audited. Reusable environments Use community sandbox images or bring your own containerized runtime. Next Steps Explore these topics to go deeper: To understand the components that make up the OpenShell runtime, refer to the Architecture Overview . To install the CLI and create your first sandbox, refer to the Quickstart . To learn how OpenShell enforces isolation across all protection layers, refer to Sandboxes","format":"text","summary":"NVIDIA OpenShell is an open-source runtime for executing autonomous AI agents in sandboxed environments with kernel-level isolation. 
It combines sandbox runtime controls and a declarative YAML policy so teams can run agents without giving them unrestricted access to local files, credentials, and ...","headings":[{"text":"Overview of NVIDIA OpenShell","level":1},{"text":"Why OpenShell Exists","level":2},{"text":"Common Risks and Controls","level":2},{"text":"Protection Layers at a Glance","level":2},{"text":"Common Use Cases","level":2},{"text":"Next Steps","level":2}]}
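The overview record above says the filesystem layer (Landlock) confines access to declared paths only. The allow/deny semantics of such an allowlist can be sketched as below. This is not Landlock itself, and `ALLOWED_PATHS` and `path_allowed` are hypothetical names; the sketch only mirrors the "declared paths only" rule.

```python
# Illustrative allowlist check mirroring the filesystem layer's semantics;
# real enforcement happens in the kernel via Landlock, not in user code.
from pathlib import Path

ALLOWED_PATHS = [Path("/workspace"), Path("/tmp")]  # hypothetical declared paths

def path_allowed(target: str) -> bool:
    p = Path(target).resolve()
    # Access is confined to declared paths; anything outside them is denied.
    return any(p == root or root in p.parents for root in ALLOWED_PATHS)

print(path_allowed("/workspace/src/main.py"))  # True
print(path_allowed("/home/user/.ssh/id_rsa"))  # False
```

Because the allowlist is locked at sandbox creation (per the protection-layers table), a check like this cannot be loosened by the agent at runtime.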
Lines changed: 1 addition & 1 deletion
@@ -1 +1 @@
1-
{"id":"about/release-notes","title":"NVIDIA OpenShell Release Notes","url":"about/release-notes.html","last_modified":"2026-03-20T08:38:21.084357+00:00","book":{"title":"NVIDIA OpenShell Developer Guide","version":"latest"},"product":{"name":"OpenShell","version":"latest"},"description":"Track the latest changes and improvements to NVIDIA OpenShell.","tags":["Release Notes","Changelog","AI Agents"],"topics":["Generative AI","Cybersecurity"],"content_type":"reference","learning_level":"technical_beginner","audience":["engineer","data_scientist"],"content":"NVIDIA OpenShell Release Notes NVIDIA OpenShell follows a frequent release cadence. Use the following GitHub resources directly. Resource Description Releases Versioned release notes and downloadable assets. Release comparison Diff between any two tags or branches. Merged pull requests Individual changes with review discussion. Commit history Full commit log on main","format":"text","summary":"NVIDIA OpenShell follows a frequent release cadence. Use the following GitHub resources directly.","headings":[{"text":"NVIDIA OpenShell Release Notes","level":1}]}
1+
{"id":"about/release-notes","title":"NVIDIA OpenShell Release Notes","url":"about/release-notes.html","last_modified":"2026-03-20T08:50:37.818153+00:00","book":{"title":"NVIDIA OpenShell Developer Guide","version":"latest"},"product":{"name":"OpenShell","version":"latest"},"description":"Track the latest changes and improvements to NVIDIA OpenShell.","tags":["Release Notes","Changelog","AI Agents"],"topics":["Generative AI","Cybersecurity"],"content_type":"reference","learning_level":"technical_beginner","audience":["engineer","data_scientist"],"content":"NVIDIA OpenShell Release Notes NVIDIA OpenShell follows a frequent release cadence. Use the following GitHub resources directly. Resource Description Releases Versioned release notes and downloadable assets. Release comparison Diff between any two tags or branches. Merged pull requests Individual changes with review discussion. Commit history Full commit log on main","format":"text","summary":"NVIDIA OpenShell follows a frequent release cadence. Use the following GitHub resources directly.","headings":[{"text":"NVIDIA OpenShell Release Notes","level":1}]}
