A single command now installs security, privacy guardrails, and local AI models on the world’s fastest-growing open-source agent platform.
OpenClaw launched on 25 January 2026. Austrian developer Peter Steinberger says he built the first version in roughly an hour. Within weeks it had become one of the fastest-growing open-source repositories in GitHub history, an AI agent that anyone could run locally, capable of organising files, writing code, and browsing the web without routing data through a cloud. That kind of unchaperoned access was, for enterprise IT teams, both the point and the problem.
Nvidia’s answer arrived on Monday at its annual GTC developer conference in San Jose. The company announced NemoClaw, a stack that installs onto OpenClaw in a single command, adding the privacy and security infrastructure that enterprises need before they can trust an autonomous agent with production data.
The core component is OpenShell, a new open-source runtime that sandboxes agents at the process level. It enforces policy-based controls on file access, network connections, and data handling, so an agent can be productive without being given the run of the house.
Policies are written in YAML, which means a development team can, for example, permit a sandbox to connect to a specific cloud AI tool while blocking everything else on the network. OpenShell ships as part of Nvidia’s Agent Toolkit, a broader collection of open models, runtimes, and blueprints for building long-running autonomous agents.
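Nvidia has not published a sample policy, but a rule of the shape described above might look something like this hypothetical sketch (every key and value here is illustrative, not OpenShell's actual schema):

```yaml
# Hypothetical OpenShell-style policy (illustrative only, not the official schema).
# Grants the sandbox access to one workspace directory and one cloud AI endpoint;
# everything not explicitly listed is denied.
sandbox:
  name: coding-agent
  filesystem:
    allow:
      - path: /workspace
        mode: read-write
    deny-default: true        # any path not listed above is inaccessible
  network:
    allow:
      - host: api.example-ai.com   # placeholder for a specific cloud AI tool
        port: 443
        protocol: https
    deny-default: true        # all other outbound connections are blocked
```

The deny-by-default pattern is the point: the agent stays productive inside an explicit allowlist rather than being trusted with the run of the machine.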
NemoClaw also installs Nvidia’s Nemotron open models locally on whatever dedicated hardware is available: GeForce RTX PCs and laptops, RTX PRO workstations, DGX Station, or DGX Spark. A privacy router then allows agents to reach cloud-based frontier models when needed, while keeping the guardrails in place.
The combination is designed to let agents develop and learn new skills without ever stepping outside defined boundaries.
“OpenClaw opened the next frontier of AI to everyone and became the fastest-growing open source project in history,” Jensen Huang, Nvidia’s founder and CEO, said onstage. “Mac and Windows are the operating systems for the personal computer. OpenClaw is the operating system for personal AI. This is the moment the industry has been waiting for, the beginning of a new renaissance in software.”
Huang described the arrival of OpenClaw in terms that echo his usual framing for transformative open-source moments (Linux, Kubernetes, HTML), and said the question he would now put to every chief executive is: what is your OpenClaw strategy?
Steinberger, who joined OpenAI in February but retains involvement with the project, is quoted in the launch announcement. “OpenClaw brings people closer to AI and helps create a world where everyone has their own agents,” he said. “With Nvidia and the broader ecosystem, we’re building the claws and guardrails that let anyone create powerful, secure AI assistants.”
NemoClaw is not model-exclusive. It can run any coding agent and work with models from providers including OpenAI and Anthropic alongside Nvidia’s own Nemotron family, which runs locally for those who want to avoid cloud exposure entirely.
Kari Briski, Nvidia’s VP of generative AI software, told a press conference ahead of the announcement that OpenShell provides “the missing infrastructure layer beneath claws to give them the access they need to be productive, while enforcing policy-based security, network, and privacy guardrails.”
The security layer matters because OpenClaw’s earlier iterations had well-documented vulnerabilities, in particular around prompt injection and unconstrained file access. Most of those have been patched, but no software fix can resolve the structural tension between an autonomous agent that needs broad access to be useful and an enterprise that cannot afford to let it roam freely. OpenShell addresses that tension at the infrastructure level rather than the application level.
Nvidia is working with Cisco, CrowdStrike, Google, and Microsoft Security to bring OpenShell compatibility to their respective security tools, which would embed the guardrails into the broader enterprise security stack. The DGX Station, Nvidia’s higher-end desktop AI supercomputer for running frontier-class models locally, opened for orders on the same day as the NemoClaw announcement.
Analysts from Futurum Research noted that NemoClaw and OpenShell address the deployment end of the agent trust chain well, but urged enterprises not to treat them as a complete governance solution. Security and accountability, they argued, need to be embedded throughout the development lifecycle, not just at the runtime layer.
Nvidia’s Agent Toolkit also ships with AI-Q, a reference blueprint for how agents should decompose and route tasks, a detail that suggests the company is aware of the wider problem, even if NemoClaw is currently the headline product.
NemoClaw is currently available as an early-access preview. Nvidia describes it as alpha-stage and warns developers to expect rough edges; the stated goal is production-ready sandbox orchestration, but the company is explicit that the starting point is getting environments up and running.
A build-a-claw event at GTC Park ran from 16 to 19 March, giving conference attendees the chance to deploy a live NemoClaw assistant on the day of the announcement.
The speed of OpenClaw’s ascent, from a one-hour side project to the infrastructure layer of enterprise AI in less than two months, is a reminder of how quickly the ground is moving. Nvidia, which has spent the past three years positioning itself as the essential hardware layer beneath every AI workload, is now making a similar argument about software.
Whether enterprises will hand their agents to an Nvidia stack as readily as they handed their training jobs to an Nvidia GPU is a question that OpenShell, for all its YAML policies, cannot answer on its own.