The deal pairs one of the world’s most powerful chip companies with the AI startup founded by OpenAI’s former CTO, and the compute commitment alone runs to tens of billions of dollars.
When Mira Murati left OpenAI in September 2024, she declined to say much about what came next. What has become clear, roughly 18 months on, is that she was building something with serious ambitions, and that she has found in NVIDIA a partner prepared to back them at a scale that would have seemed extravagant even a year ago.
On March 10, 2026, NVIDIA and Thinking Machines Lab announced a multiyear strategic partnership under which Murati’s startup will deploy at least a gigawatt of NVIDIA’s next-generation Vera Rubin systems to train its models.
NVIDIA has also made what both companies describe as a “significant investment” in Thinking Machines, though neither has disclosed the figure.
According to the Financial Times, the chip supply arrangement alone is worth tens of billions of dollars. NVIDIA CEO Jensen Huang has previously said that one gigawatt of AI data centre capacity costs up to $50 billion.
Thinking Machines Lab, which Murati founded in February 2025, has raised more than $2 billion since its inception. Investors include Andreessen Horowitz, Accel, and NVIDIA, alongside, somewhat unusually, the venture arm of AMD, NVIDIA’s principal chip rival. The company has grown from roughly 30 employees a year ago to about 120 today.
A lab built on customisability
The company’s stated mission is to build AI systems that are, in its own words, “more widely understood, customizable and generally capable.” The emphasis on customisability is pointed: Murati and her team appear to be positioning Thinking Machines as something distinct from OpenAI and Anthropic, which sell relatively fixed products, by building infrastructure that companies and developers can shape to their own requirements.
The partnership with NVIDIA includes technical collaboration as well as compute supply, specifically the optimisation of Thinking Machines’ products for NVIDIA’s hardware. That kind of close integration at the chip level has historically proved valuable; it is, in rough terms, part of what allowed OpenAI to move as quickly as it did in the GPT era.
“NVIDIA’s technology is the foundation on which the entire field is built,” Murati said in a statement accompanying the announcement. “This partnership accelerates our capacity to build AI that people can shape and make their own.”
What this signals about the compute race
Thinking Machines is not the only frontier lab signing gigawatt-scale compute agreements. The broader AI industry is locked in a race to secure the infrastructure necessary to train the next generation of models, and the deals being signed now, in some cases before the hardware even exists, reflect a bet that whoever secures the most compute earliest will have a durable advantage.
For NVIDIA, the investment serves a dual purpose: it generates revenue from chip sales while also giving the company a stake in a lab it clearly views as a potential long-term customer and strategic partner. NVIDIA has made similar investments in other AI companies, building a portfolio that tracks the industry’s frontier.
Murati, for her part, turned down an acquisition offer from Meta’s Mark Zuckerberg last year. The NVIDIA partnership suggests she intends to remain independent, and that she has secured the resources to make that case credibly. Whether a 120-person lab can genuinely compete with organisations ten times its size remains to be seen. But she is no longer short of compute to try.