TL;DR
Recursive Superintelligence, a startup founded by former leaders from Meta AI, Google DeepMind, OpenAI, and Salesforce AI, has emerged from stealth with $650 million in funding at a $4.65 billion valuation. Led by Richard Socher and co-founded by ex-Meta FAIR director Yuandong Tian, the company is pursuing recursive self-improvement: AI systems that autonomously improve themselves in an accelerating loop. GV, Greycroft, Nvidia, and AMD backed the round. The startup has fewer than 30 employees and no released product.
The idea that an AI system could improve itself, then use those improvements to improve itself again, faster, in an accelerating loop that eventually outpaces every human researcher on earth, has been a fixture of computer science folklore since at least the 1960s. For most of that time, it remained comfortably theoretical. Now someone has raised $650 million to build it.
Recursive Superintelligence, a startup founded by former leaders from Meta AI, Google DeepMind, OpenAI, Salesforce AI, and Uber AI, emerged from stealth on 13 May with a $4.65 billion valuation and a thesis that would have sounded like science fiction two years ago but now sits squarely within the Overton window of Silicon Valley ambition. The company’s stated mission: build AI systems that can autonomously discover knowledge, continuously optimise themselves, and evolve in an open-ended loop, much like biological evolution, but without the inconvenience of waiting millions of years.
The team behind the loop
The round was led by GV, Alphabet’s venture capital arm, and Greycroft, with participation from Nvidia and AMD, the two chipmakers whose hardware underpins virtually all frontier AI training. The involvement of both companies is notable: strategic investment from the firms that sell the picks and shovels suggests they see recursive self-improvement not as a theoretical curiosity but as a near-term compute customer.
The founding team is built to signal credibility. Richard Socher, the former chief scientist at Salesforce and founder of the AI search engine You.com, leads the company alongside seven co-founders: Yuandong Tian, formerly a research scientist director at Meta’s Fundamental AI Research lab (FAIR), where he led work on reinforcement learning, LLM reasoning, and AI-guided optimisation; Tim Rocktäschel, a professor of AI at University College London and former principal scientist at Google DeepMind; Alexey Dosovitskiy, one of the authors of the Vision Transformer (ViT), the 2020 paper that reshaped computer vision research; Josh Tobin, formerly of OpenAI; Caiming Xiong; Tim Shi; and Jeff Clune. Peter Norvig, co-author of Artificial Intelligence: A Modern Approach, the standard university textbook in the field, serves as an adviser.
Yuandong Tian’s involvement is particularly striking. A graduate of Shanghai Jiao Tong University who went on to earn a PhD in robotics from Carnegie Mellon, Tian spent over a decade at Meta FAIR, where his work spanned some of the most consequential problems in modern AI research. He led the DarkForest Go project, a CNN-based Go AI developed before DeepMind’s AlphaGo captured global attention, and later became lead scientist on ELF OpenGo. His departure from Meta and immediate entry into a startup pursuing the most ambitious goal in the field is itself a signal: the talent that built the current generation of AI systems is now betting that the next generation can build itself.
What recursive self-improvement actually means
The concept is deceptively simple. Instead of human researchers designing each new generation of AI, an AI system would automate parts of its own research and development process, generating improvements that in turn make it better at generating improvements. A company that achieves this first would, in theory, be able to extend its lead over competitors exponentially, because its development velocity would be compounding rather than linear.
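The compounding-versus-linear distinction is easy to make concrete with a toy model. The sketch below is purely illustrative, with made-up numbers; it is not a description of the company's actual approach, only of why compounding development velocity matters:

```python
# Toy model contrasting linear and compounding research progress.
# All rates and starting values here are illustrative assumptions,
# not claims about any real AI system.

def linear_progress(cycles, gain=1.0):
    """Human-driven R&D: each cycle adds a fixed increment of capability."""
    capability = 1.0
    for _ in range(cycles):
        capability += gain
    return capability

def recursive_progress(cycles, rate=0.5):
    """Recursive self-improvement: each cycle's gain scales with current
    capability, so a better system takes a bigger next step."""
    capability = 1.0
    for _ in range(cycles):
        capability += rate * capability  # equivalent to capability *= 1.5
    return capability

for n in (5, 10, 20):
    print(n, linear_progress(n), recursive_progress(n))
```

Under these assumptions, linear progress after 20 cycles is 21x the starting point, while the compounding loop reaches roughly 3,300x: the gap between the two regimes, not the absolute numbers, is the point.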
Recursive Superintelligence has outlined a staged roadmap. The first step, according to company materials, is to train a system with the capabilities of “50,000 doctors” to automate AI scientific research itself. From there, the company plans to run what it calls a “Level 1” autonomous training system, with a public launch targeted for mid-2026. The funding will be used in part to secure the large-scale compute infrastructure required to run these experiments.
The company currently operates from offices in San Francisco and London, with a team that has expanded beyond 25 researchers and engineers. The round was described as heavily oversubscribed.
The race is already on
Recursive Superintelligence is not pursuing this thesis in isolation. The largest AI laboratories are already using their own models to accelerate research. Anthropic has said that the majority of its code is now written by Claude. OpenAI has reported that GPT-5.5 developed a parallelisation method that boosted token generation speeds by more than 20%. Google DeepMind has built AlphaEvolve, a coding agent designed for scientific and algorithmic discovery. Google co-founder Sergey Brin has reportedly described coding gains as a path to “AI takeoff” internally.
What distinguishes Recursive Superintelligence from these efforts is that none of the major laboratories has organised an entire company around recursive self-improvement as its core commercial thesis. OpenAI, Anthropic, and Google DeepMind all use AI to assist their research workflows, but their businesses are built around selling models and API access. Recursive is betting that the self-improvement loop itself is the product.
Whether that bet pays off depends on a question that remains genuinely open: whether recursive self-improvement produces the kind of runaway acceleration its proponents describe, or whether it converges on diminishing returns as each cycle of improvement yields smaller gains. Anthropic co-founder Jack Clark has estimated a roughly 60% probability that a system capable of training a more powerful successor on its own, without human involvement, will exist by the end of 2028, and a 30% chance by 2027.
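The diminishing-returns scenario can also be made precise with a toy model. In the sketch below, each improvement cycle multiplies capability by (1 + g), but the gain g shrinks geometrically; the decay rate and initial gain are illustrative assumptions. Because the sum of a geometric series is finite when the ratio is below one, capability then plateaus at a hard ceiling no matter how many cycles run:

```python
import math

def capability_after(cycles, g0=0.5, decay=0.6):
    """Each cycle multiplies capability by (1 + g_k), where the
    per-cycle gain decays geometrically: g_k = g0 * decay**k.
    With decay < 1, sum(g_k) converges, so capability plateaus."""
    cap = 1.0
    for k in range(cycles):
        cap *= 1.0 + g0 * decay**k
    return cap

# Since log(1 + g) <= g, log(capability) <= sum(g_k) <= g0 / (1 - decay),
# giving an upper bound that holds for any number of cycles.
ceiling = math.exp(0.5 / (1 - 0.6))
print(capability_after(10), capability_after(100), ceiling)
```

With these numbers the system stalls below about 3.5x its starting capability; runaway takeoff requires each cycle's gains not to shrink too quickly, which is exactly the open empirical question.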
For now, what is certain is the price the market has placed on the possibility. Recursive Superintelligence is four months old, has fewer than 30 employees, and has not released a product. It is valued at $4.65 billion. In the current AI investment climate, the promise of a machine that can improve itself is apparently worth more than many companies that have already built one.