OpenAI president says AI is now writing 80% of the company’s code



Greg Brockman’s comments at Sequoia’s AI Ascent 2026 conference fit a pattern of AI lab leaders citing self-reinforcing productivity numbers, but the underlying evidence on AI coding productivity remains substantially more contested than the headline figure suggests.


OpenAI president Greg Brockman said AI is now writing roughly 80% of the company’s code at Sequoia Capital’s AI Ascent 2026 conference on Thursday, according to Business Insider.

“It’s hard to know what percent is not being written by AI,” Brockman said, echoing a comment he made on the Knowledge Project podcast in late April. The remarks are part of a broader argument Brockman has been making across multiple interviews this month: that AI coding capabilities have crossed a productivity threshold, that AGI is “70-80% there” by his personal definition, and that compute scarcity is now the binding constraint on what AI labs can deliver.

The 80% figure is striking but ambiguous, and the two most plausible interpretations differ sharply. The first is that AI tools write 80% of the lines of code committed to OpenAI’s codebase: a productivity claim.

The second is that AI is involved in some way (autocomplete, refactoring suggestions, generation followed by human revision) in 80% of the coding work: a usage claim. Brockman’s qualifier, “it’s hard to know what percent is not”, aligns more closely with the second interpretation, and the gap between the two is large enough to materially alter what the figure means.

The pattern across AI lab leadership

Brockman is not alone in citing high AI-coding figures. Anthropic CEO Dario Amodei said publicly last year that AI was writing 90% of code at Anthropic, with a target of 100% within months.

Cursor reached $2 billion in annualised revenue within three years on the strength of AI-assisted coding workflows; GitHub Copilot has 4.7 million paid subscribers and 90% adoption among the Fortune 100; and Anthropic’s $30 billion run-rate revenue is, by the company’s own description, overwhelmingly concentrated in coding, enterprise search, and general productivity.

The pattern is consistent: the labs producing the underlying models are reporting that those models are transformative for software engineering.

The deeper context is one Brockman articulated more clearly in his early-April Big Technology podcast interview. He described a “December 2025 inflection” in which models went from being able to do roughly 20% of typical engineering tasks to roughly 80%, a shift he characterised as “you absolutely need to retool your workflow around these AIs.”

He cited an OpenAI engineer who had previously been unable to get AI to handle low-level systems engineering and now hands the model a design document and watches it implement, instrument, and profile the resulting system to production quality.

There is, however, a significant body of work questioning whether internal AI-coding productivity numbers should be taken at face value. A February 2026 paper from the National Bureau of Economic Research found that 80% of companies actively using AI reported no measurable impact on productivity.

A widely cited 2025 MIT study concluded that 95% of corporate AI pilot programmes generated zero return on investment. Machine learning engineer Han-Chung Lee has argued in a widely circulated GitHub post that even rosy internal AI productivity numbers should be treated with skepticism, because they are typically produced to hit adoption targets that no one can independently audit.

The independent academic critique has been sharpest from cognitive scientist Gary Marcus, who has called the broader AGI claims “a trillion-dollar delusion.” “We as a society are placing truly massive bets around the premise that AGI is close,” Marcus said in a recent keynote at the Royal Society in London. “Large language models are deeply flawed imitators that are preying on the Eliza effect.”

Marcus’ specific point about coding is structurally important: a model that produces code which compiles and passes the tests it was given is not the same as a model that produces correct, secure, maintainable, well-architected software. The first is verifiable in seconds; the second requires the kind of judgement that has been the historical bottleneck on engineering productivity.

Brockman acknowledges the gap, even as he argues it is closing. “The technology we have right now is very jagged,” he said in the Big Technology interview. “It is absolutely superhuman at many tasks. When it comes to writing code, those kinds of things, the AI can just do it. But there’s some very basic tasks that a human can do that our AI still struggles with.”

Two things make Brockman’s 80% figure particularly worth examining at this moment. The first is the financial scale of OpenAI’s current capital deployment. The company raised $122 billion in 2026 and is targeting an IPO at potentially $1 trillion. Brockman has been explicit that the central question for OpenAI is no longer model capability but compute scarcity.

Compute is now “a revenue centre, not a cost centre,” he has said, and OpenAI is committing essentially all available capital to it. That capital deployment is being justified, in significant part, by exactly the kind of productivity claims he is making about AI coding.

The second is the labour market context. Tech companies have laid off thousands of engineers over the past two years, with management increasingly citing AI-driven productivity gains as the rationale. If AI is genuinely doing 80% of the coding at companies like OpenAI and Anthropic, the labour market consequences are substantial.

If the figure reflects a less robust reality (AI involved in some stage of most coding workflows, but not actually replacing 80% of engineering effort), then the layoffs may be running ahead of the actual productivity gains, and the long-term human cost of that gap may be considerable.

There is one additional layer to Brockman’s framing worth noting: he himself, by his own description and in TIME’s 100 Most Influential People in AI profile, spends approximately 80% of his working time coding, between 60 and 100 hours per week.

The man making the claim that AI now writes 80% of the company’s code is also, by reputation, the company’s most prolific human coder. Whether that makes him the most credible witness to the productivity shift or the most invested in believing in it depends on which framing of the figure one accepts.
