On the afternoon of 27 February 2026, Pete Hegseth picked up his phone and posted to X. The US Secretary of Defense had just designated Anthropic, a San Francisco AI company, a “supply chain risk to national security.”
The label, under 10 USC 3252, had previously been applied to Huawei and ZTE, Chinese firms accused of embedding surveillance backdoors into their hardware.
Now it was being used against an American company founded by former OpenAI researchers, whose crime was this: it refused to let the US military use its AI models for mass domestic surveillance of American citizens, or for fully autonomous lethal weapons.
That afternoon, hours after Anthropic was blacklisted, OpenAI CEO Sam Altman announced his company had reached its own deal with the Pentagon. His models, he wrote, would be available for all lawful purposes.
The same evening, OpenAI’s most senior hardware executive, Caitlin Kalinowski, who had spent 16 months building the company’s robotics programme, announced her resignation.
“Surveillance of Americans without judicial oversight and lethal autonomy without human authorization,” she wrote, “are lines that deserved more deliberation than they got.”
The lines, as it turned out, had not been deliberated at all. They had been drawn in a contract dispute and erased in a Friday-afternoon press release.
The story is usually told as a clash between two American companies and one American administration: a Washington power struggle with AI at its centre. That reading is not wrong. But it is incomplete.
What happened between Anthropic, OpenAI, and the Pentagon over the first three months of 2026 is also a story about democratic governance, about who gets to set the terms on which the most consequential technologies of our era are deployed, and about what happens when a government decides that the answer to that question is: whoever complies first.
The anatomy of a purge
The sequence of events is worth setting out clearly, because the pace at which they unfolded has obscured their significance. Anthropic held a $200 million Pentagon contract, awarded in July 2025, for work on classified systems.
The terms included two restrictions: Claude could not be used for mass domestic surveillance of American citizens, and it could not be used to power fully autonomous weapons with no human in the targeting loop. These were not novel demands.
They aligned with longstanding prohibitions in international humanitarian law and US constitutional protections. They were, by any reasonable measure, the kind of safeguards a democratic government should want embedded in its AI systems.
The Pentagon disagreed. It wanted, in the words of its final ultimatum, “unrestricted access to AI for all lawful purposes.” When Anthropic declined to remove its restrictions, Hegseth set a deadline: 5:01pm on 27 February. It passed without agreement. Trump, writing on Truth Social, called the company’s leadership “leftwing nut jobs” and ordered every federal agency to immediately cease use of Anthropic’s technology.
A federal judge in San Francisco, reviewing the designation, was less colourful but more precise. Judge Rita Lin wrote in her March ruling that the supply chain risk designation is “usually reserved for foreign intelligence agencies and terrorists, not for American companies,” and described the administration’s actions as “classic First Amendment retaliation.”
She issued a preliminary injunction blocking the ban.
None of this stopped a federal appeals court from later denying Anthropic’s request to suspend the Pentagon designation pending appeal, concluding that “the equitable balance here cuts in favour of the government.”
As of this writing, Anthropic is barred from Pentagon contracts, permitted to work with other agencies, and fighting two parallel lawsuits. At the same time, it is recruiting enterprise partners, launching a $100 million partner programme, and testing its new model, Mythos, with Wall Street banks at the quiet encouragement of the Treasury Secretary and the Federal Reserve chair.
The administration that blacklisted the company is also directing those banks to evaluate it for critical financial infrastructure.
The contradiction is not bureaucratic confusion. It is policy.
What OpenAI’s deal actually means
The more uncomfortable part of this story is OpenAI’s role in it. Altman has said his company shares Anthropic’s core principles: no domestic mass surveillance, no autonomous weapons. The companies’ stated red lines are, on paper, nearly identical.
The difference is that OpenAI signed, and Anthropic did not. What exactly is in OpenAI’s Pentagon agreement, and how its provisions compare to the assurances Anthropic sought, has not been made public.
Pentagon officials have said existing US law already prohibits the uses Anthropic was concerned about. Anthropic’s lawyers, and a group of 37 researchers from OpenAI and Google DeepMind who filed an amicus brief supporting the lawsuit, clearly do not share that confidence.
What we can say with reasonable certainty is this: a government that wanted to remove enforceable safety restrictions from AI models used in classified military systems found a way to do so. One company held the line and was treated as an adversary.
Another accommodated the government’s position and was treated as a partner. The market signal this sends to every AI company negotiating a public sector contract, anywhere in the world, could not be clearer.
Altman has acknowledged the deal was “definitely rushed.” OpenAI’s own employees pushed back. ChatGPT uninstalls reportedly surged 295% in the days following the announcement, while Claude climbed to the top of the US App Store.
These responses suggest that users, at least, understood something significant had shifted. The question is whether policymakers outside the United States are drawing the same conclusion.
What Europe should question
Europe has spent the better part of a decade building a regulatory framework for AI premised on a core democratic argument: that powerful technologies must be constrained by law, not merely by the good intentions of the companies that build them.
The AI Act, which enters full enforcement in August 2026, encodes that argument in legislation. Prohibited uses, including real-time biometric surveillance in public spaces and social scoring, are not left to corporate discretion. They are banned.
What the Anthropic saga demonstrates is what happens in a jurisdiction where that argument has been rejected. In the United States, the Biden administration’s AI safety executive order was revoked on Trump’s first day. State-level AI legislation has been actively suppressed. And when a company tried to embed the principles of the EU AI Act into its own contractual terms, a government that had previously praised its technology as “exquisite” reached for a statute designed to neutralise foreign saboteurs.
The EU’s “Digital Omnibus” package, currently under negotiation, proposes to delay and weaken parts of both the AI Act and GDPR in the name of cutting red tape and boosting competitiveness. It is being driven, at least in part, by the argument that European regulation puts the continent at a disadvantage against less constrained American and Chinese competitors.
The Anthropic case offers a corrective to that framing. The US has not demonstrated a competitive advantage won through deregulation. It has demonstrated what it looks like when a government uses procurement power to force the removal of safety limits that its own democratic principles would otherwise require.
That is not a model Europe should envy. It is a warning.
Federal agencies are, as of this week, quietly testing Anthropic’s Mythos model despite the ban. Congressional staff are seeking briefings on its capabilities. The Commerce Department’s Centre for AI Standards and Innovation is actively evaluating its cybersecurity potential. The prohibition is, in practice, already eroding, because the technology is too useful to ignore, even for the government that declared it a national security threat.
That, too, is instructive. The guardrails Anthropic refused to remove were not protections the US government ultimately wanted to do without. They were protections it wanted to claim without being contractually bound by them. The distinction matters. A safety principle written into a contract is enforceable. A safety principle stated in a press release is a communication strategy.
In Brussels, as in Washington, the question is not whether AI will be governed. It is whether the governance will be written into law before or after the most consequential decisions have already been made.
The deadline for the AI Act’s full provisions is August. The deadline Hegseth set for Anthropic was 5:01pm on a Friday. Both, in their own way, are a reckoning. And this saga, I suspect, is far from over.