Google has signed a classified AI deal with the US Department of Defense allowing the Pentagon to use Google’s AI models for “any lawful government purpose,” The Information reported on Tuesday, citing a person familiar with the matter.
The agreement allows the DoD to use Google’s AI models without the restrictions that led Anthropic to be blacklisted in February. Google becomes the latest in a line of AI companies, alongside OpenAI and xAI, to supply classified AI capability to the US military.
The deal was reported hours after more than 560 Google employees published an open letter to CEO Sundar Pichai on Monday, urging him to refuse exactly this kind of classified military AI arrangement. Google had not publicly confirmed or commented on the agreement at the time of this article’s publication.
The agreement, as characterised by The Information, is structured without the ethical restrictions that Anthropic included in its Pentagon contract, the same restrictions that led to Anthropic being designated a national security supply chain risk and blacklisted by the Trump administration in February 2026.
Where Anthropic refused to remove contractual prohibitions on mass domestic surveillance and fully autonomous weapons without human oversight, Google’s deal is described as permitting “any lawful government purpose” without such carve-outs.
That framing aligns Google’s deal with the unrestricted model preferred by the Trump administration, rather than the amended model that OpenAI negotiated, which included red lines on domestic surveillance while remaining within the Pentagon’s contracting framework.
The Pentagon has now signed classified AI deals with four of the largest AI companies in the United States: OpenAI, xAI, Google, and, until its blacklisting, Anthropic. The sequencing is notable.
Anthropic was removed from the supplier pool for maintaining ethical restrictions; OpenAI renegotiated to stay in while preserving some restrictions; xAI signed without apparent restrictions; and now Google has signed on terms that appear to give the Pentagon the broadest discretion of all.
The result is a classified AI vendor pool from which Anthropic is excluded, and in which the three remaining suppliers have varying but significant latitude to provide AI capability for military applications.
The timing relative to Monday’s employee letter is the most striking element. The more than 560 employees who signed the letter to Pichai on Monday morning were, by Tuesday morning, working for a company that had agreed to the very deal they were asking Pichai to refuse.
That creates a direct and uncomfortable contrast that Pichai will be asked to address in town halls, in press conferences, and in the Musk v. Altman trial courtroom if the question of Google’s AI ethics posture becomes relevant to testimony.
Google has never confirmed the specific terms of its Pentagon AI engagements, and the “any lawful government purpose” framing comes from a single anonymous source reported by The Information.
The employee letter and the Pentagon deal together define the fault line that every major AI company is now navigating. On one side: the US government’s demand for unrestricted AI capability for classified military use.
On the other: the published AI ethics principles that companies adopted, partly in response to the 2018 Project Maven controversy, which commit them to avoiding AI weapons that lack human oversight. Anthropic chose its principles and was blacklisted.
OpenAI and Google appear to have chosen the contracts. Whether that choice is temporary, commercially reversible, or permanent will depend on how the political environment evolves, and on whether the 560 signatories of Monday’s letter, and those who might join them, can change the calculus internally.