Google has quietly signed a classified agreement permitting the US Department of Defense to deploy its AI models for 'any lawful government purpose,' according to a report from The Information published on April 28, 2026. The arrangement extends an existing contract between Google and the Pentagon and pulls the search giant decisively into the same camp as OpenAI and xAI on military AI deployment.
What the Agreement Says
The classified deal reportedly allows the Department of Defense broad latitude to apply Google's frontier models — including Gemini-family systems — across defense work, intelligence tasks, and other government missions. Both parties have agreed that the systems should not be used for domestic mass surveillance or autonomous weaponry 'without suitable human supervision and control,' according to The Information's reporting.
Those guardrails, however, appear to be more aspirational than enforceable. The contract reportedly states that it does not grant Google 'any authority to oversee or deny lawful governmental operational choices,' meaning the safeguards function as informal commitments rather than contractual restrictions Google could invoke to halt a specific deployment.
Employee Backlash and a Reversal of Posture
The deal lands at an awkward moment internally. It surfaces shortly after Google employees publicly urged CEO Sundar Pichai to prevent the Pentagon from using the company's AI in what they called 'inhumane or extremely harmful ways.' That open letter echoed the 2018 Project Maven revolt that briefly pushed Google out of high-profile defense AI work — a stance the company has steadily walked back over the last two years.
In a statement, Google reiterated its position that AI should not be used for domestic mass surveillance or autonomous weapons without human oversight, and characterized API access to its commercial models as 'a responsible approach to supporting national security.'
The Wider Frontier-Lab Realignment
If confirmed, the deal cements a clear split among the top US AI labs. OpenAI and xAI have already locked in classified Pentagon arrangements. Google now joins them. Anthropic, by contrast, was blacklisted by the Pentagon earlier this year after declining Department of Defense requests to strip weapon- and surveillance-related safeguards from Claude. That dispute spilled into federal court and triggered amicus briefs from former judges challenging the legality of the blacklist.
Implications
For government buyers, the practical effect is that three of the four leading US frontier labs are now formally available for classified defense workloads, with Anthropic remaining the conscientious objector. For Google, the contract resolves years of strategic ambiguity about how far the company would go on military AI — at the cost of reopening the internal fight that Project Maven was supposed to have settled. Expect renewed scrutiny from employees, civil liberties groups, and lawmakers over how loosely a phrase like 'any lawful government purpose' can be interpreted in practice.