OpenAI, xAI & Google Are Racing Into the Pentagon’s War Machine — Here Is Exactly What That Means

Category: AI Ethics | Military Tech | Pentagon Contracts
Date: April 2, 2026 | 9 min read


When Anthropic refused to hand the Pentagon unlimited access to Claude and was designated a “supply chain risk” in late February 2026, it created a vacuum. That vacuum was filled within hours. By the time US bombs were falling on Tehran, OpenAI, Elon Musk’s xAI, and Google were all accelerating negotiations with the Department of Defense to replace Anthropic inside the military’s classified AI infrastructure. What has followed is the fastest, least scrutinized militarization of commercial AI in history — and it is happening in real time, against the backdrop of an active war.

Understanding which company got in, what they agreed to, and what they refused to agree to is essential for anyone trying to understand where AI warfare goes next — and why Iran named all of them on its target list.

The Race to Replace Claude

The Pentagon’s under secretary of defense for research and engineering, Emil Michael, said he wanted redundancy: “I’m not biased. I just want all of them. I want to give them all the same exact terms because I need redundancy.” He acknowledged that Anthropic had become “deeply embedded” in the department by providing forward-deployed engineers, while other AI companies had not pursued enterprise customers as aggressively. (Fortune)

Defense Secretary Pete Hegseth granted the department six months to phase out Claude, during which the military would phase in OpenAI’s models as well as those from Elon Musk’s xAI. But Claude was reportedly used in the strikes on Iran hours after the ban was issued, suggesting that a phase-out will be anything but simple. (MIT Technology Review)


The OpenAI Deal: What It Allows — and What It Probably Doesn’t

OpenAI reached an agreement with the Pentagon to deploy its advanced AI systems in classified environments. OpenAI published three main red lines: no use for mass domestic surveillance, no use to direct autonomous weapons systems, and no use for high-stakes automated decisions. OpenAI stated: “Other AI labs have reduced or removed their safety guardrails and relied primarily on usage policies as their primary safeguards in national security deployments. We think our approach better protects against unacceptable use.” (OpenAI)

The problem is enforcement. A former Pentagon official who worked on military AI applications said the language gives the government “enough flexibility to still do whatever the fuck they want, more or less, and then say, whoops, sorry, didn’t mean to.” He added: “There is nothing OpenAI can do to clarify this except release the contract.” (The Intercept)

xAI’s Grok: In Classified Environments, With Reportedly No Guardrails

In January, the Pentagon announced that xAI’s Grok would be added to the GenAI.mil platform, despite incidents in which the model had spread antisemitic content and created nonconsensual deepfakes. Google Gemini was one of the first models available on the platform. The message from Hegseth was that AI is transforming every aspect of how the US fights, from targeting decisions down to paperwork. (MIT Technology Review)

Google employees circulating an open letter pointed to the Pentagon deploying Grok “in classified environments — as far as we know, without any guardrails.” The letter stated: “Our own companies are also on the brink of accepting similar contract terms. Google is in negotiations with the Pentagon to deploy Gemini, its own frontier model, for classified uses.” (CNBC)

The Training-on-Classified-Data Frontier

The most alarming development has received almost no mainstream coverage. The Pentagon is making plans for AI companies to train on classified data. AI models like Anthropic’s Claude are already used to answer questions in classified settings, with applications including analyzing targets in Iran. But allowing models to train on and learn from classified data would be a new development: sensitive intelligence such as surveillance reports or battlefield assessments could become embedded in the models themselves, bringing AI firms into closer contact with classified material than ever before. (MIT Technology Review)

The Pentagon has reached agreements with OpenAI and Elon Musk’s xAI to operate their models in classified settings and is implementing a new agenda to become an “AI-first warfighting force” as the conflict with Iran escalates. (MIT Technology Review)

The AI Company Pentagon Contract Comparison

| Company | Classified Access | Stated Red Lines | Guardrail Verification | Iran War Status |
|---|---|---|---|---|
| Anthropic | Yes (via Palantir) — now being phased out | No autonomous weapons, no mass surveillance | Contractually specified | Designated “supply chain risk,” suing Pentagon |
| OpenAI | Yes — new agreement Feb 28 | No autonomous weapons, no mass surveillance | Unverified — contract not public | Actively integrating |
| xAI (Grok) | Yes — classified environments | Reportedly none specified publicly | Unknown | Deployed on GenAI.mil |
| Google (Gemini) | Negotiating — not yet deployed | Not publicly stated | Alphabet has not commented | Under negotiation |
| Palantir | Deep classified integration | Operates within whatever AI it uses | Via partner agreements | Core targeting system remains active |

Why This Makes Every AI Company a Target

James Henderson, CEO of risk management firm Healix, said the rise in threats against tech companies is not a flash in the pan but a sustained pattern. “Tech assets are now treated as part of the conflict, not peripheral to it. It also signals that future crises may target data centres and cloud platforms as much as traditional strategic sites.” (CNBC)

Iran’s logic is brutally coherent. If OpenAI’s models are being used to rank strike targets in Iran, and those models run on Microsoft Azure, and Microsoft Azure has data centers in the UAE, then to Iran, Microsoft is a combatant. The target list is not a threat issued in anger. It is a strategic doctrine — and it applies to every AI company that signs a classified Pentagon agreement from this point forward.

Hundreds of tech workers signed an open letter urging the Department of Defense to withdraw its designation of Anthropic as a “supply chain risk.” Almost 900 employees at Google and OpenAI were circulating letters calling for stricter limits on how their employers work with the military. (CNBC)

The workers understand what the companies’ legal teams are being careful not to say publicly: the moment you sign a classified military AI contract, your offices in the Gulf are on Iran’s map.

Tags: OpenAI Pentagon Contract · xAI Grok Military · Google Gemini Defence · Anthropic Supply Chain Risk · AI Military Ethics 2026 · Classified AI Training Data · Pentagon AI Contracts Iran War · AI Targeting Warfare
