On May 1, 2026, the U.S. Department of Defense announced agreements with eight major AI companies to deploy frontier models on its most sensitive classified networks. The companies — Amazon Web Services, Google, Microsoft, Nvidia, OpenAI, SpaceX, Oracle, and a stealth startup called Reflection AI — are now cleared to integrate capabilities into the Pentagon's Impact Level 6 (IL6) and Impact Level 7 (IL7) environments: the networks where the American military runs its actual war-fighting decision support. One prominent name is absent. Anthropic, the maker of Claude, remains officially designated a "supply chain risk" by the Department of Defense after refusing to accept unrestricted use terms for its models. The exclusion has escalated to federal court, triggered a contractor-wide blacklist, and forced companies like Palantir to plan the removal of Claude from Pentagon platforms. This guide covers the deal's structure, what IL6 and IL7 actually are, why Anthropic is out, who Reflection AI is, and what the agreement signals for the future of enterprise AI procurement.
The Announcement: What the Pentagon Actually Said
The official press release described the objective in deliberately broad terms: integrating "secure frontier AI capabilities into the Department's Impact Level 6 (IL6) and Impact Level 7 (IL7) network environments to streamline data synthesis, elevate situational understanding, and augment warfighter decision-making in complex operational environments." The announcement disclosed no contract values, no deployment timelines, and no specifics on which models will run on which networks first.
What the DoD emphasized was architecture. The stated goal is to "build an architecture that prevents AI vendor lock and ensures long-term flexibility for the Joint Force." Rather than the single-vendor approach that created strategic vulnerabilities in earlier cloud-computing contracts like JEDI, the Pentagon is explicitly hedging across eight providers simultaneously. If one company's model underperforms on a mission-critical task, alternatives are already cleared and ready to substitute.
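To make that hedging concrete, here is a minimal sketch of what provider-agnostic routing with failover can look like, assuming a common request interface across vendors. Every name in it (Provider, route, the stub vendors) is a hypothetical illustration, not any actual DoD or vendor API.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Provider:
    """One cleared model endpoint behind a common prompt -> completion interface."""
    name: str
    invoke: Callable[[str], str]
    cleared: bool  # accredited for the target network (e.g., IL6)


def route(prompt: str, providers: list[Provider]) -> str:
    """Try each cleared provider in priority order; fail over on any error."""
    for p in (p for p in providers if p.cleared):
        try:
            return p.invoke(prompt)
        except Exception as err:  # production code would log and classify failures
            print(f"{p.name} failed ({err}); failing over")
    raise RuntimeError("no cleared provider available")


def flaky(prompt: str) -> str:
    raise TimeoutError("simulated outage")


# Stubs stand in for real cleared endpoints.
providers = [
    Provider("vendor-a", flaky, cleared=True),
    Provider("vendor-b", lambda p: f"[vendor-b] {p}", cleared=True),
]
print(route("Summarize today's logistics reports.", providers))
```

The point of the pattern is that calling code never names a specific vendor, so re-ranking or replacing a provider becomes a configuration change rather than a rewrite. That, at the code level, is what "prevents AI vendor lock" looks like.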
IL6 and IL7: What Classified Networks Actually Mean
Most federal AI deployments operate on Impact Level 2 (IL2) or Impact Level 4 (IL4) networks — environments for sensitive-but-unclassified and government-protected data. IL6 and IL7 are categorically different.
IL6 supports storage and processing of information classified up to the Secret level. This is the network where military operations are planned, where intelligence assessments about adversary capabilities live, and where battlefield logistics are coordinated. Unauthorized disclosure of IL6 information is legally defined as capable of causing "serious damage to national security." IL7 goes further, handling compartmented information at the Top Secret/SCI level: signals intelligence, satellite imagery analysis, weapons systems specifications, and codeword-classified programs that relatively few personnel access even within the defense community.
Deploying AI on IL6/IL7 is not comparable to deploying on Azure or standard AWS GovCloud. It requires specialized air-gapped or purpose-built classified cloud infrastructure, security clearances for developers and support staff, and compliance with DoD directives governing everything from model-weight storage to inference-request logging. All eight companies in the deal have existing cleared infrastructure — either through their own classified cloud products or through existing DoD relationships — which makes Reflection AI's inclusion as a two-year-old startup the most surprising detail in the announcement.
The Eight Companies: What Each Brings
Amazon Web Services operates AWS Secret Region, the infrastructure backbone for a significant portion of the U.S. intelligence community's existing cloud workloads. AWS brings Amazon Bedrock and its newly announced Bedrock Managed Agents to the deal; a limited preview of those agents powered by OpenAI models was announced in the same week as the Pentagon agreement.
Google contributes Gemini's multimodal capabilities through its cleared government cloud offerings. Google's Project Mariner — a web-browsing agent that scores 83.5% on the WebVoyager benchmark for autonomous task completion and handles ten concurrent tasks on cloud-based virtual machines — is precisely the kind of agentic capability the DoD cited for warfighter decision support: navigating complex information systems, synthesizing data across disparate sources, and completing multi-step analytical tasks without constant human supervision.
Microsoft has the deepest existing defense relationship, having won both the JEDI and subsequent JWCC cloud contracts. Microsoft 365 Copilot and Azure OpenAI Service already operate across DoD on lower classification levels. This deal extends that footprint into IL6/IL7 with the combined weight of the Microsoft-OpenAI partnership.
Nvidia is primarily an infrastructure and inference play. Nvidia's cleared data centers running H100 and B200 accelerators provide the compute substrate on which other cleared AI models run. Nvidia's inclusion also ensures the hardware supply chain for classified inference is not dependent on foreign-manufactured alternatives, a strategic concern that has grown in direct proportion to U.S.-China technology competition.
OpenAI brings GPT-5.5 and its Codex model family. GPT-5.5 — designed for "professional work" and "turning messy, multi-step requests into finished work" — is the type of capability intelligence analysts and operational planners need for synthesizing large volumes of intelligence reports, operational logs, and imagery metadata into actionable assessments quickly.
SpaceX is the most unconventional member of the eight. Its role appears tied to Starshield, the satellite communications network built specifically for national security customers. AI-enabled processing of satellite communications data — autonomous triage of signals, pattern detection across telemetry, anomaly flagging at machine speed — is the operational intersection of Starshield's infrastructure and the DoD's AI ambitions.
Oracle was added hours after the initial seven-company announcement. Oracle National Security Regions already host classified DoD and intelligence community workloads. Oracle's addition formalizes what was already an operating relationship and closes an obvious gap in the original announcement.
Reflection AI is the most significant surprise. Founded in 2024, Reflection AI has released no publicly available model. The company raised $2 billion in October 2024, with Nvidia as a lead backer. Its inclusion ahead of companies with years of commercially deployed models signals that willingness to accept unrestricted lawful use terms — combined with early investment in cleared infrastructure — was sufficient to earn a seat alongside the hyperscalers on the most sensitive networks the U.S. government operates.
The Anthropic Exclusion: A Months-Long Conflict
The Anthropic situation did not begin on May 1. The conflict traces to at least February 2026, when reports first emerged that Pentagon officials had threatened to terminate Anthropic's government contracts over AI safeguards. The core disagreement was specific: the DoD wanted Anthropic to agree that Claude could be deployed for "all lawful purposes," a term Anthropic interpreted as including fully autonomous weapons systems and domestic mass surveillance of American citizens. Anthropic's executives refused, arguing these use cases crossed ethical and safety lines the company would not accept.
The DoD's response was historically severe. In early March 2026, the department officially designated Anthropic a "supply chain risk" — a classification previously reserved for companies with ties to foreign adversaries such as Huawei. Defense Secretary Pete Hegseth declared that any contractor or supplier doing business with the U.S. military would be barred from commercial activity with Anthropic. Palantir Technologies, which had integrated Claude into government platforms, found itself at the center of the conflict. CEO Alex Karp confirmed that the DoD was "planning to phase out Anthropic" and that Palantir would eventually substitute other models in its government products.
Anthropic filed suit and won a preliminary injunction from a federal judge in San Francisco, who cited potential "First Amendment retaliation" in the DoD's actions. A federal appeals court in Washington, D.C., however, denied Anthropic's separate request to temporarily block the blacklisting while the case proceeds. As of May 5, 2026, Anthropic remains officially designated a supply chain risk. One opening exists: President Trump stated in late April that Anthropic is "shaping up" and that a defense deal is "possible," suggesting the administration's position has some flexibility.
What the Deal Signals for Enterprise AI Procurement
The Pentagon agreement establishes the clearest articulation yet of what U.S. national security leadership expects from frontier AI providers seeking government market access: unconditional agreement to all lawful uses, cleared infrastructure, and willingness to operate without model-level use restrictions baked into vendor contracts.
For enterprise AI buyers across sectors, the deal surfaces a procurement consideration that most organizations have not yet examined seriously: what are your AI vendors' acceptable use policies, and do they create operational risk in your regulatory environment? The Anthropic-DoD conflict is the most public version of a tension that exists across enterprise AI deployment — between the guardrails AI companies build into their models for safety and liability reasons and the unrestricted access enterprise customers often require for lawful operational use.
The multi-vendor architecture the Pentagon is building also signals how large institutional buyers will structure AI procurement going forward. The DoD's explicit goal of preventing vendor lock is an early expression of what enterprise procurement teams in banking, healthcare, and critical infrastructure will increasingly demand: the ability to route workloads across competing providers based on performance, cost, and risk tolerance — not a single long-term contract with a preferred vendor.
Reflection AI's inclusion carries a specific message for investors and founders in defense AI. Cleared infrastructure access and acceptance of government-compatible use terms can matter more than a track record of publicly deployed models. A startup that invests in the necessary clearances and agrees to government terms can compete directly with hyperscalers for the most sensitive workloads, bypassing the years of commercial market validation that enterprise software typically requires before government adoption.
The Underlying Tension: AI Safety vs. Operational Necessity
Anthropic's position — that Claude should not enable fully autonomous weapons or mass domestic surveillance without meaningful human oversight — is not a fringe view. The IEEE, the International Committee of the Red Cross, and multiple national governments have called for maintaining human control over autonomous weapons systems. The EU AI Act requires human oversight for high-risk AI applications. The UN Secretary-General has called for a legally binding instrument on lethal autonomous weapons. In refusing unconditional use terms, Anthropic aligned itself with mainstream international norms rather than staking out an outlier position.
The DoD's position is also coherent within its strategic frame. A military that imposes use constraints on its own AI systems while an adversary's equivalent systems operate without comparable limits accepts a structural operational disadvantage in a crisis. The demand for unrestricted lawful use reflects strategic realism and the competitive dynamics of great-power AI competition as much as any political preference.
The collision between these two coherent positions — AI safety and human oversight on one side, unrestricted lawful operational use on the other — will not resolve itself quickly. It will play out in court, in procurement negotiations, and in the competitive dynamics between AI companies that accept government terms and those that do not. The outcome will shape not just which models run on Pentagon networks, but which AI companies can build commercially sustainable businesses serving governments that operate under different legal and strategic assumptions than the ones those companies were built around.
The May 1 announcement is the current score: eight companies cleared, one officially out, courts still deciding whether that exclusion was lawful. The larger contest over what AI ethics means in a contested security environment is only beginning.
Written by
Anup Karanjkar
Expert contributor at WOWHOW. Writing about AI, development, automation, and building products that ship.