U.S. Government Bans Use of Anthropic Products: What This Means for Government Contractors and AI Strategy
On Feb. 27, President Donald Trump ordered all federal agencies to “immediately cease all use” of Anthropic’s technology, with a six‑month phase‑out period for existing deployments. Defense Secretary Pete Hegseth simultaneously announced that the Department of Defense (DoD) will designate Anthropic as a “supply‑chain risk to national security,” a label historically used in high‑stakes national security contexts, including prior actions against foreign technology firms. Under Hegseth’s directive, “effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”
Why Anthropic?
The dispute stems from Anthropic’s refusal to remove contractual “red lines” that would bar the Pentagon from using its Claude models for mass domestic surveillance of Americans and fully autonomous weapons systems that can fire without human involvement. The Pentagon has asserted that once it acquires a technology, it must be free to use the tool for any lawful purpose under its own policies, and that it cannot accept private vendors dictating operational use cases on a mission‑by‑mission basis. Anthropic has announced plans to challenge the “supply‑chain risk” designation in court, arguing that the move is legally unsound and sets a dangerous precedent for American companies negotiating safeguards with the government.
What “Supply-Chain Risk” Means
Being labeled a supply‑chain risk does more than terminate Anthropic’s up‑to‑$200 million Pentagon contract; it also forces entities seeking to do business with the DoD to sever commercial ties with Anthropic. The Pentagon is expected to cancel its Anthropic contract and require defense contractors to certify that they do not use Claude in their workflows, with a six‑month wind‑down window for disentanglement. While the administration has not yet publicly identified the specific statutory authority it is invoking, the “supply‑chain risk” label is analogous to prior federal actions against technology suppliers viewed as posing national security concerns.
Immediate Impact on Contractors
For federal prime contractors and subcontractors, particularly in the defense and national security space, the practical impact is immediate:
- Contractors performing DoD work will be barred from using Anthropic’s Claude models in any capacity tied to DoD contracts once the wind‑down period expires and may be asked to certify non‑use as part of supply‑chain and cybersecurity representations.
- Many agencies beyond the DoD — including the Department of Health and Human Services, NASA’s Jet Propulsion Laboratory, and national laboratories — have already piloted or integrated Claude under the General Services Administration’s (GSA) “OneGov” agreement, which the GSA has now pulled and terminated.
- The GSA has removed Anthropic from USAI.gov and the Multiple Award Schedule, signaling that civilian agencies will follow the Pentagon’s lead and unwind Anthropic‑based solutions.
Anthropic has responded that, as a matter of federal law, a DoD “supply‑chain risk” designation should only reach Claude’s use in DoD contracts and not dictate how contractors use Claude for purely commercial work or its use by non‑DoD customers. How agencies and contracting officers interpret that position in practice remains unclear.
How the Ban Impacts AI Strategy
For organizations that have adopted Claude for proposal drafting, code generation, document review, or analytics in connection with federal work, this is not a simple vendor change. Existing contracts may contain cybersecurity, supply-chain, or national security clauses (including AI-related provisions) that now effectively prohibit the use of Anthropic tools in performance. Use of Claude in preparing proposals, deliverables, or software builds for DoD contracts may need to be disclosed or unwound to avoid inaccurate certifications or misstatements, especially where supply-chain attestations are required. In addition, agencies that have already embedded Claude into workflows and contractors supporting those efforts will need to refactor internal tooling, pipelines, and prompts to alternative models.
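For teams facing that refactoring work, one way to reduce switching costs is to route all model calls through a thin, provider-agnostic internal layer rather than calling any single vendor's SDK directly. The following is a minimal sketch, not a definitive implementation; the provider names (`vendor_a`, `vendor_b`) and the adapter interface are illustrative assumptions, and the adapters are stubs standing in for real SDK calls:

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical internal request type: all application code targets this
# shape instead of any vendor-specific SDK object.
@dataclass
class ChatRequest:
    prompt: str
    max_tokens: int = 512

# Registry of provider adapters. Each adapter maps the internal request
# to a vendor-specific call (stubbed here with plain functions).
_PROVIDERS: Dict[str, Callable[[ChatRequest], str]] = {}

def register_provider(name: str, adapter: Callable[[ChatRequest], str]) -> None:
    """Register (or replace) the adapter for a named provider."""
    _PROVIDERS[name] = adapter

def complete(provider: str, request: ChatRequest) -> str:
    """Dispatch a request to the named provider; unknown or
    disallowed providers fail loudly rather than silently."""
    if provider not in _PROVIDERS:
        raise ValueError(f"Unknown or disallowed provider: {provider}")
    return _PROVIDERS[provider](request)

# Stub adapters standing in for real vendor SDK calls.
register_provider("vendor_a", lambda r: f"[vendor_a] {r.prompt[:20]}")
register_provider("vendor_b", lambda r: f"[vendor_b] {r.prompt[:20]}")
```

With this kind of seam in place, removing a restricted provider becomes a registry change plus an adapter deletion, rather than a rewrite of every pipeline and prompt that touched the vendor's SDK.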
In the meantime, rival AI providers are moving to fill the gap. OpenAI announced a deal with the Department of Defense covering classified networks within hours of the ban, and other providers are also positioning their models for defense and classified use. Given the six-month wind-down, we expect the U.S. government to continue exploring opportunities with other AI providers, particularly in light of DoD's stated goal of incorporating AI into its systems.
What Should Government Contractors Do Now?
Federal prime contractors and subcontractors should move quickly to assess legal, contractual, and technical exposure from their use of Anthropic platforms and products.
- Inventory and Map Anthropic Footprint: Inventory where Anthropic’s Claude (or tools built on top of Claude) is used across the enterprise, with a specific focus on performance of federal contracts, proposal and capture activities, and internal tools that process federal data. From there, identify whether Claude is embedded in third-party platforms (such as SaaS products, developer tools, or “AI copilots”) used in connection with government projects, not just direct Claude API calls.
- Review Contracts and Flow-Downs: Analyze prime contracts, subcontracts, task orders, and BPAs to locate any AI-specific cybersecurity, supply-chain, or national security clauses that could be triggered by continued use of Anthropic. Pay particular attention to certifications stating no “covered” or “restricted” suppliers are used in providing the solution; these may be updated by DoD or GSA guidance to expressly reference Anthropic.
- Plan and Execute an Orderly Transition: Develop and document a transition plan to alternative models for any federal-facing use of Claude, targeting completion before the expiration of the six-month wind-down period referenced by DoD. Validate that replacement models meet specific internal AI governance requirements, including any restrictions on surveillance use cases and weapons-related applications, even where government counterparts may not insist on identical guardrails.
- Adjust AI Governance and Procurement Playbook: Update internal AI policies to address “designated supply‑chain risk” vendors explicitly, ensuring that legal and security stakeholders have authority over the use of restricted AI providers in any government‑related context.
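The inventory step above can be partially automated for code and configuration repositories. The sketch below is illustrative only: the file extensions and search patterns are assumptions, and a grep-style scan cannot detect Claude embedded inside third-party SaaS products, which still requires vendor questionnaires and contract review.

```python
import os
import re
from typing import List, Tuple

# Illustrative patterns: direct SDK imports, the Anthropic API hostname,
# and common Claude model-name prefixes. Extend as needed.
PATTERNS = [
    re.compile(r"\bimport\s+anthropic\b"),
    re.compile(r"api\.anthropic\.com"),
    re.compile(r"\bclaude-[\w.]+"),
]

def scan_file(path: str, text: str) -> List[Tuple[str, int, str]]:
    """Return (path, line number, line) for each line matching a pattern."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(pat.search(line) for pat in PATTERNS):
            hits.append((path, lineno, line.strip()))
    return hits

def scan_tree(root: str,
              exts=(".py", ".js", ".ts", ".yaml", ".yml", ".json")
              ) -> List[Tuple[str, int, str]]:
    """Walk a repository and collect every match across source files."""
    results = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(exts):
                path = os.path.join(dirpath, name)
                try:
                    with open(path, encoding="utf-8", errors="ignore") as f:
                        results.extend(scan_file(path, f.read()))
                except OSError:
                    continue  # skip unreadable files
    return results
```

A scan like this gives compliance and engineering teams a shared starting list of touchpoints to map against specific contracts and certifications.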
Looking Ahead
This is an early, and unusually public, test of how far the federal government will go to define the boundaries of acceptable use for advanced AI systems in national security contexts. It also signals that “supply‑chain risk” designations, once largely confined to hardware and telecommunications suppliers and to companies owned by U.S. adversaries, are moving squarely into the cloud and AI stack, with direct consequences for how contractors choose and govern their tooling.
For government contractors that use Claude or other Anthropic services, or are unsure whether Anthropic is embedded in their vendors’ products, Taft’s Privacy, Security, & AI and Government Contracts teams can assist with assessing exposure, coordinating a compliant transition strategy, and aligning AI governance frameworks with this rapidly evolving landscape.