Anthropic’s Standoff With the Pentagon Is a Test of U.S. Credibility

Anthropic’s public standoff with the Pentagon over AI safety terms ended with the company being designated a national security supply chain risk, a legally dubious power play. As Chinese AI rivals surge, Congress remains largely absent, and industry leaders stay silent, America’s credibility and competitive edge hang in the balance.

Defense Secretary Pete Hegseth speaks next to President Donald Trump in the Oval Office at the White House in Washington, DC, on Friday, March 21. Carlos Barria/Reuters


On Feb. 27, mere hours before the U.S. and Israel began bombing Iran with the help of tools made by leading AI company Anthropic, the company’s relationship with the U.S. government went up in flames. Amid a contentious and very public contract dispute over how Anthropic’s models could be used by the U.S. military, Defense Secretary Pete Hegseth declared Anthropic a supply chain risk in a statement so broad that it can only be seen as a power play aimed at destroying the company.

Shortly thereafter, OpenAI, one of Anthropic’s main rivals, announced it had reached its own deal with the Pentagon, claiming it had secured all the safety terms that Anthropic sought, plus additional guardrails. Yet buried in an OpenAI FAQ released the next day was a seemingly banal but telling acknowledgement. In response to a question asking what would happen if the government violated the terms of the contract, the company wrote, “As with any contract, we could terminate it if the counterparty violates the terms. We don’t expect that to happen.”

To which I can only respond: “Wait… Huh?”

The Enforcement Paradox

OpenAI has heralded three red lines it negotiated: its technology will not be used for mass domestic surveillance, for directing autonomous weapons systems, or for high-stakes automated decisions. Anthropic’s negotiations had also centered on prohibiting the use of its tools for mass surveillance of Americans and “fully autonomous weapons.”

OpenAI also highlighted structural limitations it had negotiated to guard against such uses, which are not insignificant. But critically, only one enforcement mechanism exists to force governmental compliance with the contract: OpenAI’s freedom to walk away if the government won’t respect the terms.

Of course, Anthropic also assumed it enjoyed that freedom and negotiated accordingly. Rejecting the government’s “best and final offer,” on Feb. 26, CEO Dario Amodei stated, “Our strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place. Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions.”

In response, the top Pentagon official for research and technology used his official social media account to call Anthropic’s CEO “a liar” with “a God-complex.” President Donald Trump then took to his own social media platform to announce a legally dubious government-wide ban on Anthropic’s tools. And shortly after the negotiation deadline passed, Hegseth announced on X that he would designate Anthropic a “supply chain risk to national security,” stating that “effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” (There was one important exception: his own department would have six months to manage the transition.)

In the days since, government agencies have announced that they will end their relationship with Anthropic, and private companies have begun moving away from Anthropic’s tools.

The Supply Chain Risk Designation

The Pentagon has yet to clarify what legal authority it is invoking to designate Anthropic a supply chain risk. The most obvious possibility is Title 10 Section 3252, a defense procurement authority that lets the Pentagon exclude a company from government contracts for specific national security systems with no notice to the target, no opportunity to respond and very limited court review. No company has ever been publicly designated a supply chain risk under it, and the statute can be invoked only when “less intrusive measures” are not reasonably available.

Another possibility is a 2018 law, the Federal Acquisition Supply Chain Security Act, but that statute would require the administration to work through an interagency council, give the target 30 days’ notice and an opportunity to respond, and submit to judicial review in the D.C. Circuit. Its only public use came in 2025 against Acronis AG, a Swiss cybersecurity firm with Russian ties, and that enforcement action was limited to intelligence community contracts.

Both of these statutes were designed to respond to foreign adversary threats the government had previously identified, such as the Chinese tech giants Huawei and ZTE, as well as the Russian cybersecurity firm Kaspersky. As many analysts have noted, neither statute authorizes Hegseth’s broad demand for third parties to cease commercial activity with Anthropic, which the company has announced it will challenge in court. Nonetheless, multiple defense contractors have already begun shifting to other AI providers—even as the Pentagon continues its war in Iran with the alleged national security threat operating across its classified systems.

Meanwhile, in Beijing

While Washington has been attacking an American frontier AI leader, Chinese labs have been on a tear. In the past month alone, five major Chinese models have dropped: Alibaba’s Qwen 3.5, Zhipu AI’s GLM-5 (an advanced model trained entirely on Huawei chips, none from Nvidia), MiniMax’s M2.5, ByteDance’s Doubao 2.0 and Moonshot’s Kimi K2.5.

All of these offer many of the same capabilities as Anthropic’s Claude or OpenAI’s GPT at a fraction of the price. Most have been released in forms that are easy to adapt and broadly accessible worldwide. DeepSeek—the company whose R1 model triggered a trillion-dollar market sell-off in January 2025—is expected to release its next major update soon. On Hugging Face, the leading AI model repository, Chinese open-weight models have surpassed U.S. models in total downloads. Alibaba’s Qwen family now accounts for over 40 percent of all new model derivatives.

Here lies the perverse irony: As Dean Ball has noted, no Chinese AI firm has been designated a supply chain risk by the U.S. government. Only Anthropic, arguably America’s most safety-conscious frontier company, enjoys that distinction. For American defense contractors, the regulatory risk of using made-in-America AI has just risen relative to the risk of using Chinese open-weight models.

A Golden Opportunity, Squandered

The terms under which American AI tools are deployed in warfare are being set by a few men in a room, with no democratic input and no durability beyond the next change in political winds. Where is Congress? And where is the tech sector?

As of Feb. 27, U.S. lawmakers’ contribution was seemingly limited to a letter asking everyone to play nicely. Four senators—the chair and ranking member of the Armed Services Committee and the top defense appropriators from both parties—wrote a genuinely thoughtful appeal urging Anthropic and the Pentagon to extend their negotiations, delivering it as negotiations were set to conclude. But the Senate has the power to do more than send a note. It can hold hearings, demand information and even craft legislation.

A company’s terms of service cannot and should not map the legal boundaries for how AI can be used in war, or in the governmental surveillance of U.S. citizens. That is a job for Congress. In the short term, lawmakers can immediately clarify that authorities they established to protect America from foreign adversaries may not be weaponized for political retribution against American companies. They can also establish stronger mechanisms for external oversight, overhaul procurement authorities and launch immediate investigations into how the Pentagon is using AI tools for warfare to ensure their use complies with existing legal mandates.

Until such steps are taken, every AI safety commitment in every military contract is only as trustworthy as the government’s commitment to respect the limitations of the contract and the rights of the contractor. The Pentagon has demonstrated how capricious that commitment may be.

Defense contractors must pay heed. More than a week before the Feb. 27 deadline for negotiations between Anthropic and the Pentagon, a senior defense official told Axios that it was “an enormous pain in the ass to disentangle [Anthropic’s tools], and we are going to make sure they pay a price for forcing our hand like this.” That is not the language of a national security finding or legitimate executive authority; it is the language of personal acrimony and political retaliation. 

Yet the companies being conscripted into that retaliatory effort—Boeing, Lockheed, Palantir, Amazon, Google, major investors and the rest of the defense industrial base—have said almost nothing publicly. These companies live inside supply chain compliance regimes; they know the difference between a genuine security finding and an abuse of executive power. Their silence is not neutrality—it is acquiescence to a precedent that will eventually be used against them.

Sam Altman, OpenAI’s CEO, doesn’t expect the government to violate OpenAI’s terms.

Dario Amodei didn’t expect the government to label Anthropic a national security threat for standing firm on contractual terms. 

The gap between those expectations and reality represents an existential risk to America’s international credibility, and that gap will remain until democratic authority is exerted to close it.

In the meantime, for the sake of America’s public servants, national security, AI leadership and global reputation, it is beyond time for industry leaders to publicly demand adherence to the rule of law. Backroom deals will not restore public faith in the strength and independence of the private sector, at home or abroad. For a nation striving to lead the world in a private sector-led technological transformation, trust in that sector’s freedom from government influence is critical to long-term growth. Cutting-edge models and chips can only differentiate America for so long, as China is proving on a monthly basis.

Alternatively, the U.S. can keep frittering away its trust premium and see how that plays out. But when America ends up losing the competitive edge it has earned through decades of protecting due process and private sector independence, no one will have the right to say, “We didn’t expect that to happen.”