BLUF
On February 27, Secretary of War Pete Hegseth made an unprecedented announcement, listing American AI company Anthropic as a supply chain risk and effectively barring it from all current and future federal contracts. The designation came after the company refused to budge on its red lines preventing the use of Claude for mass surveillance and fully autonomous weapons systems. Anthropic sued the Department of War on the grounds that the government violated its rights to free speech and due process, as well as on the basis that the Department of War’s response was an arbitrary overreach of executive authority. Anthropic has a strong case to vacate the supply chain risk designation, but winning it may not prevent the Department of War from looking elsewhere for AI integration into the military.
Anthropic is a San Francisco-based AI company behind the large language model Claude. The Department of War (DoW) has used Claude for intelligence analysis, battlefield simulations, and target selection support, including in the ongoing joint U.S.-Israel military operations in Iran. Claude has also been integrated into the classified intelligence space, where it is used for automated vulnerability patching, operational planning, and synthesizing vast amounts of complex logistics data. In recent weeks, the company has been on one side of a showdown with the DoW and the Trump administration at large.
What happened between the War Department and Anthropic?
On January 12, 2026, the DoW announced that it would be prioritizing AI to accelerate its warfighting capabilities. In the press release, Secretary of War Pete Hegseth stated,
“We will unleash experimentation, eliminate bureaucratic barriers, focus our investments, and demonstrate the execution approach needed to ensure we lead in military AI. We will become an ‘AI-first’ warfighting force across all domains.”
This new directive caused friction between the Pentagon and one of its main AI contractors. The row between the DoW and Anthropic hinges on two of the company’s boundaries for the use of Claude, restrictions that the Pentagon wanted removed. First, Anthropic restricts the use of Claude for AI-driven mass surveillance out of concern that its technology could contribute to the erosion of democratic values, privacy, and personal liberties. Second, it precludes the integration of Claude into fully autonomous weapons, on the grounds that even frontier AI systems are too immature and inconsistent to possess the judgment necessary for autonomous target selection and engagement, as defined by the Pentagon.
On February 24, the negotiations reached an impasse when the Pentagon issued an ultimatum: Anthropic had to agree to lift these restrictions by February 27, or the government would invoke the Defense Production Act to force Anthropic to give the Pentagon access to its models while also listing the company as a supply-chain risk. When the deadline hit, no agreement had been reached. Shortly after, while the Defense Production Act was not invoked, Secretary Hegseth designated Anthropic as a supply chain risk and announced the decision on X.
Being designated as a supply chain risk to national security effectively functions as a domestic blacklist. Under 10 U.S.C. § 3252, this label grants the Secretary of War the authority to exclude a company from any contract or subcontract involving the military’s most sensitive information technology, including intelligence and weapons systems. For Anthropic, it triggers a secondary boycott, meaning major defense contractors like Boeing and Lockheed Martin are warned that any commercial activity with Anthropic—even outside of government work—could jeopardize their own standing with the Pentagon.
On March 9, Anthropic filed a lawsuit against the DoW and affiliated government parties in the Northern District of California.
The merits of the lawsuit
*Read the full lawsuit here.
What Anthropic argues
The DoW’s actions violate the First Amendment’s protection against retaliation for protected speech. The First Amendment protects Anthropic’s speech, viewpoints, and petitioning of the government. It also protects Anthropic from retaliatory actions by the government in response to protected speech (Gibson v. United States, 781 F.2d 1334, 1338 (9th Cir. 1986)).
Anthropic argues that all the conditions for a retaliation claim have been met. First, Anthropic has been clear about its commitment to the safe deployment of its AI, both publicly through its website and commentary, as well as in private negotiations with the government about its red lines. This constitutes a protected form of speech.
Second, the DoW’s designation of Anthropic as a supply chain risk creates a chilling effect on the company, as this label implies that the company is a “sabotage” or “adversarial” threat to the U.S. (10 U.S.C. § 3252(d)(4)). This stain will follow Anthropic into future procurements or contracts, and would be disastrous for its business.
Third, there is a causal link between Anthropic’s expression of speech and the government’s subsequent action—it was clearly the motivating factor. At the onset of the spat, the DoW considered invoking the Defense Production Act to compel access to the very technology that it now lists as a risk to national security. Given the government’s extensive use of Claude, Anthropic’s refusal to lift its restrictions is the only point of contention upon which the supply chain risk designation could have rested.
The DoW acted arbitrarily and capriciously in violation of the Administrative Procedure Act and overstepped its authority in violation of Article II executive powers (ultra vires). The Administrative Procedure Act states that courts must vacate final agency actions that are “arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law,” that exceed statutory discretion, or that fail to adhere to lawful procedure. Secretary Hegseth’s directive was a final agency action because it was an order from the Secretary of War directing the DoW to contract with Anthropic for no longer than six months after its issuance.
In addition, a final agency action is arbitrary and capricious if it “entirely fail[s] to consider an important aspect of the problem,” offers “an explanation for its decision that runs counter to the evidence before the agency,” or fails to “articulate a satisfactory explanation for its action including a rational connection between the facts found and the choice made” (Motor Vehicle Mfrs. Ass’n v. State Farm Mut. Auto. Ins. Co., 463 U.S. 29 (1983)). Anthropic argues that this standard has been met because the directive both exceeds its congressional authorization and is incorrectly applied to the company.
First, Anthropic argues that the Trump administration has neither statutory nor inherent authority to impose this order. Neither Title 10 of the U.S. Code, which governs the armed forces, nor Title 41, which governs public contracts, grants explicit authority. Because executive actions of this kind must stem from an act of Congress or from the Constitution itself (Youngstown Sheet & Tube Co. v. Sawyer, 343 U.S. 579 (1952)), the DoW must be invoking some form of constitutional authority. Yet given the recent Supreme Court decision striking down Trump’s IEEPA tariffs, the executive branch lacks the inherent authority to force companies to choose between immediate cancellation of all federal contracts and submission to the DoW’s demands (Learning Res., Inc. v. Trump, 2026 WL 477534 (2026)).
Second, even assuming the executive branch has inherent constitutional authority here, Anthropic says that supply chain risk designations have always been meant to apply to foreign contractors and subcontractors that can sabotage information systems on behalf of an adversary, traditionally circumscribed to China, Russia, Iran, North Korea, Cuba, and Venezuela. As a U.S.-incorporated and U.S.-headquartered company that has never demonstrated a history (let alone a “long-term pattern or serious instances of conduct significantly adverse to the national security of the United States”) of adversarial subversion or sabotage, the supply chain risk designation shouldn’t apply regardless of congressional authorization.
The DoW’s actions caused immediate and severe harm in violation of the Fifth Amendment’s guarantee of due process. Anthropic argues that the sudden ultimatum and debarment (excluding it from federal contracts) without any factual findings or evidence as to why the company was a supply chain risk constitutes a de facto punishment without any hearings or opportunities for redress.
What the government might argue
The government has not yet released a public legal response to Anthropic’s lawsuit, but the DoW is likely to argue that the federal government has broad discretion over defense procurement contracts. To justify the specific use of the “supply-chain risk” designation under 10 U.S.C. § 3252, the DoW would contend that ideological restrictions create a literal, physical risk to operations. If a military operator inputs a prompt that violates Anthropic’s usage policies during a live mission, the software could theoretically shut down or refuse to answer. The government views this unpredictability as a critical vulnerability. As the Chief Technology Officer for the DoW told CNBC,
“We can’t have a company that has a different policy preference that is baked into the model through its constitution, its soul…pollute the supply chain so our fighters are getting ineffective weapons, ineffective body armour, ineffective protection.”
Who is likely to win? And why does this matter for international security?
For a number of reasons, it seems as though Anthropic has a solid case in its request for relief from the supply-chain risk designation. First, Anthropic has a strong argument on the count of ultra vires and statutory overreach. Legal analysis by Alan Rozenshtein at Lawfare shows that Secretary Hegseth’s primary weapons are 10 U.S.C. § 3252 and the Federal Acquisition Supply Chain Security Act (FASCSA).
Section 3252 was designed to combat foreign espionage. The statute defines a supply chain risk as an adversary acting to “sabotage, maliciously introduce unwanted function, or otherwise subvert” a system. A domestic American company transparently refusing to waive its terms of service regarding autonomous weapons and mass surveillance does not constitute covert hostile action or sabotage. In addition, FASCSA, which the government relied on to require every federal agency to cut ties with Anthropic, mandates a 30-day notice period for public comment. That notice was not provided.
Next, Anthropic has a strong argument on the Administrative Procedure Act count and is likely to successfully argue that the DoW’s actions lack logical coherence and reasoned decision-making. Precedent strongly favors Anthropic here; in Luokung Technology Corp. v. Department of Defense and Xiaomi Corp. v. Department of Defense, federal courts enjoined (ordered a stop to) DoD designations that lacked an adequate factual basis or procedural due process. In Anthropic’s case, I find the arbitrary and capricious nature of the DoW’s actions equally clear. The Pentagon’s logic is internally inconsistent: it cannot simultaneously brand Anthropic as a grave national security threat while threatening to invoke the Defense Production Act to seize its technology—all while keeping Claude integrated into military operations for a six-month transition period.
While courts may vacate Anthropic’s “supply-chain risk” designation and the commercial boycott, I don’t think the courts will force the military to retain Anthropic as a vendor. If the Pentagon formally determines that its operational R&D and combat deployment require AI models that allow for “any lawful use,” that is a routine procurement decision. The DoW is perfectly within its rights to simply decline to renew Anthropic’s contract and pivot to a competitor like OpenAI that is willing to meet those operational specifications. And indeed, it already has. On March 2, OpenAI announced that it had struck an alternate agreement with the DoW for the use of its models.
The DoW-Anthropic fallout reflects the significant leverage that private tech companies have in shaping norms for the ethical application of AI in military contexts (but only if they work together). As I outlined in last week’s BLUF, the integration of AI into drone swarms, kamikaze loitering munitions, and other autonomous systems may have a serious impact on the offense-defense balance. In other words, if you can manufacture a low-cost, highly intelligent, and adaptive autonomous weapons system, it may lower the “blood-and-treasure” threshold for launching an attack. Furthermore, the integration of frontier AI technology into kinetic weapons systems may fuel the already-accelerating global AI arms race. Finally, as recent years have shown, there has been a broad centralization and extensive use of executive authority to guide the ways in which wars are fought (the Iran War was largely a unilateral action by the U.S. and Israel). Given this, the room where AI governance decisions are made may begin to shrink, especially in the context of AI in civil-military fusion.