A Timeline of the Anthropic-Pentagon Dispute
Justin Hendrix / Feb 25, 2026

This tracker was last updated on April 10.

Defense Secretary Pete Hegseth speaks during a cabinet meeting at the White House, Thursday, Jan. 29, 2026, in Washington. (AP Photo/Evan Vucci)
Over the last several weeks, Anthropic and the Pentagon have been engaged in a dispute over what Reuters describes as "usage restrictions for military purposes" — limits the AI company says are necessary. The company is concerned about the use of its technologies for applications such as autonomous weapons and mass surveillance.
On February 24, US Defense Secretary Pete Hegseth gave Anthropic cofounder and CEO Dario Amodei a deadline: relent by 5:01 p.m. on Friday, February 27, and allow unrestricted use of the company's AI models “for all legal purposes.”
Anthropic released a statement on Thursday, February 26 indicating it would not budge. Then, on Friday, February 27, President Donald Trump directed federal agencies to cease using Anthropic’s products, and Hegseth designated the firm a supply chain risk.
On Wednesday, March 4, the Financial Times reported that Anthropic had reopened talks with the Pentagon, and The Washington Post reported that Claude is being used in the ongoing war against Iran.
On Monday, March 9, Anthropic sued the federal government in the Northern District of California, arguing the administration’s actions had caused it irreparable harm and requesting an injunction against the supply chain risk designation. On the same day, it filed a separate suit against the government in the US Court of Appeals for the District of Columbia.
At a hearing on Tuesday, March 24 in San Francisco, Northern District of California Judge Rita F. Lin said she found the Pentagon’s actions against Anthropic “troubling,” and questioned whether the designation of the company as a “supply chain risk” was “tailored” to the government’s national security concerns. “If the worry is about the integrity of the operational chain of command, [the Department of Defense] could just stop using Claude,” she said.
On Thursday, March 26, Judge Lin issued an order granting a preliminary injunction blocking the United States government from enforcing its ban on doing business with Anthropic and its “supply chain risk” designation of the company. In the 43-page ruling, Judge Lin found the government likely violated the law by taking retaliatory action against Anthropic after the company publicly refused to allow its technology to be used in autonomous lethal weapons or for mass surveillance — applications it deemed “red lines.”
On Wednesday, April 8, a three-judge panel of the US Court of Appeals for the D.C. Circuit denied Anthropic’s request for a stay, permitting the government to maintain its supply chain risk designation. The judges acknowledged that the company could face harm due to the designation, but noted there are “weighty governmental and public interests on the other side of the ledger. Most obviously, granting a stay would force the United States military to prolong its dealings with an unwanted vendor of critical AI services in the middle of a significant ongoing military conflict.”
The two cases will now proceed in parallel.
The dispute raises a variety of political, legal, policy and ethical questions, and its outcome could set an important precedent for the relationship between AI firms and the US government.
In order to provide readers with a summary of what is known about the dispute, the following timeline contains links to news reports and other useful materials. This timeline will be updated and additional resources added to it as events unfold.