The Problem with AI in War is Much Bigger Than Anthropic’s Fight with the DoD
Madeline Batt / Mar 26, 2026

Madeline Batt is the Legal Fellow for the Tech Justice Law Project.

Kathryn Conrad / AI Kill Chain / CC-BY 4.0
On Tuesday, lawyers for Anthropic and the Department of Defense faced off in the first hearing of a contentious lawsuit that has provoked newfound scrutiny of AI in warfare. From their arguments, it might appear that the dangers of AI in warfare remain theoretical. But militarized AI has already unleashed a humanitarian crisis, and both parties are complicit.
Anthropic sued the Department of Defense (“DoD”) after Secretary of Defense Pete Hegseth designated Anthropic a supply chain risk for refusing to permit the use of its AI model for mass surveillance of Americans and autonomous weaponry. Anthropic alleges that the designation was an unconstitutional retaliation for its freedom of speech, violated the Administrative Procedure Act, deprived Anthropic of Due Process, and generally had no lawful basis. The company is seeking a temporary restraining order preventing the DoD from implementing the supply chain risk designation while the case is litigated.
In DoD’s opposing brief and in the Tuesday hearing, the agency doubled down on the designation. It emphasized the “privileged access” that AI developers continue to have to their models even after they are deployed in armed conflict because of models’ need for “constant tuning” (Anthropic disputed at the hearing that it has any ability to alter models post-deployment). DoD insists that the supply chain risk designation is justified because it “cannot trust Anthropic” in this sensitive role.
Neither party acknowledges that their collaboration is constrained by international humanitarian law obligations beyond their own assessments of Claude's safety in warfare—and that those obligations are being violated even as the case is litigated. As Anthropic and DoD argue over Anthropic's red line on lethal autonomous weapons in a San Francisco court, the US is wielding Claude to commit war crimes in Iran by mass-approving AI-generated strike targets without meaningfully reviewing their impact on civilians, as the laws of war require. Deploying Claude without full autonomy has not mitigated the horror or the unlawfulness of the United States' AI-enabled assault.
The key reason to adopt AI in war has been succinctly articulated by Hegseth: "speed wins." Militarized AI can dramatically compress the "kill chain" (the process of identifying, tracking, and ultimately killing a target) from weeks or days to mere seconds. Models need not be fully autonomous to achieve this. By analyzing large volumes of surveillance, intelligence, and real-time combat data, and then near-instantaneously recommending targets to kill based on that data, AI transforms warfare, even if a human officer makes the final decision to strike. In practice, human operators have bowed to the push for speed, spending seconds rubber-stamping AI recommendations rather than taking the time to meaningfully review the underlying data. This occurs despite widespread awareness that AI models can "confidently provid[e] incorrect information" and, in Anthropic's case, the company's own acknowledgment that its model is not reliable enough to independently determine who lives and who dies.
As we saw from Israel’s use of militarized AI in Gaza, these systems enable destruction at unprecedented speed and scale. By one calculation, Israel bombed an "astonishing" two targets per minute in Gaza at the height of its aerial assault. Humans were technically responsible for approving the AI-selected targets, but they did so in just seconds—sometimes, the only review of the AI output was confirming a target was male.
Now, in Iran, strikes enabled by Claude are proceeding "quicker than the ‘speed of thought’ … amid fears human decision-makers could be sidelined." Already, over 1,000 civilians are estimated to have been killed. These lives lost do not appear in either Anthropic's or DoD’s filings before the court.
This mass AI-enabled killing is not just a moral failure. As human rights and tech justice organizations, including the Tech Justice Law Project, argue in a brief before the court, it is a war crime.
Despite Hegseth's insistence that the US military respects no "stupid rules of engagement," the United States is in fact a party to the Geneva Conventions and has codified them through the War Crimes Act. The US and its contractors must abide by the fundamental obligation to distinguish between civilians and combatants, to ensure that their attacks are justified by military necessity, and to protect civilians from disproportionate harm. These requirements — the principles of distinction, necessity, and proportionality — are cardinal principles of international humanitarian law.
When soldiers rubber-stamp an AI model's kill list, failing to meaningfully consider these principles in their race to strike fastest, they violate international law. If such conduct occurs systematically, it could rise to the level of a crime against humanity. For grave international crimes of this kind, members of the US military and Anthropic employees alike could face prosecution in domestic, foreign, and even international jurisdictions.
Anthropic’s legal challenge pits a tech company's red lines against the Trump Administration's demand for unconstrained tech power, but neither party should be the arbiter of safe and lawful AI in war. In its complaint, Anthropic argues that the law has not caught up with the pace of AI development. While it is true that we urgently need AI regulation, emerging technologies are not beyond the reach of existing law. A war crime committed using AI is still a war crime.
The dangers of militarized AI are already here, and they are not limited to lethal autonomous weapons. Anthropic’s meager guardrails, even if implemented by DoD, would utterly fail to contain the technology’s human cost. To effectively resist the harms of militarized AI, we must look beyond the parties’ dispute and enforce their humanitarian obligations. Understanding the true stakes of militarized AI is a matter of life and mass death.