Europe’s AI Act Leaves a Gap for Military AI Entering Civilian Life

Raluca Besliu / Mar 10, 2026

European Commission President Ursula von der Leyen surveys Europe’s Eastern border regions from a helicopter. Source: @vonderleyen / X

As Europe moves to scale up defense investment and technological capacity, AI is quickly becoming central to the EU’s vision of strategic autonomy.

For Commission President Ursula von der Leyen and other EU leaders, AI is increasingly framed as a strategic capability — alongside energy security and semiconductor supply chains — particularly as the wars in Ukraine and the Middle East reshape threat perceptions and investment in defense tech intensifies.

At the January AI in Defence Summit in Brussels, Commissioner Kubilius acknowledged the EU is "behind on investments" compared to the US and China, while stressing that Europe is "radically increasing defense investments."

Yet the EU’s regulatory approach to AI sits uneasily with these ambitions.

While the European Commission is finalizing implementation guidance for the AI Act, the bloc’s defense sector is already developing and deploying AI systems under a sweeping military exemption that places them outside the Act’s risk-based framework.

That gap raises a broader question: who decides where the line falls between civilian AI and military use?

The issue is already surfacing in debates between governments and AI developers. In the United States, AI company Anthropic has resisted pressure from the Pentagon to remove safeguards that prevent its models from being used for autonomous weapons or mass surveillance.

As governments seek deeper partnerships with frontier AI companies, the boundary between civilian technology and military application is becoming increasingly blurred — leaving regulators in Europe and elsewhere struggling to define how, and where, oversight should apply.

Blurred lines between military and civilian use

Justinas Lingevičius of Vilnius University's Institute of International Relations and Political Science warned that the military exemption leaves "military-related AI development and use largely outside regulation" while the line between civilian and military AI is becoming "increasingly blurred."

The blurring is especially evident with dual-use systems, "which the AI Act does not explicitly address, since its rules will primarily apply to civilian uses," according to Lingevičius. Some European companies, the expert noted, develop military AI while also providing solutions for law enforcement, which itself enjoys exemptions under the AI Act.

Legal research group Lexify said the exclusion of military systems is far from trivial. “A system may initially fall outside the AI Act because it is developed for military purposes, but if it is then used, even briefly, for civilian purposes, the AI Act applies,” said Emanuele Gambula of Lexify.

Military AI systems may be trained on classified, non-consented, or operationally biased datasets, conditions that would violate civilian AI requirements for lawful, representative, and well-documented data.

Frank Slijper, who leads the Arms Trade project at the civil society network Forum for Arms Trade, warned that systems developed under such conditions risk failing to meet EU standards if later repurposed for civilian use.

A second concern relates to oversight across the system’s lifecycle. A system developed exclusively for military purposes would not have undergone the AI Act’s conformity assessment, which demonstrates compliance with rules on risk management, data governance and cybersecurity.

Aljosa Ajanovic, a policy advisor at digital rights group EDRi, argued that assessing compliance only when a system enters civilian markets comes too late. By that stage, he said, regulators cannot fully examine how data was collected, how models were trained, or what assumptions were embedded during development.

Gambula disputed the idea that military AI operates without governance, noting that defense systems are already subject to procurement, certification and operational standards that are “de facto mandatory.” But Slijper remained skeptical, warning that constantly evolving military AI systems would require ongoing monitoring to ensure compliance with civilian requirements.

This concern is compounded by the structural character of military AI itself. Ajanovic pointed out that such systems are "designed around surveillance, threat detection, treating people as potential threats," with "architectures and data practices shaped by secrecy and unlimited power." These design logics do not disappear when a system crosses into civilian deployment.

There is also a well-documented pattern of repurposing to contend with. Ajanovic noted a "continuous demand for adapting the technology from the security and military arena to other fields": surveillance tools built to detect terrorism, for instance, are routinely repurposed to monitor street activism.

Slijper argued that facial recognition illustrates the broader danger. The technology is “inherently controversial,” he noted, yet it is already being deployed at scale through facial recognition firms such as Clearview AI and Corsight. The former is reportedly working closely with ICE in the United States, the latter with the Israeli military.

In Israel, Unit 8200, the military’s intelligence division, has drawn on Corsight’s systems alongside Google Photos to help identify potential targets. Corsight claims its software can accurately recognize individuals with “less than 50 percent of a face visible,” even from “poor quality” images — a capability that, critics like Slijper argued, raises profound concerns about misidentification, accountability and the downstream consequences of errors once such systems migrate beyond tightly controlled military contexts.

Lingevičius echoed these concerns, arguing that the conformity assessment process itself "raises further questions,” particularly about "what and how military AI systems can be repurposed and for which civilian domains."

For Ajanovic, this is precisely why the priority must be upstream: ensuring "meaningful prohibitions, safeguards, oversight and human rights protections for AI systems used in military and national security contexts themselves."

Without that, he warned, "technologies developed under opaque and exceptional conditions can be introduced into our lives without ever having been subject to the transparency, accountability, and fundamental rights standards that EU law is meant to guarantee."

Funding flows to operational systems

These risks are not purely theoretical. Several EU-funded defense projects are already developing AI systems with potential dual-use applications.

EU initiatives financed by the European Defense Fund (EDF) include FaRADAI (EUR 18 million), which develops adaptive AI for low-data military environments with the involvement of multiple companies, including HENSOLDT, and EU-GUARDIAN (EUR 13.5 million), led by Spain's INDRA SISTEMAS SA, which builds an AI-based automation system for cyber incident management across military networks.

Both companies develop dual-use AI. INDRA's "IndraMind" is designed for the automation of critical operations in defense while also targeting the modernization of civil infrastructures. HENSOLDT has developed mission computers that can integrate military sensors into civilian aircraft avionics — systems the company confirms have been used operationally, though it says neither its mission computers nor military sensors have been deployed for civilian purposes.

That may be true for now. But the question neither company has answered is what internal controls exist to ensure that knowledge, data, and tools developed under military-funded programs do not flow into civilian applications, or vice versa. HENSOLDT declined to respond substantively, directing all FaRADAI questions to the project coordinator and the relevant EU authority.

The governance gap

Even if such standards existed, enforcing them would be another matter. Unlike the US Department of Defense's Chief Digital and AI Office, which maintains centralized oversight from development through deployment, the EU has no single institution with authority to track what happens to EDF-funded AI systems once projects conclude.

"The EU certainly cannot operate like the US because constitutional and sovereignty constraints prevent centralized oversight," Gambula explained.

The Commission confirmed the limits of its role. Spokesperson Regnier emphasized that the EDF "is a research and development instrument of the EU when it comes to the defence industry and does not go further than that."

Gambula nevertheless pointed to ways of narrowing these gaps. "There are common frameworks at the European level that national competent authorities can choose to adopt," he explained.

Such shared reference points include the European Defense Standards Reference System (EDSTAR) for technical interoperability, and PESCO commitments for cooperative capability development.

“For EU-funded projects, these frameworks can serve as a shared baseline, while the final decision always remains with the member state,” Gambula added.

Without centralized post-project oversight, the EU has no mechanism to ensure EDF-funded systems comply with international humanitarian law or the values it claims to uphold.

Questions of regulatory coherence

AI in defense is increasingly viewed by EU leaders as a key dimension of European technological sovereignty.

But as dual-use AI systems and military-civilian crossovers proliferate, the boundary between exempted security applications and regulated civilian uses becomes increasingly porous.

What legal and governance mechanisms, if any, will ensure that military AI development aligns with the safeguards applied to civilian systems under EU law?

Until Brussels clarifies how civilian AI regulation under the AI Act interacts with the largely exempt defense and national security domain, the EU risks developing parallel approaches to AI governance.

Authors

Raluca Besliu
Raluca Besliu is a freelance journalist from Romania, whose areas of focus include tech developments, human rights, and the environment.
