Europe’s Deregulatory Turn Puts the AI Act at Risk
Laura Lazaro Cabrera / Jun 3, 2025

In August 2024, the EU’s Artificial Intelligence Act entered into force after years of relentless negotiations, multiple trilogues, and countless amendments. Only a month later, with the ink barely dry, the AI Act had a target on its back: former European Central Bank President Mario Draghi’s report on the future of European competitiveness had already named it as an example of a regulatory barrier onerous for the tech sector. This was only the beginning.
In its communication on implementation and simplification, the European Commission announced its intention to pursue no fewer than five simplification initiatives, including one encompassing the EU digital rulebook, to assess whether “the expanded digital acquis adequately reflects the needs and constraints of businesses such as SMEs and small midcaps.” A footnote specified that this assessment was meant to cover the AI Act, alongside other significant targets, including the General Data Protection Regulation (GDPR), which has recently been the subject of proposed amendments.
Concerns that the AI Act could be weakened intensified shortly thereafter, following the European Commission’s remarks at the AI Action Summit in Paris, where Commission President Ursula von der Leyen promised to cut red tape. That same evening, the AI Liability Directive, a draft legal framework initially proposed to complement the AI Act by establishing liability rules for damage caused by AI systems, was scrapped from the European Commission’s work program for 2025, prompting backlash from European civil society and members of the European Parliament. Concerns that the AI Act would be reopened were somewhat assuaged by the European Commission’s AI Continent Action Plan, which identified simplification as a core pillar of this mandate’s agenda. As a first step, the Plan announced the establishment of an AI Act Service Desk to provide practical compliance guidance, interactive tools, and direct support for startups and small and medium-sized enterprises (SMEs). The Plan made no mention of revisiting the text of the AI Act, focusing instead on clarifying obligations for regulated entities.
Yet the writing was on the wall. In the consultation directly connected to the plan, still open at the time of writing, the European Commission invites stakeholders to identify regulatory challenges and propose measures to facilitate compliance and potentially simplify the AI Act, paving the way for further deregulatory efforts. More recently, the European Commission has reportedly been considering postponing the entry into application of the AI Act.
Simplification is a dangerous misnomer
In previous statements, the European Commission had indicated that the primary target of simplification would be the reporting obligations under the AI Act. While the Commission has since made clear that any changes to the AI Act would be targeted rather than a substantial reopening, there is a real possibility that even targeted changes could have wide-ranging implications, adverse to the safe development and deployment of AI. For example, a key reporting obligation formalized in the AI Act concerns serious incidents: incidents leading to serious harm to individuals, property, or the environment, or to the infringement of fundamental rights obligations, which must be reported to the relevant authorities.
While these notification obligations could easily be dismissed as mere reporting requirements, the reality is that they could play a key role in identifying and mitigating real harms stemming from AI that earlier risk management efforts may have missed. It is concerning to hear that the AI Act’s reporting obligations may be simplified when, arguably, they did not go far enough in the first place. For example, providers of AI systems can remove their systems from the high-risk category where they consider that the systems do not pose a high risk despite meeting the criteria set in the Act, one of the key remaining loopholes in the law, without being required to notify a regulator. While providers choosing to “opt out” of the high-risk categorization must still prepare documentation proving that their system does not fall within the high-risk category, whether they are ever actually required to produce that documentation will inevitably depend on a regulator asking for it in the first place.
An argument often raised in favor of simplification is that the obligations contained in the AI Act allegedly overlap with obligations found elsewhere in the EU digital rulebook, most notably the GDPR. Similar claims were made and debunked in the Code of Practice process for general-purpose AI models, where changes rolling back fundamental rights protections were justified on the basis that other laws provided adequate coverage. These arguments at best overestimate, and at worst inflate, the applicability, relevance, and suitability of other laws to address the concerns specific to AI. When leveraged against the AI Act as a whole, such claims tend to overlook the fact that the AI Act explicitly addresses these intersections and how regulated entities should approach them. The data governance obligations for high-risk AI systems specifically indicate how they should be met in compliance with the GDPR. The obligation for public authorities deploying these AI systems to conduct a fundamental rights impact assessment notes its complementarity with the GDPR’s data protection impact assessments.
Even if one were to concede that some aspects of the AI Act’s intersection with existing laws could be better addressed, the Act itself provides an avenue to do so: guidelines on the relationship of the AI Act with other relevant EU laws, which are under development at the time of writing. Any calls for changes to the AI Act under the guise of “simplification,” beyond the cross-sectoral clarity and coherence that these guidelines are intended to provide, should be approached with caution.
Evidence-based rule-making must remain at the heart of any review
While the exact nature and scope of any proposed amendments to the AI Act have yet to be defined, the possibility of reopening a law, most of which has yet to enter into application, signals a concerning alignment with open-ended industry calls for simplification. In a document titled the “EU Economic Blueprint,” US-based OpenAI notes that the “sheer breadth and quantity of EU regulations hamper innovation, slow economic growth, and pose an existential challenge” to the EU’s future, urging policymakers to assess which “rules strengthen the EU’s AI sector and should be preserved, versus which ones are holding it back and should not.”
In Europe, the EU AI Champions Initiative, which is backing €150 billion of the €200 billion InvestAI package announced by the European Commission for AI innovation, has made similar arguments. Its position paper alleges that the AI Act has created market uncertainty through unclear risk categorization, “causing businesses to hesitate in AI adoption.” Several other trade associations and companies have echoed similar arguments, building pressure on the European Commission to apply a broad-brush approach in its review of the AI Act. Experience indicates that there is a real risk of this approach materializing in the context of the AI Act.
The first simplification initiative brought forward this year, the first omnibus package targeting the directives on corporate sustainability reporting and due diligence, is a good example of how seemingly narrow amendments can result in a significant dilution of the targeted legal frameworks. Originally intended to reduce overlapping obligations, the changes stripped away the core purpose of the laws and were put forward without an open or robust consultation, leading a coalition of environmental NGOs to bring a complaint before the European Ombudsman, who has now opened an inquiry. What took place was not “simplification” but full-scale deregulation, based on the input of a select group of stakeholders in which industry representatives and their interests were drastically overrepresented. The approach not only flouted the EU’s better lawmaking guidelines, but also resulted in amendments that overwhelmingly benefited industry actors at an enormous cost to individuals and fundamental rights.
The European Commission cannot allow these mistakes to be repeated and must apply the lessons learned. At a time when the European Union seeks to assert its sovereignty, the European approach to AI innovation should be firmly rooted in EU values and fundamental rights, and any review of the existing legislative framework should be premised on a rich body of evidence beyond generalized industry resistance to rules.
As threats to the AI Act mount, decision-makers should ensure that any amendments under consideration are broadly consulted on and their impact robustly assessed before a proposal is laid down. Failure to do so will threaten the core strengths of the EU digital rulebook and the hard-fought fundamental rights protections secured in the AI Act.