Perspective

Digital Rights Are on the Chopping Block in the European Commission’s Omnibus

Daniel Leufer / Nov 19, 2025

Daniel Leufer is a Senior Policy Analyst at Access Now’s Brussels office and Emerging Technologies Policy Lead.

European Commissioner Henna Virkkunen talks to European Commission President Ursula von der Leyen at the European Parliament in Strasbourg on September 10, 2025.

On Wednesday, the European Commission launched its proposal for a Digital Omnibus. The stated aim of the Digital Omnibus package is to ‘simplify’ the European Union’s digital rulebook to ease compliance burdens for industry. The Commission has repeatedly promised that its proposals will not lower the level of protection for fundamental rights, stating that its proposed changes “are not expected to modify or have negative impacts on the underlying acts as regards other areas such as the protection of fundamental rights or the environment.”

Not that digital rights advocates ever believed these assurances, but they were badly undermined last week, when draft versions of the two Omnibus texts (one focused on data, the other on artificial intelligence) were leaked, and more or less confirmed as empty this Wednesday with the official release, which surpassed even advocates’ worst fears: under the misleading banner of simplification, the Commission is proposing nothing short of the dismantling of core safeguards and fundamental principles of the EU’s digital rulebook.

It’s important to note that Access Now is not against simplification per se; nobody wants overly complex laws, and indeed we have long advocated for measures to simplify the procedures and processes through which digital laws are enforced and people exercise their rights. What we are against is what this Digital Omnibus actually proposes: the deliberate weakening of fundamental rights safeguards solely to cut compliance costs for businesses.

Delays, watering down, and more delays

So what precisely is the European Commission proposing, and how will it play out for digital rights protections in the EU and further afield? The first thing to note is that the most consequential changes are those proposed to the General Data Protection Regulation (GDPR) and the ePrivacy Directive, which commentators on last week’s leaks have shown would have a disastrous impact on people’s rights.

But it’s also crucial to look at what’s being proposed for the Artificial Intelligence (AI) Act, particularly an amendment that has attracted less attention than others. Much of the public discussion has focused on the so-called ‘stop the clock’ proposal, which would delay the implementation of key parts of the AI Act. The Commission is proposing to postpone the application of the rules for high-risk AI systems by up to 16 months, supposedly because the technical standards being prepared by CEN-CENELEC are delayed, although rights groups have pushed back on this excuse.

While these proposals are bad, they fundamentally only delay, rather than radically change, the AI Act’s obligations. By contrast, the most disastrous amendment proposed to the AI Act concerns a basic transparency requirement: that providers who exempt themselves from all legal obligations for high-risk AI systems must make this publicly known. Despite its seriousness, this change wasn’t even mentioned in the Commission’s press release. Yes, that’s right: Article 6(3) of the AI Act contains an exemption that allows the provider of a high-risk AI system to exempt themselves from all obligations if they think their system doesn’t really pose a risk, and it gives them four very broad criteria for making that assessment. Until now, the only safeguard against abuse of this exemption was that any provider who availed themselves of it was obliged to publicly declare that they were doing so. The Commission now proposes to remove that basic bit of transparency. But where did this exemption come from in the first place?

Under the original AI Act proposal, there was a two-step process to decide whether a provider’s system was high-risk and therefore subject to obligations. First, does it fit the AI Act’s definition of an AI system? If yes, does the system’s intended purpose match one of the high-risk use cases outlined in Annex III? If it does, the system is considered to pose a high risk to health, safety, and fundamental rights, and is subject to a series of obligations.

It’s worth stressing that these obligations are not groundbreaking; they are basic practices that many responsible developers already follow. What the AI Act proposed was to set a baseline of responsible development practices for these high-risk use cases. The positive impact on the EU market should have been that responsible developers would no longer be at a disadvantage for spending extra resources on making their systems transparent, reliable, and well-documented.

At some point during the negotiations, however, various proposals emerged to add a third step to the high-risk classification process: allowing providers to decide, on their own, whether their systems really posed a high risk to health, safety, and fundamental rights, and, if they decided they didn’t, to simply opt out of the regulation. At the time, this seemed too absurd, too obviously a concession to the worst parody of industry’s anti-regulation lobbying. And yet, in a depressing testament to the influence of that lobbying, a version of this amendment made it into the European Parliament’s negotiating position and, ultimately, into the final text of the AI Act as Article 6(3).

Access Now and others fought constantly against this loophole, warning that it would create serious legal uncertainty as to which systems count as ‘high risk’, fragment the EU single market, leave Member State authorities facing severe challenges in enforcing the legislation, and allow unscrupulous developers to dodge the law’s basic requirements. Even the Parliament’s Legal Service issued a highly critical opinion on it, noting numerous problems, such as the high degree of subjectivity it would introduce into the classification process, with all the legal uncertainty that entails.

In the final negotiations, this loophole was a key pain point, and it remained in the text only because of a minimum compromise: providers who availed themselves of the exemption would be required to register that fact in a publicly viewable database. Let’s be very clear: the Article 6(3) exemption was an enormous victory for industry lobbying and left a gaping hole in the AI Act’s high-risk classification process. The Commission is now proposing to remove the one safeguard with any chance of preventing widespread abuse of this exemption, a move that fits depressingly well into what has already been identified as a broader plan to make the AI Act a carte blanche for the indiscriminate use of AI systems, especially in the context of security and migration.

If this goes ahead, neither national market surveillance authorities (MSAs) nor the AI Office will have any way of knowing which providers have exempted themselves. They will have no overview of how many exemptions are claimed each year, in which categories, or according to which criteria. Unscrupulous providers of high-risk systems will be incentivized simply to opt out of the AI Act, while responsible providers who recognize the benefits of doing things above board will be at a disadvantage in the market.

This amendment manages to make the worst loophole in the AI Act even worse, and the fact that the Commission is willing to go this far in bending the knee to industry’s most absurd demands is symptomatic of a much broader phenomenon: the progressive dominance of the risk-based approach to digital regulation and the erosion of rights-based safeguards.

In February 2021, two months before the launch of the AI Act proposal, Access Now warned against the pitfalls of risk-based approaches to digital regulation. We noted that a risk-based approach “would have companies evaluate their operational risks vs. people’s fundamental rights,” which is “a fundamental misconception of what human rights are; they cannot be put in a balance with companies’ interests.”

We also cautioned that companies would “have an interest in downplaying the risks in order to develop products.” All of these worries were borne out by the direction of the AI Act negotiations, and will be further exacerbated by the Digital Omnibus.

What we didn’t predict was that the flawed risk-based approach of the AI Act would be retroactively applied to the GDPR, which we had contrasted with the AI Act precisely because of its strong, rights-based approach. But that is exactly what now seems to be on the table. The Digital Omnibus proposes limitations on the exercise of data subjects’ right of access, a key feature of the GDPR, giving the data controller more discretion to determine which requests should be honoured.

Indeed, among the responses to the call for evidence on the Digital Omnibus, we even saw proposals for a full overhaul of the GDPR to make it as risk-based as the AI Act, a call echoed just yesterday by the German and French governments in a joint statement on digital sovereignty urging the Commission to apply the risk-based approach to the GDPR as part of its simplification agenda.

Shifting power from people to profit

What all of these changes point to is a shift away from empowering people and towards granting discretion to business. What makes the GDPR truly disruptive is that its rights-based approach puts power into the hands of data subjects, of people, and gives them tools to fight back against tech giants, powerful government agencies, and anyone else who uses their data to surveil, track, or control them. In a broad sense, shifting towards a risk-based approach to digital regulation tends to grant discretion to powerful actors and creates a maze of loopholes, exemptions, and exceptions that all, ultimately, function as ways for those actors to avoid accountability.

Many of us thought the foundation laid by the GDPR was a solid first step towards a fairer digital future, but all bets are off now that the Commission has proposed such profound changes. Seeing how far the Commission is willing to bend will embolden those with vested interests in diluting or removing fundamental rights protections. And with the ‘simplification agenda’ far from over, other digital regulations may be up for debate before too long.

The Commission seems to have chosen sides, prioritizing the needs of industry and demonstrating an utter disregard, if not contempt, for fundamental rights. And for those hoping that the European Parliament could save the day by standing up to this regulatory destruction, this month’s vote on the first Omnibus package, which slashed environmental safeguards, should have thoroughly shattered that hope: the European People’s Party (EPP) broke with the traditional alliance of centre and centre-left parties and voted with the far right, confirming that its lurch towards the far right has hardened into full-blown alignment.

Political leaders in the European Union seem committed to slashing safeguards and prioritizing innovation at any cost over people’s rights. It’s now up to those who care about protecting fundamental rights to raise their voices and pursue every advocacy, procedural, and legal avenue to turn the tide on this bonfire of safeguards.
