EU Set the Global Standard on Privacy and AI. Now It’s Pulling Back

Ramsha Jahangir / Nov 10, 2025

Henna Virkkunen, Executive Vice-President for Tech Sovereignty, Security and Democracy of the European Commission, photographed in February 2025.

Europe is preparing to roll back parts of its landmark digital rules, long seen as global benchmarks for privacy and AI. On November 19, the European Commission is expected to unveil the “Digital Omnibus,” a package of reforms that could reshape the General Data Protection Regulation (GDPR), the AI Act, and the ePrivacy rules.

The plan is presented as a way to simplify compliance and reduce bureaucracy for small and medium-sized companies. It follows a report released a year ago by former Italian Prime Minister Mario Draghi, which warned that Europe’s complex laws are stifling innovation and holding the region back in global competition with the US and China.

The Digital Omnibus also comes at a moment of intense geopolitical pressure. Politico reported that in May, European Commission Executive Vice President Henna Virkkunen met with top US tech executives to pitch a more business-friendly Europe and highlight plans to simplify digital rules, the same reforms now being rolled into the Omnibus.

Leaked drafts show the stakes go far beyond paperwork. The proposed changes could weaken core data protections, give tech companies more leeway in using European data, and slow down the enforcement of Europe’s AI rules.

The outcome matters far beyond Europe. For a decade, the GDPR set a global standard for privacy that influenced laws from Brazil and India to California. If Brussels now reverses course, the ripple effects could reshape how data protection and AI regulation are approached worldwide.

What’s at stake

Core principles of the GDPR

According to an analysis by Austrian privacy NGO noyb, the leaked draft of the Omnibus could significantly weaken GDPR protections. It narrows the definition of personal data, meaning information that cannot directly identify an individual might no longer count as personal, even if it could be linked with other data. This would strip many pseudonymous identifiers, such as ad IDs and cookies, of GDPR protection, paving the way for more tracking and profiling.

The draft also limits when people can exercise their rights to access, correct, or delete data, restricting them to “data protection purposes.” In practice, this could block workers, journalists, or consumers from using data requests in disputes or investigations.

Sensitive categories of data — including health status, political views, or sexual orientation — would only be protected if explicitly disclosed, not inferred. This represents a major shift from existing European court rulings, which safeguard people from profiling based on deductions.

On top of that, the Omnibus draft introduces a “legitimate interest” exception allowing companies to use personal data, including some sensitive information, for AI training, provided unspecified safeguards are in place. Under these rules, high-risk AI systems could process massive amounts of European data legally, while traditional data storage and processing, like databases or CCTV footage, remain tightly regulated.

Noyb warns this could give US and global tech companies freer rein to use European data for AI training or analytics. In practice, EU users would rarely know their data is being used, and objections would be nearly impossible to enforce.

"One part of the EU Commission seems to try overrunning everyone else in Brussels, disregarding rules on good lawmaking, with potentially terrible results,” said noyb founder Max Schrems, who has filed a string of GDPR complaints against major tech companies. “It is very concerning to see Trump'ian lawmaking practices taking hold in Brussels."

The AI Act could be slowed down

The EU’s landmark AI Act entered into force in 2024 and is applying in stages, but it will not take full effect until 2026. Reporting by MLex, Reuters, and the Financial Times indicates that the European Commission is considering changes that could delay enforcement and reduce transparency.

Under the proposals, companies deploying high-risk AI systems could receive a one-year grace period before fines and other obligations take effect. This would particularly benefit providers that already placed generative AI systems on the market, giving them time to adjust without disrupting operations. Draft documents also suggest postponing penalties for transparency violations, such as failing to clearly label AI-generated content, until August 2027. MLex reported that the package would also make compliance easier for companies and centralize enforcement through a new EU AI office.

Civil society groups warn that one of the most alarming changes would let companies unilaterally declare a high-risk AI system low-risk and bypass safeguards without notifying anyone. Article 6 of the AI Act lets providers self-assess AI risk and claim exemptions, with the only safeguard being public disclosure of their rationale; the amendments would remove the requirement for providers to register self-exempted systems in the EU database. Campaigners say eliminating this safeguard would undo a hard-fought 2023 compromise.

"The Commission's so-called simplification proposal will let loose unsafe AI systems in the EU that will threaten public safety and fundamental rights," said CAIDP President Merve Hickok. "The current reporting requirements in Article 6 are the bare minimum for AI accountability and transparency."

Folding ePrivacy into the GDPR

The long-delayed ePrivacy regulation, which controls how companies access data on users’ phones, computers, and other devices, could be merged into the GDPR under the Digital Omnibus. This would effectively move cookie regulation from a separate law into the broader privacy framework.

Currently, websites must get explicit consent before storing or accessing most cookies (think clicking “accept” on cookie banners). Under the proposed changes, companies could collect some data without asking first, either for a limited list of “low-risk” uses or under a broader legal basis called “legitimate interest,” which lets companies argue they can use data if it serves their business. This would shift Europe from an opt-in system to something closer to opt-out, where users must actively object to stop being tracked.

The European Commission says this would make things simpler for users and reduce banner fatigue. Privacy experts warn it could weaken privacy protections, giving companies and even governments broader access to data on devices without clear consent. Itxaso Domínguez de Olazábal of European Digital Rights (EDRi) said the proposals are “not only about cookies. It’s about whether platforms, data brokers, and governments get legal permission to look inside your device and your communications.”

What comes next

The proposal is still being discussed within the Commission and could change before November 19. Once the Commission adopts it, the package will head to EU governments and the European Parliament for approval.

Privacy advocates have criticized the fast-track process of the Digital Omnibus. While the GDPR took years to negotiate, public consultation on the Omnibus only concluded in October. According to noyb, some Brussels units had just five working days to review a 180+ page draft. The Commission has not prepared impact assessments, saying the proposed changes are “targeted and technical.”

Robin Berjon, technologist and fellow at the Future of Tech Institute, warned that the proposed reforms go beyond mere simplification. “We’ve seen the European Commission be weak on enforcement and hesitant to anger the American authorities, but the omnibus changes go much further,” he said in a press release. “American tech monopolies and intelligence agencies are the biggest beneficiaries of the surveillance economy and these changes strengthen their hand to instead actively sabotage European businesses and national security.”

