Perspective

Can the Digital Markets Act Free Users' Data in the AI Age?

Eliot Bendinelli / Dec 12, 2025

This post is part of a series of provocations published in Tech Policy Press following the third iteration of a Digital Markets Act (DMA) enforcement symposium hosted in Brussels on November 20 and 21 by the free-speech organization ARTICLE 19 in partnership with the Center for Digital Governance at the Hertie School, the University of Trento, the Amsterdam Center for European Law and Governance and the University of Namur.

Europe's Digital Markets Act (DMA) defines "gatekeepers" as large digital platforms that have a significant impact on the market, provide a "core platform service" that is an important gateway for business users to reach end-users, and enjoy an entrenched and durable position. Right now, no company providing an exclusively AI-powered service has been designated as a gatekeeper, and no AI assistant or AI agent has been designated as a core platform service. But with the DMA aiming to make digital markets "fairer and more contestable," and in light of the concentration of power observable in the AI space, there is a strong chance that such a designation will happen in the next few years.

For example, with 800 million weekly active users and 1 million business users, OpenAI's ChatGPT likely surpasses the DMA's designation thresholds of 45 million monthly active end-users and 10,000 yearly active business users in the EU. Of course, those are not the only criteria for designation, and many questions remain as to which service should be designated and under which category. ChatGPT the service is different from the Large Language Model (LLM) that powers it, which is different from the API giving access to that model. Even within ChatGPT, features like Operator, the company's browser agent, might qualify as a different service from its deep research or shopping features. But whether one or all of these services end up designated, one obligation will apply to any of them with a user-facing dimension: data portability.

The data portability obligation set out in Article 6(9) of the DMA plays an important role in fostering the emergence of alternatives to core platform services, such as social media, web browsers, video-sharing platforms and virtual assistants, because it facilitates user migration from closed services to competitors. By enabling users to automatically port all their relevant data from a platform to an alternative (read: create an account on a new service and authorize it to pull data from the service they are leaving), it effectively lowers switching costs and massively simplifies the process of leaving a gatekeeper's service.

One notable aspect of this obligation, especially compared to its equivalent in Article 20 of the GDPR, is that it's meant to be "continuous and real-time." In theory, this means users can keep using a service they are seeking to leave while feeding its data into the service they plan to join, or start using a new service while continuously receiving data from the one they left. In practice, this obligation has been implemented through APIs rather than the manual "takeout"-style tools that were the default solution for GDPR compliance, enabling automated workflows that can greatly facilitate migration. Compliance is still far from perfect (CODE has an excellent overview informed by companies that have tried to use these features), but it marks a positive step forward, as well as an interest from the European Commission in monitoring compliance.
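To make this concrete, here is a minimal sketch of what a "continuous and real-time" portability flow could look like from the receiving service's side. The endpoint, token and parameters below are hypothetical placeholders for illustration, not any real gatekeeper's interface:

```python
# A minimal sketch of a "continuous and real-time" portability flow, seen
# from the receiving service's side. The endpoint, token and parameters are
# hypothetical placeholders, not any real gatekeeper's API.
import time

import requests

PORTABILITY_API = "https://gatekeeper.example/portability/v1/export"
ACCESS_TOKEN = "user-granted-oauth-token"  # granted when the user authorizes the new service


def fetch_new_data(since: str) -> dict:
    """Pull only the data created since the last sync, rather than a one-off dump."""
    resp = requests.get(
        PORTABILITY_API,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        params={"since": since},  # incremental cursor keeps the transfer continuous
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


cursor = "1970-01-01T00:00:00Z"
while True:
    batch = fetch_new_data(cursor)
    for item in batch.get("items", []):
        ...  # ingest conversations, memories, settings, etc. into the new service
    cursor = batch.get("next_cursor", cursor)
    time.sleep(60)  # poll regularly; a production flow might use webhooks instead
```

The key difference from a GDPR-style one-off export is the incremental cursor: the receiving service pulls only what has changed since the last sync, so the user's data stays current on both sides.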

When it comes to AI-powered services, portability could be a critical tool to prevent the emergence of walled gardens and ensure easy migration between services, keeping the market "fair and contestable." AI assistants (or AI agents) in particular, as services that are deeply integrated into users' digital and physical lives, derive significant value from the data they accumulate and the tools they integrate with. Features such as ChatGPT's "memory," which stores a dynamic profile of users, including their interests, projects and preferences, can drastically change the experience of a service, increasing its relevance the more data it contains. Similarly, integrations with other services (dubbed "connectors" in Claude and ChatGPT and "apps" in Gemini) can extend the capabilities of AI assistants, enabling access to external information (say, an email inbox) that makes the service more useful.

Luckily, the current technical stack for AI tools greatly facilitates data portability. Prompts, memories and projects are essentially pure text, which is extremely simple to format and share. Even in services offering voice interaction, the data processed and stored by AI chatbots is usually the converted text rather than the audio. Accordingly, most AI chatbot services already offer a way to export this data, usually as a structured JSON file including prompts, responses, timestamps, references to files and additional metadata. As for integration with third parties, most services' connectors currently rely on the Model Context Protocol (MCP), an open-source protocol developed by Anthropic that standardizes connections to external services. Fundamentally, this means that connecting to your Gmail, Outlook calendar or GitHub account works the same technical way whether you are using a local AI chatbot tool like Jan or online services such as Claude or Copilot. This allows connectors to be easily ported from one service to another, potentially preserving configuration and permissions (e.g., whether the AI assistant can read emails but not send them).
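As an illustration, here is a simplified version of the kind of record such exports contain. The field names below are assumptions for the sake of example, not the exact schema of any particular service:

```python
# Illustrative only: a simplified record of the kind found in AI chatbot
# exports. The field names are assumptions, not the exact schema of any
# particular service's export file.
import json

export = """
{
  "conversations": [
    {
      "title": "Trip planning",
      "create_time": "2025-11-02T09:14:00Z",
      "messages": [
        {"role": "user", "content": "Find me a train to Brussels",
         "timestamp": "2025-11-02T09:14:05Z"},
        {"role": "assistant", "content": "Here are three options...",
         "timestamp": "2025-11-02T09:14:09Z"}
      ],
      "attachments": ["file-ref-123"]
    }
  ]
}
"""

data = json.loads(export)
for conv in data["conversations"]:
    # Everything is plain text plus metadata, so re-importing it into a
    # competing service is a matter of mapping fields, not converting formats.
    print(conv["title"], "-", len(conv["messages"]), "messages")
```

And because MCP standardizes how assistants reach external tools, a connector could in principle be described, and ported, as a small, service-agnostic record along with its permission grants (again, the schema here is a hypothetical illustration, not part of the MCP specification):

```python
# Hypothetical portable connector record. MCP standardizes how assistants
# reach external services, so a description like this could in principle
# travel between assistants along with its permission grants.
connector = {
    "server": "https://mcp.example/gmail",  # MCP server endpoint
    "protocol": "mcp",
    "scopes": ["mail.read"],  # the assistant may read emails, but not send them
    "enabled": True,
}
```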

Those considerations are based on the current state of AI assistants, and it's possible that this tech stack will evolve and that companies might embrace proprietary solutions for future features. But as we see dominant companies emerge and as the Commission considers how to approach AI services, compliance with data portability should be high on the list of obligations to closely monitor, as it would ensure a high degree of flexibility for users who want to try competitors.

At this time, most AI chatbots offer some form of data export, but none of them allow data import, despite how technically easy it would be to implement (there are security risks to consider in this process, but likely nothing more dangerous than what these companies already contend with). By reaffirming the importance of "continuous and real-time" data portability, strong enforcement could set the stage for a market where users rely on continuous portability to switch between services while keeping all of them synchronized, with the same level of information. This would allow users to assess products on their real quality rather than on the amount of information they have access to. It would also support undertakings' development efforts by providing them, with users' consent, with data they can use to improve their systems.
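The missing half is straightforward to picture. Assuming a hypothetical import endpoint on the receiving service (nothing like it exists today, which is precisely the gap), re-ingesting an export could look like this:

```python
# A sketch of the missing half: importing ported data into a new service.
# The import endpoint and payload shape are hypothetical; as noted above,
# today's chatbot services expose exports but no equivalent import API.
import requests

IMPORT_API = "https://newservice.example/v1/import"


def import_conversations(export: dict, token: str) -> None:
    """Re-ingest exported conversations; plain-text records map over directly."""
    for conv in export.get("conversations", []):
        resp = requests.post(
            IMPORT_API,
            headers={"Authorization": f"Bearer {token}"},
            json=conv,  # the same structured JSON produced by the export
            timeout=30,
        )
        resp.raise_for_status()
```

Because the records are plain text plus metadata, the work is field mapping rather than format conversion, which is why the absence of import features looks more like a choice than a technical constraint.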

For too long, personal data has been both the fuel that powers digital giants and the chains that keep users locked in. As a new era of AI services dawns, breaking those chains is critical to avoid reproducing the mistakes that led to today's locked-down digital markets, and the DMA could be the tool to do just that.

Authors

Eliot Bendinelli
Eliot Bendinelli is a digital rights advocate and technologist. He currently works with Data Rights to develop strategic initiatives at the intersection of competition and data protection. Eliot was previously Programme Lead at Privacy International.
