Perspective

Building Trust for Data Portability Within the DMA Framework

Tom Fish / Jun 23, 2025

Tom Fish is Head of Europe for the Data Transfer Initiative, a non-profit dedicated to empowering technology users to transfer their data between services. DTI partners include Google, Amazon, Apple, ErnieApp, and Meta.

In Europe, it’s getting easier for users to port their data from one platform to another. The EU’s Digital Markets Act (DMA) has catalyzed a surge in the availability of effective data portability tools, with new functionality now available to support user-led direct transfers of data from all of the largest online platforms to other service providers. However, although the technology is now in place to make data available, ongoing implementation challenges are acting as a handbrake on adoption and the value creation that should follow. These challenges relate principally to the process of establishing trust between transfer parties, a process the relevant DMA provisions do not define.

With that context, this article lays out an effective and proportionate approach to establishing trust in the context of the DMA, and explains how a “Trust Model” and “Trust Registry” can smooth the ongoing implementation of the DMA and of future parallel regimes in other jurisdictions.

Barriers to personal data portability

While data portability has inherent technical complexities in implementation, approaches like the deployment of Application Programming Interfaces (APIs) or the use of a framework for end-to-end transfers like the Data Transfer Project offer known solutions to these complexities. Thus, the main barriers today to widespread data portability are non-technical:

  • Investment: for each transfer party – including the data holder, the user, and the intended data recipient – facilitating data portability through direct transfers requires some form of investment, whether financial, time, or opportunity cost. Each party must be sufficiently motivated to engage willingly in the process at the same time.
  • Trust: in order to implement data portability effectively, at least two organizations need to work together, in communication with a shared user. Trusting each other is key to this coordination, which can be tricky between rivals, let alone strangers.

These barriers are close relatives. The stronger the incentives to solve a problem, the easier the different parties will find it to invest in the necessary resources and to coordinate and agree on the way forward. In an ideal world, we would want to see these challenges being overcome through market forces, and we strongly believe this will happen over time as demand for portability gradually builds. Yet we also recognize that regulation can be an initial catalyst for sector-wide action where more immediate change is needed.

Open Banking is a prime example. Starting in the UK in 2016, the largest incumbent retail banks were required to develop APIs to facilitate user-led data transfers to third-party services, including those of rival banks. Those banks were legally required to coordinate on the solution, while a centralized directory was introduced to enable third-party providers to register and signal their trustworthiness.

The EU’s Digital Markets Act (DMA) is another example of a regulation that has provided stimulus for companies to invest in tools in the absence of visible demand. In response to the DMA, seven ‘Gatekeeper’ companies (Alphabet, Amazon, Apple, Booking, ByteDance, Meta, and Microsoft) built data portability tools that enable their EU users (and, in some cases, users in other regions such as the UK) to transfer their data continuously to a third-party service of their choosing. Although the regulation overcame the challenges related to investment, it did not directly address the challenges surrounding trust and coordination. The Data Transfer Initiative (DTI) is well-placed to plug this particular gap.

The DMA is silent on how to establish trust

The DMA contains a provision that mandates designated gatekeepers to provide data portability tools. Specifically, Article 6(9) states that gatekeepers

…shall provide end users and third parties authorised by an end user, at their request and free of charge, with effective portability of data provided by the end user or generated through the activity of the end user in the context of the use of the relevant core platform service, including by providing, free of charge, tools to facilitate the effective exercise of such data portability, and including by the provision of continuous and real-time access to such data.
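To make that language concrete: “continuous and real-time access” points toward an ongoing feed rather than a one-off download. The minimal Python sketch below shows one way a receiving service might consume such a feed via cursor-based polling. It is purely illustrative, assuming a hypothetical endpoint; every path, parameter, and field name here is our own invention, not any gatekeeper’s actual interface.

```python
# Illustrative sketch only. The endpoint path, parameter names, and
# response fields are assumptions made for this example; they do not
# describe any actual gatekeeper's API.
import time

import requests  # third-party: pip install requests

API_BASE = "https://gatekeeper.example/portability/v1"  # hypothetical
ACCESS_TOKEN = "token-issued-after-user-consent"        # placeholder


def handle_record(record: dict) -> None:
    print(record)  # placeholder: a real recipient would persist the data


def poll_user_data(cursor: str | None = None, interval_seconds: int = 60) -> None:
    """Continuously fetch newly generated user data, resuming from a
    cursor so the transfer is ongoing rather than a one-off snapshot."""
    while True:
        response = requests.get(
            f"{API_BASE}/export",
            headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
            params={"cursor": cursor} if cursor else {},
            timeout=30,
        )
        response.raise_for_status()
        payload = response.json()
        for record in payload.get("records", []):
            handle_record(record)  # hand off to the receiving service
        cursor = payload.get("next_cursor", cursor)
        time.sleep(interval_seconds)  # wait, then poll for fresh activity
```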

Article 6(9) places a legal obligation on each gatekeeper to deliver this high-level outcome, while putting the responsibility for working out the ‘what’ and the ‘how’ firmly in the gatekeepers’ hands. There is a sound logic to regulators taking this kind of outcomes-based approach in digital markets, wisely resisting the temptation to tell the most successful technology companies in the world how to design new systems. However, staying silent on known implementation questions can also create ambiguity and confusion among stakeholders, or result in complexity where central coordination could have been beneficial.

In the case of trust, the DMA does not specify how or when gatekeepers should establish a relationship with potential receiving transfer parties, how they should verify that these parties are trustworthy, or how they should authenticate that the user in question is involved in and aware of the request.

To some, this silence implies that gatekeepers may not verify developers or authenticate users under any circumstances, nor undertake any due diligence whatsoever. To others, it implies that it has been left entirely up to each gatekeeper to decide how best to verify that the user is not being impersonated, or how to onboard developers in advance of potential requests. But to any objective observer, it is clear that both of these outcomes would likely be sub-optimal.

Checks and balances are clearly in the interests of end users; they also serve the entirely reasonable desire of gatekeepers to protect their reputations from avoidable, high-profile privacy scandals.

Yet proportionality is also necessary and expected in this context. After all, these companies face a legal obligation to enable access to user data under the DMA, and the GDPR tells us that the data subject typically has the right to be the main arbiter of who they share their data with and to what end.

The outcomes-based, hands-off approach to initial DMA implementation means the gatekeepers have each independently judged where to draw the line between safety and proportionality. Inevitably, without central coordination, there are now seven slightly different approaches and processes for third parties to engage with, along with a great deal of duplication. While this gets the job done, it introduces unnecessary burdens, particularly on smaller players, who in practice must repeat essentially the same review for each gatekeeper.

User-led data transfers require guardrails

If you reduce the friction for data access, you open up new positive opportunities for user empowerment, competition, and innovation. But in doing so, you must inevitably also reduce the friction for bad actors to steal sensitive information, while increasing the attack surface for cybercriminals. When implementing new data portability tools or schemes, we therefore need to minimize the level of friction for users while maximizing it for potential malicious actors. In doing so, we must acknowledge that we will place a burden on those parties with good intentions, including the data holders and the vast majority of third-party developers. Striking the right balance here is not straightforward.

One of the benefits of data portability is that it is, by definition, user-led. In other words, there is an in-built mechanism that ensures data transfers are initiated by the user’s affirmative action, rather than taking place passively behind the scenes without the individual’s knowledge.

That is how it should work, but without guardrails, there are some risks to users (as assessed in detail in our User Data Portability Threat Model). For example:

  • Bots or hackers could attempt to initiate portability requests without the user’s knowledge through impersonation;
  • Users could misunderstand or be misled about what their data was going to be used for, or be tricked into making the wrong decision; and
  • Users’ data could be exposed to risks or threats from its onward use that they couldn’t reasonably be expected to make informed decisions about.

Even in the EU or the UK, where comprehensive and robust data protection laws are in place, it would be reckless to assume that all organizations implement such laws effectively, and unrealistic to assume that data protection authorities (DPAs) have been enforcing them perfectly, considering the limited scale of penalties.

For example, in 2024, the UK’s ICO issued just three fines for GDPR breaches, all of which were levied against public bodies (i.e., no private companies in the UK were fined for a GDPR breach over the course of the year). This was in the context of the ICO having processed a total of over 36,000 data protection complaints (not limited to GDPR matters). Over the same period, the French DPA was comparatively more interventionist, yet still issued only 20 GDPR fines from the thousands of complaints it received.

Yet we know there are plenty of bad actors out there, with the global cost of cybercrime projected to reach $13.8 trillion by 2028. This includes sophisticated and malicious criminals, of course, but also more subtle bad or careless actors who collect users’ data without being upfront about how they will use it, or who fail to take sufficient measures to keep it safe. Assuming universal GDPR compliance would not match reality.

We can be certain that some bad actors will attempt to utilize data portability as a method for stealing people’s personal data, so it makes sense to consider what form this might take.

This is not a new challenge, nor is it specific to data portability. Verification processes to build trust and guard against risk are very common:

  • Open Banking Directory enrollment: Before any organization can access and operate within the UK’s Open Banking ecosystem, for example to gain access to its APIs, it must enroll with the Open Banking Directory. This involves submitting information about the applicant organization, the permissions being sought, and details of its registration with a relevant national financial services regulator.
  • Know Your Customer (KYC) requirements: Businesses in a range of regulated sectors, including financial services, have a legal obligation to conduct KYC checks on their customers. These checks, which can prevent crimes such as money laundering and fraud, range from collecting simple customer details, such as proof of ID and address, through to requiring evidence of the source of funds and income.
  • Due diligence for new vendor onboarding: Outside of formal KYC checks, due diligence processes are common when a business takes on a new vendor. Where the products or services supplied involve the transfer of data (which is, could be, or has previously been personal data), such processes are often comprehensive and resource-intensive, with vendors responding to lengthy and detailed questionnaires of a technical and legal nature before contracts can be finalized.
  • Access to online platforms: Where online platforms provide an access point for businesses to distribute their products or services to consumers, there are typically some checks and balances in place to protect the platform users from harm. For example, for the distribution of software through app and web stores, there are up-front product reviews to protect users from privacy, security, and other threats. Online marketplaces also undertake some verification processes before retailers can list and sell their products on the platform.

In all of these cases, there is a clear rationale for the checks; however, they also incur costs and delays for the numerous actors in the ecosystem who are operating in good faith. The right balance needs to be struck between those costs and delays and the scale of the risks. In this sense, and certainly in the context of the DMA, checks must be proportionate to the risk and the context.

Proportionality is context-specific

In tech regulation circles, we seem to be hearing a lot about proportionality at the moment. Governments and commentators want regulators to be proportionate in how they use new powers. Regulators want companies they oversee to be proportionate in the way they place checks and balances on others.

Both perspectives seem right and sensible on the face of it, but what does proportionality mean in practice? The word won’t mean the same thing to everyone, nor will it mean the same thing in all contexts.

For a data holder to establish trust with a potential data transfer recipient, the following factors might influence what is considered proportionate:

  • the type and sensitivity of data;
  • the volume of data;
  • the types of permissions the third party is gaining; and
  • the relevant legal obligations on the data holder.

The last one is key in the context of the DMA. The designated gatekeepers are obligated to support data portability, and so the onus is on them to approve applicants. The DMA is not a voluntary scheme, and blocking third parties will clearly need to be the exception. But there will be applicants who intend harm to users, and others who grievously underinvest in data protection, including entities who fall outside the scope of data protection authorities, as well as those who simply would not be caught in time to prevent harm.

Verification and grounds for rejection

In 2024, in consultation with a wide range of stakeholders, DTI developed and published a Trust Model to inform the type of questions that transfer parties might ask each other to establish the kind of trust that is necessary for facilitating data portability.

We intended for the model to be relevant to all contexts where coordination is necessary to facilitate data portability, including across different sectors, data types, and legal jurisdictions. Intentionally, the model was not a specific proposal for how gatekeepers should seek to verify third parties in the context of the DMA. As a result, there are some topics or questions in our model that we would not necessarily expect gatekeepers to include within their verification processes, and this has been borne out in practice: many of the verification processes implemented by the gatekeepers are more streamlined than our model, with some elements omitted.

But in any case, the questions asked and the evidence requested by gatekeepers are a bit of a red herring. In the context of the DMA, proportionality ought to be judged on parameters such as what the gatekeepers do with the responses, how efficiently they run the process, and what decisions they ultimately take.

Let us consider the issue of consent as an example to illustrate this point. As part of our Trust Model, we consider it important that receiving transfer parties (i.e., those third parties seeking approval for access to data portability APIs) can demonstrate that they will initiate data portability requests with clear consent from their users. This is a relatively straightforward thing for a sending transfer party (i.e., a DMA gatekeeper) to request, and for the receiving party to prove through a few screenshots or a video of their data transfer user journey.

If the receiving party is unable to demonstrate a consent mechanism, or if the evidence provided shows that they are seeking consent on a false premise (e.g., the consent flow is misleading about what permission is being sought, or blatantly misleading about what will happen with the data), then it would surely be a proportionate response, even in the context of the DMA, to block such an applicant from gaining access to the API. Turning a blind eye to this kind of observable deception and user harm, on the flawed assumption that the third party will comply with the GDPR, would be irresponsible and entirely contrary to users’ expectations.

On the other hand, the reviewers within the sending party might have views about the language used in the consent flow, the colors chosen for buttons, or some other element of the choice architecture. While we would always encourage companies to work together to share best practices and raise standards together, this is unlikely to be considered a proportionate reason to block an applicant in the context of the DMA.

It is apparent from this example that there are valid reasons for a DMA gatekeeper to ask third-party developers to provide screenshots of their consent flows along with a link to their privacy policy. Doing so is consistent with a proportionate, prima facie verification exercise, even in the context of a legal obligation to provide access to data.
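To illustrate in code, rather than screenshots, what demonstrable consent might look like on the receiving party’s side, here is a minimal Python sketch that presents explicit scopes, requires an affirmative user action, and keeps an auditable record before any transfer is initiated. The scope names and record shape are hypothetical assumptions of ours, not a prescribed format.

```python
# Illustrative sketch only. Scope names and the consent-record shape are
# hypothetical; the point is the pattern: explicit scopes, an affirmative
# user action, and an auditable record before any transfer is initiated.
from dataclasses import dataclass, field
from datetime import datetime, timezone

REQUESTED_SCOPES = {
    "posts.read": "Copy your posts from the source platform",
    "photos.read": "Copy your photos from the source platform",
}


@dataclass
class ConsentRecord:
    user_id: str
    scopes: list[str]
    purpose: str
    granted_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def request_consent(user_id: str, purpose: str) -> ConsentRecord | None:
    """Show the user exactly what will be transferred and why, and proceed
    only on an explicit, affirmative answer (no pre-ticked defaults)."""
    print(f"We would like to transfer the following data. Purpose: {purpose}")
    for scope, description in REQUESTED_SCOPES.items():
        print(f"  - {description} ({scope})")
    answer = input("Type YES to consent, anything else to decline: ")
    if answer.strip() != "YES":
        return None  # no consent given, so no transfer request is made
    # The stored record is the kind of evidence a gatekeeper's review
    # could reasonably ask a developer to demonstrate.
    return ConsentRecord(user_id, list(REQUESTED_SCOPES), purpose)
```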

In many cases, like this example of demonstrating consent, it will be fairly clear where the line is for DMA compliance. In others, the line may need to be drawn by the European Commission, in consultation with the parties involved.

Blame duplication, not due diligence

In principle, when it comes to complying with legal obligations on businesses, each affected organization can only control its own response. Every single organization could design a solution for regulatory compliance that makes perfect sense in the context of their own obligations, but in aggregate, the responses of all those companies combined can be problematic – less than the sum of their parts. This kind of outcome can stem from a lack of alignment or coordination, or sometimes from unintended duplication. None of these things is the fault of the companies involved – this is classic regulatory failure.

A good example of this regulatory failure is the widespread use of cookie banners. Data protection laws, such as the EU’s ePrivacy Directive and more recently the GDPR, require websites to obtain informed consent from their users before tracking their online activity through the use of cookies. On the face of it, this is a reasonable expectation, and an intuitive way of seeking this consent is to put up a prompt, offering the user a choice from the outset. One simple click to confirm the user’s wishes, and the obligation is met in a few seconds.

In isolation, on one website, this seems fine. But in aggregate, the implementation of this legal obligation is a total disaster. Cookie banners are implemented inconsistently, sometimes with highly problematic designs, but more importantly, they are duplicated across most websites and are a source of immense frustration for consumers online. Often, people simply click whatever button makes the prompt disappear as quickly as possible. Here, the problem and the bulk of the frustration stem from the repetition, not the intention, nor each organization’s implementation of the rule.

Although on a much smaller scale, we are now observing a similar pattern emerge through gatekeepers’ implementation of Article 6(9) of the DMA, particularly in relation to verification.

On the face of it, most people can see that having a simple vetting process in place to weed out bad actors before they can access these tools is intuitively sensible. Most developers that we have spoken to are not opposed in principle to the kinds of checks currently in place. In fact, these kinds of reviews are familiar from access to existing APIs, or to platforms such as app and web stores.

Let us zoom in on security vetting as a specific example. Is it reasonable and proportionate for a sending transfer party to ask a receiving transfer party to go through a security assessment with an external provider that costs several hundred pounds? It’s certainly not outrageous, and such a cost should not be prohibitive even for a very small or early-stage business. Without existing evidence or certification at hand to demonstrate effective security practices, such a review can be a useful exercise to check that the company takes its responsibilities around data security seriously.

But in the context of the DMA, would it be reasonable for a small company to have to undergo five, six, or even seven separate security checks – one for each gatekeeper – potentially costing several thousand pounds in total and tying up significant engineering resources for a number of weeks? It is easy to see how such an outcome could cause frustration and undermine demand for accessing the new tools.

We believe that much of the frustration from third-party developers in this scenario would come not from the approach of each individual gatekeeper in isolation — though of course some frustrations with individual cases may naturally occur — but from the fact that there is no single central solution. In aggregate, the impact on the individual developers, many of which may be small and micro businesses, could be burdensome and disproportionate.

We view this as a coordination problem, and it is one we aim to address head-on through the implementation of our Trust Registry, which is now in a live pilot phase with one of our partners. In due course, we believe that gatekeepers (and hopefully other data holders engaging in data transfers) will be able to rely on our registry, at least in significant part, instead of each running their own wholly separate verification processes.
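To show the potential coordination gain in concrete terms, the sketch below imagines how a gatekeeper might consult a shared registry before granting API access, rather than repeating a full local review. The endpoint, identifier, and response fields are invented for illustration; they do not describe DTI’s actual Trust Registry.

```python
# Illustrative sketch only. The registry endpoint and response fields are
# invented for this example; they do not describe DTI's actual registry.
import requests  # third-party: pip install requests

REGISTRY_BASE = "https://trust-registry.example/v1"  # hypothetical


def is_developer_verified(developer_id: str) -> bool:
    """Ask the shared registry whether a developer has already passed a
    common verification, so each gatekeeper need not rerun its own review."""
    response = requests.get(
        f"{REGISTRY_BASE}/developers/{developer_id}", timeout=10
    )
    if response.status_code == 404:
        return False  # unknown applicant: fall back to a full local review
    response.raise_for_status()
    record = response.json()
    # One shared attestation can replace several duplicate checks.
    return record.get("status") == "verified" and not record.get("revoked", False)


if __name__ == "__main__":
    if is_developer_verified("acme-photos-app"):  # hypothetical applicant
        print("Grant access to the data portability API")
    else:
        print("Route the applicant to a full verification process")
```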

Next steps and our goals

The gatekeepers’ data portability tools are now all in place and are in the process of being fully operationalized, while the new methods for establishing trust are undoubtedly still being refined. We are assisting where we can with communication challenges that arise and helping to bring parties together where unintended roadblocks emerge.

In parallel, we are forging ahead with the piloting and implementation of our Trust Registry. We view this as a critical contribution to the portability ecosystem in order to address coordination challenges and remove duplication. This means having developers apply and work their way through our initial verification process, which we are iterating in line with the principles of our Trust Model. The feedback we gather from this process will be valuable data for us as we seek to scale up and interact with more of our partners, taking on a higher volume of applicants.

It is important that we focus now on how our trust work applies to and works with the EU’s DMA obligations. But our long-term goal is to operate and maintain our Trust Registry, built on the principles of our model, in support of a vibrant ecosystem for simple and secure data transfers, including on a global basis. Our vision is for each transfer party to establish trust with DTI and then have access to any data portability tool for users in any country or legal jurisdiction.

To the extent that organizations begin to invest in data portability infrastructure and tools beyond the borders of the EU, whether motivated by market forces or regulation, we will make our Trust Registry available to assist with smooth coordination and establishing trust.
