Podcast

What Can We Learn from the First Digital Services Act Out-of-Court Dispute Settlements?

Ramsha Jahangir / Oct 8, 2025

Audio of this conversation is available via your favorite podcast service.

It’s been three years since Europe’s Digital Services Act (DSA) came into effect, a sweeping set of rules meant to hold online platforms accountable for how they moderate content and protect users. One component of the law allows users to challenge online platform content moderation decisions through independent, certified bodies rather than judicial proceedings. Under Article 21 of the DSA, these “Out-of-Court Dispute Settlement” bodies are intended to play a crucial role in resolving disputes over moderation decisions, whether it's about content takedowns, demonetization, account suspensions, or even decisions to leave flagged content online.

One such out-of-court dispute settlement body is called Appeals Centre Europe. It was established last year as an independent entity with a grant from the Oversight Board Trust, which administers the Oversight Board, the content moderation 'supreme court' created and funded by Meta. Appeals Centre Europe has released a new transparency report, and the numbers are striking: of the 1,500 disputes the Centre has ruled on, over three-quarters of the platforms’ original decisions were overturned, either because they were incorrect, or because the platform didn’t provide the content for review at all.

I spoke to two experts to unpack what the early wave of disputes tells us about how the system is working, and how platforms are applying their own rules:

  • Thomas Hughes is the CEO of Appeals Centre Europe.
  • Paddy Leerssen is a postdoctoral researcher at the University of Amsterdam and part of the DSA Observatory, which monitors the implementation of the DSA.

Below is a lightly edited transcript of the discussion.

Ramsha Jahangir:

Article 21 of the Digital Services Act gives independent dispute bodies the power to resolve user complaints about content moderation, such as removals, demonetization, or account suspensions. A new transparency report from the Appeals Centre Europe shows how inconsistently major platforms are applying their own content rules. Out of 1,500 disputes the Appeals Centre Europe ruled on, over three-quarters of platform decisions were overturned either because they were wrong or the platform didn't provide the content.

Today I'm joined by two experts to unpack what the early wave of disputes tells us about how seriously platforms are applying their own policies as well as what it might signal about the future of user rights.

Thanks so much, Paddy and Thomas, for joining us. Maybe we can start with an introduction to what Article 21 under the DSA is.

Paddy Leerssen:

So the easiest way of understanding Article 21 is as an appeals right. The technical term is out-of-court dispute settlement, and what it allows users to do, if they disagree with a moderation decision of an online platform, is appeal that decision with these out-of-court dispute settlement bodies to potentially have it overturned. Now, these out-of-court dispute settlement bodies are independent groups, so they're not part of the platform, and they're accredited by the national regulators to fulfill that role in an independent manner. It's important to note that these aren't binding decisions, so the platform ultimately isn't necessarily required to overturn the decision, but you can see it as an advisory decision from the ODS body.

Ramsha Jahangir:

Thomas, you're actually putting this into practice at the Appeals Centre, so for our listeners not familiar with Appeals Centre Europe, it would be great if you could explain what it is and where it's based. And also if you could walk us through what this looks like in practice: what happens when a user submits a dispute? What's the process like from submission to decision?

Thomas Hughes:

Ramsha, thank you. Yes, so the Appeals Centre Europe is amongst the first of the out-of-court dispute settlement bodies that Paddy just described to be set up in the EU. We are located in Ireland and certified by the Irish Digital Services Coordinator, the national regulator called Coimisiún na Meán, and we are an EU-wide, multi-language institution. We operate with six procedural languages but can handle content in most languages that are spoken in the EU. And we look at disputes based on policy violations, so we utilize the platforms' policies themselves, but we do so with consideration for international human rights standards. Then, as Paddy mentioned, we review disputes and issue decisions, and those decisions are non-binding. We function across multiple different platforms, so we can handle disputes in relation to Facebook, Instagram, TikTok, YouTube and Pinterest, but we are certified to cover all of the very large online platforms as designated by the European Commission. So we will continue over time to bring on board other platforms through the latter part of this year and early next.

In terms of the way a dispute can be raised, obviously the goal for out-of-court dispute settlement is to give people in the EU a more realistic, achievable, affordable route to challenge a decision that a platform has taken, in relation to either their own content that has been removed or content that they see online on one of the platforms that they think is harmful and should be taken down. Essentially they can do that by coming to our website, appealscenter.eu. In advance of that they should have raised a dispute with the platform. They can then submit a dispute to us based on the fact that the platform has either not agreed with them or has not responded to them in a timely manner.

We will then receive that dispute, and we have an internal process through which we triage it, looking at it from various different perspectives. It would potentially be escalated for in-depth review. Then we will issue a decision back to both the individual user and also the platform. And then obviously the platform is responsible for reviewing that and potentially implementing that decision.

Ramsha Jahangir:

How long does this entire process take?

Thomas Hughes:

Well, the legislation sets out 90 days for an ordinary case. When we started, our average case review time was about 115 days. We're now down to an average of about 19 days. But I think we can get that lower. I think we can get that down to single-digit days. That obviously requires platform compliance, but also greater automation in the transfer of data, and hopefully even greater engagement from users in terms of awareness of the types of information they should be submitting.

Paddy Leerssen:

Just to add also to this general introduction of Article 21, I think it's important to reflect on what a crucial provision this actually is in the overall design of the DSA as well. Because as we know, content moderation disputes tend to be of relatively low value, at least in terms of legal disputes. So when an item like a post gets removed or an account gets suspended, these are often not things that people are able to bring to court. That's simply too expensive. And I think that's how we can understand this ODS provision: it is trying to create an accessible solution or remedy for these kinds of disputes, which happen at a tremendous scale, but where each individual case doesn't really rise to the level of something that you can bring to court. So yeah, that's why I think a lot of people are very interested in how these ODS bodies are going to fulfill that role.

Thomas Hughes:

And Paddy, I very much agree with you that this is really the part of the legislation that connects more directly with individuals and their experience online, and gives them the opportunity to both take control of their digital ecosystems and have more say over the social media environments in which they function and move. But it's also one of the beautiful parts, if I can use that word, of the Digital Services Act: the fact that it creates a bit of an ecosystem of various different processes and actors that are external to platforms, very importantly, Paddy, as you noted, but also in a co-regulatory structure, independent of the statutory regulators and governments in each national jurisdiction within the EU.

And you have other elements like trusted flaggers and vetted researchers and then the systemic risk auditors. But all of this should fit together. All of these are cogs in an engine that need to, I think, connect more effectively with one another. And what I mean by that is that obviously where individual disputes are raised, Paddy, to your point, they may be problematic or hard for individual users, and an individual dispute unto itself, unless it's extremely emblematic, is not necessarily going to change policy or the way platforms function. But at the aggregate level, when you bring these together, they will inform, for instance by way of creating a bit of a heat map, where you might look for systemic risks.

And actually there's a tie back in the legislation between the individual decisions that are made and the way in which systemic risks should be identified, and then regulators will of course be looking to platforms, through the auditors, to inform them how they're going to mitigate those risks. And of course, there are other tiebacks to trusted flaggers and vetted researchers as well. So as and when this starts to function really seamlessly and effectively, I would hope that all of these parts start to lock together.

Paddy Leerssen:

Yeah, heatmap is a great term. I use it in the report as well. And as I was reading the report, I got the sense that a lot of researchers are going to be tremendously jealous of you, Thomas, and the other ODSs, because you really have a front row seat to content moderation as it's happening. You see all these different areas and the different kinds of errors and disputes that are arising from it in really kind of an unparalleled view. And like you say, for researchers, but also for regulators, seeing what comes out of this process is going to be super interesting and also ties into other areas of the DSA, like the systemic risk framework. It's a really strong signal of where content moderation might be working and where it might not be working.

Thomas Hughes:

We see the value of that data to the media, to researchers, to regulators, to platforms themselves even. And it's our aspiration, although this is our first transparency report we've just released, it's our aspiration to be releasing this data as frequently as we can, but also to try and do it in a much more bespoke manner and to understand the needs of researchers and other third parties as to how they might utilize that data so that we can make it available to them. And again, that is a really important part of how these cogs can all start to connect.

Ramsha Jahangir:

You both have already referenced the report, so maybe this is a good segue to dive a little bit into the findings and what the initial data out of this transparency reporting mechanism tells us. So Thomas, what does the early volume of disputes suggest? And you also reported a very high share of platform decisions being overturned, so what does that say about how well platforms are applying their own policies?

Thomas Hughes:

So we received 10,000 disputes in the first half of this year, or slightly longer than half. About a third of them were eligible, so within our scope, although our scope is of course expanding over time, and we've processed about half of those eligible ones to date, and we will continue in the coming weeks and months to process more of those disputes as well. I think what it really tells us is that there is both a strong interest in and a need for out-of-court dispute settlement, for individuals across the EU to have the ability to challenge the decisions the platforms are taking about their own content and content that they see online.

And I think the numbers as we see them at the moment are the tip of the iceberg. But I also don't believe it will just be exponential growth in numbers over time. I think there are a number of markers out there that we could come back to in this conversation that would give us an indication of what the equilibrium might look like in terms of the volume of disputes we could see two or three years from now. But really, the first half of this year for me demonstrates that there is this need and there is a very strong interest from people across the EU. Ramsha, sorry, the second part of your question?

Ramsha Jahangir:

I am focusing on the word scope here. I'm curious to hear what the challenges are with eligibility of cases, and when you say scope, how do you decide that scope? Is that part of the way the DSA is designed, or is it an internal issue?

Thomas Hughes:

Well, it's a combination of both. So the Digital Services Act obviously does provide a description of what the scope of out-of-court dispute settlement should cover. For instance, where a dispute relates to the sharing of information and content, that we would describe as being in scope under the DSA, but where it is a behavioral issue for the user, then that may be in scope or it may fall out of scope. But there's also a secondary element, which is that we are certified by Coimisiún na Meán to have a certain scope. So we are handling policy-violating content. We are increasing that over time and bringing on board new areas. We're still a very young, new institution. We've had to start with what I consider to be quite a broad set of potentially eligible cases, but there are still other areas that we can bring in.

As an example, one of the latest areas we're bringing into our scope is advertising. So there's the scope we are certified for, and then there's what we are currently able to receive and what we might move to in the future. And then when we look at eligibility, we also look at the merits, or the information, I should say, that's provided within a particular dispute. So there is also a category of submissions or disputes from users that simply don't have sufficient information, or that we don't believe are necessarily within our scope based on the information provided. And obviously that means that we don't proceed with up to two-thirds of the disputes that are submitted.

Ramsha Jahangir:

So that also implies that there's a misunderstanding among users about the scope of these rights as well. And quoting from your report, actually, you say the platforms are still keeping dispute settlement bodies as Europe's best kept secret. So generally, is there also not enough effort from platforms to educate users on what the scope looks like?

Thomas Hughes:

Yes, potentially so. But when we say it's Europe's best kept secret, what we are also pointing at is what we call signposting. So signposting is the information that the platforms provide on their services to make users aware of the fact that they have the right to submit a dispute under Article 21. And the legislation requires them to do that. Now, we think there are a few things the platforms should be doing at a minimum. Signposting should appear at the end of the internal appeals process, so that users have clear information about their right to submit a dispute; it should be within the statement of reasons; and there should be a dedicated web page that gives this information to the user. And unfortunately, we don't think that the platforms are currently doing enough to make their users aware of this right.

Paddy Leerssen:

I found this a very interesting part of the report, I have to say, Thomas, because you also break down, for the different services, the degree to which they're actually signposting, like you say, making this option available or accessible to users. So correct me if I'm wrong, but I believe that Meta's services did actually list it in some contexts, but other platforms were even less visible than that. And obviously this seems to me like a hugely consequential issue going forward. You can easily imagine that if, as soon as a platform notifies you of a moderation decision, you can continue on with an appeals process with one click, let's say, that's going to be much more attractive and commonplace than if you have to start doing your own research to find these options. So that seems to me tremendously consequential, and there also seem to be strong legal arguments to require quite a lot of accessibility here.

Not only is there some language in the statement of reasons requirements themselves about explaining to users what their appeal options are, but we also have this wonderful provision in the DSA on dark patterns, which is very broad and flexible. It's about manipulative interface designs. And you already note in your report that sometimes these options are disclosed, but they're in gray text, or in very small letters, very hard to see. These are the kinds of issues that could potentially raise a dark patterns question. And I'll just mention that just last week in Amsterdam we had the first court case about dark patterns under the DSA, brought by Bits of Freedom. That was about a different right, the design of the recommender systems in Meta's newsfeed. But what you see there is the interplay between these different DSA rights, so that the dark patterns provision is used to make other rights within the DSA more accessible. So I'm definitely curious to see how that issue develops, and I hope that platforms will start to make these appeals rights as accessible as they should be.

Thomas Hughes:

Yeah, absolutely. And our assessment is that, although the volume of disputes we are receiving across the different platforms will vary for a number of reasons, not least the type of content that is posted on those platforms, the policies, and the way the policies are informed, we think the biggest single variable is actually the quality of the signposting, particularly when you look comparatively at the number of monthly users across the platforms against the volume of disputes being received. And certainly at the lower end, you've got platforms like YouTube where, again, we think the quality of signposting is not there, and if you look at the report, the number of disputes that we are receiving for that platform is also comparatively low.

Paddy Leerssen:

It reminds me actually of the NetzDG, the German network enforcement law, which entered into force before the DSA, because there we saw a similar issue where users had a right to submit takedown notices, and it seemed like Facebook made that feature far less accessible than other platforms did, and that was actually reflected in the numbers. We could see in the reports that they were receiving many times fewer takedown notices. So there's a very similar dynamic potentially at play here.

Ramsha Jahangir:

Yeah, and actually Facebook and Instagram already dominated the caseload in the report. You've pointed to what makes some platforms particularly prone to disputed content moderation decisions, but there's another angle to it, which is that this is obviously non-binding and platforms are only required to engage in good faith. So what does it say about the whole dispute process that so many decisions are made by default because platforms don't participate?

Thomas Hughes:

The default decisions that we issue, I should clarify, we issue because if we feel that the platform has not engaged sufficiently, we do not want the process to drag out to the full 90 days, and therefore after 30 days we issue a decision by default in favor of the user. But the default decisions themselves fall into a number of categories, and I think the two primary ones are cases where the platform is unable to locate the content and cases where the platform refuses to share the content. And in the latter group, that may be in response to a disagreement over, for instance, the scope and whether the dispute itself is eligible for review by the Appeals Centre.

But the former category, where they aren't able to locate the content, is about tooling and product. It's about their ability to use the information that we've provided, which has been agreed in advance with the platforms and is the information requested from the user in order to submit an eligible dispute. The question is whether they're able to locate the content in all cases using that information. And I think, as the report shows, the platforms still have quite a way to go to improve their identification, location and sharing of the content, and there's also a way to go to clarify areas where there is disagreement about whether a particular dispute is eligible.

Ramsha Jahangir:

There's also data protection, I think, that plays a part here. In response to this report, for instance, a spokesperson for YouTube told a media outlet that the Appeals Centre had not put sufficient privacy safeguards in place for it to share data to resolve content moderation disputes. So what type of safeguards does the Appeals Centre have in place, and what are ODS bodies generally required to have?

Thomas Hughes:

So I would say we have very high standards, very robust industry standards, for the privacy safeguards that we have in place. We are a data controller and we are absolutely GDPR compliant. I have to say, very unfortunately, that I think that response from YouTube is disingenuous and misleading. We have been talking to them about this for just under a year. As you can imagine, we've had very similar conversations with Meta and with Pinterest and with TikTok, and we've been able to sign appropriate data sharing agreements with those platforms in vastly less time than the year that has passed in these discussions with YouTube.

And whilst I don't think it's helpful for this podcast to get into all the intricacies of those discussions, let me say that there have been long periods of silence from YouTube where we have tried to move this forward and have not heard anything, and we also feel that their interpretation of the DSA is very narrow and restrictive, which also applies to data, data location and what is eligible for dispute by a user. So I'm sorry to say, I think that YouTube's response to the transparency report is a little misleading.

Paddy Leerssen:

Of course this wouldn't be the first time that we see large platforms use data protection law as a shield or maybe an excuse not to collaborate with third parties. They've also been censured for that in the US by the Federal Trade Commission. So yeah, it wouldn't be the first time.

Ramsha Jahangir:

Speaking of platform policies and their approach, what types of standards are ODS decisions based on? Because from what I understand, specifically for the Appeals Centre, a lot of the decisions were based on platform policies, like the terms of service themselves. So, Paddy, looking at you here, is relying on terms of service reliable and effective, and what are some of the other standards that ODS mechanisms could apply?

Paddy Leerssen:

Yeah, I think this is really a big, profound question that all the ODS bodies have had to grapple with: what are the standards for review? The law itself isn't exactly clear, and maybe the ODS bodies can also set their own scope as to what kind of review they're performing. But basically you have the choice between applying the policies the platform itself has formulated in its community guidelines or terms of service, or also trying to bring in other sources of norms. That can be national law, if you're taking disputes about whether content is lawful or not. My understanding is that the Appeals Centre has left that out of scope and said, we're not going to be handling those cases, we're going to focus on the application of the platform policies. But even then there are still questions about, for instance, fundamental rights: even in contractual relations, which is what terms of service are, fundamental rights can play a role and can potentially override elements of a contractual agreement.

And the DSA actually makes explicit in Article 14 that the application and enforcement of terms needs to be done with due regard for fundamental rights and other rights and interests. So there is always the question of to what extent you apply the terms by the letter, and to what extent you can ask questions like, "Well, in this specific case, we find that applying them by the letter is disproportionate, or doesn't truly take into consideration free speech or privacy rights," et cetera. I think that's the big million-dollar question behind all these dispute cases. Obviously there are arguments for a narrow reading where you just focus on the terms; the argument against that is that, yeah, you can fail to adequately protect fundamental rights. Maybe you're deferring a little bit too much to what the platform itself wants. But of course, when you go in that broader direction, it becomes arguably more subjective and harder to keep consistent or predictable, and you get perhaps a strange situation where these ODS bodies are interpreting a contract that they're not really a party to.

Now, one final thing I'll say is that even if it were just a strict application of the terms themselves, there is of course a large category of disputes where we're just talking about obvious errors, where it's obvious on its face that the platform hasn't even applied its own policies correctly. An infamous example is Meta incorrectly labeling an image of onions, a photograph of a pair of onions, as sexual content. So there's a large category of cases where the platform has made an obvious error, and simply applying their terms is going to be extremely useful to a large number of people. And the question is, what about the edge cases? How far do we want the ODS bodies to go there?

Thomas Hughes:

Yeah, so we at the Appeals Centre have tried to some degree to bridge some of those challenging questions, but Paddy, I completely agree with you that it's one of the knotty areas, as it were, for Article 21, and we'll have to see how it evolves over time. So we are certified to look at policy-violating content, and that has some benefits: both in terms of looking at those enforcement errors you mentioned, Paddy, which the platforms very frequently make, and in providing a basis per platform that is pan-EU, giving users clarity about what the platform's policies are and then being able to apply those policies to individual decisions. But what we also do is look at those policies with consideration for international human rights and fundamental rights standards and norms, and particularly the application of certain tests around things like freedom of expression and other rights.

And although we do that through an escalated procedure, so obviously that relates to specific individual cases, when we do it we take into consideration the policy and the application of the policy, and that may then inform our normative framework and be replicable across potentially hundreds or thousands of cases, because it becomes an established part of our normative framework through our interpretation of the platform policy itself. But I do agree it's a complex area, and different ODS bodies are approaching this in slightly different ways, and it's going to be really interesting to see over the coming year or two exactly how this levels out and maybe how we can find more consistency in the way that it is applied.

Ramsha Jahangir:

I can't talk about the DSA anymore without pointing to the censorship claims and the pressure from the US, and the overall deregulatory environment. Recently, as you will be aware, YouTube agreed to pay a settlement over Trump's account suspension, and account suspensions were, correct me if I'm wrong, Thomas, one of the biggest areas of dispute in your report in this period as well. So what do you think is going to happen? We're three years into DSA enforcement, one year into enforcement with ODS bodies in place. How do you see the next few years, and what are your reactions to all the weaponization around the DSA?

Thomas Hughes:

I think the first thing I would mention is that one of the interesting pieces of data coming out of our first transparency report is that, in actual fact, we are more often issuing decisions that require the reinstatement of content disputed by the individual user. So the argument that regulation writ large, and out-of-court dispute settlement in particular, is some sort of censorship process is fundamentally flawed. Now, when you look at our data and at the number of default decisions built into it, for instance, that obviously muddies the water a little bit, because we only issue those default decisions in favor of the user. But even when you extract those, you get a picture where there is an even balance between us recommending content stay down or be removed, and us recommending content be restored.

So the idea that regulation is in itself some sort of censorship tool is wrong. It's a flawed approach to thinking about the benefits of what regulation can bring. In terms of the future, what I would really like to see in the coming year or two is, again, how these different parts under the DSA and the different components of the regulation start to lock together, or link together, and help inform a much bigger picture in which, as I mentioned, out-of-court dispute settlement helps identify where systemic risks lie, or where vetted researchers can produce information based on data that comes from out-of-court dispute settlement to think about some of the evolving issues that could then inform normative frameworks, and so on and so forth. So you can really imagine a situation in the not too distant future where, as more entities get up and running, as more processes start to build up a bit of steam, and as more reports and more data become available, this legislation really starts to perform and function as I think its authors foresaw.

Paddy Leerssen:

Well, if I can add to that, I completely agree with Thomas that developments like this, like ODS in Europe, really put the lie to this spurious American accusation that the DSA is a censorship machine. I think thanks to developments like this, we have far better free speech protections than we have ever had in Europe, or for that matter anywhere else. It's really quite unprecedented that we're now able to appeal these platform decisions externally. And as you point out, in many cases, in fact the majority of cases, it's about reinstating content and protecting free speech, rather than taking down content and limiting free speech. So yeah, I think that's a really significant development. It can't be stressed enough.

And again, it's also not governments controlling this, as that rhetoric often implies, but these independent third-party entities. And while there are issues now with the European Commission's enforcement on systemic risks seemingly slowing down, and questions about whether they're bowing to US pressure, I've also spoken to journalists and told them that that's why we need to keep an eye on these other developments that don't depend only on European Commission enforcement. These are the less spectacular areas of the DSA. It's not about big fines or controversial cases, but quietly, slowly, we're getting tens of thousands of Europeans having their rights enforced against platforms in ways that they couldn't before. And that's a slow and, as I said, not very spectacular development, but an extremely significant one.

Ramsha Jahangir:

I think that's a good note to end this conversation. Thank you so much, both of you for your time.

Thomas Hughes:

Ramsha, thank you.

Paddy Leerssen:

Thank you, Ramsha.

