
Content Moderation and Platform Observability in the Digital Services Act

Charis Papaevangelou, Fabio Votta / May 29, 2024

This piece is part of a series that marks the first 100 days since the full implementation of Europe's Digital Services Act. You can read more items in the series here.

Content Moderation and the Question of Transparency

Social media platforms may feel like open spaces for public deliberation, yet they rely fundamentally on sophisticated content moderation systems that heavily shape the nature and conditions of user interactions. These systems are essential for filtering out content that is deemed illegal under public regulations or undesirable under platform terms of service. The primary reason platforms moderate content is to sustain the advertising revenue that drives their business models. The tech companies operating social media and communication platforms like TikTok and YouTube have leaned heavily into artificial intelligence (AI) and automated decision-making to run their industrial-scale content moderation operations. While these technologies promise efficiency, they tend to overshadow the critical human labor involved, much of which is outsourced to developing countries to save on costs. This labor includes everything from data entry and annotation to the more daunting tasks of content review and decision enforcement.

Additionally, these AI technologies often fall short when dealing with languages that are less represented online, what experts call “low-resource languages.” There is also a notable unwillingness among platform firms to properly employ human moderators who are fluent in such languages and/or who can provide essential cultural context. This combination has led to many instances of arbitrary suppression of speech, which disproportionately affects people of color, women, and vulnerable and marginalized communities. It often means that what appears on one’s feed has been “sanitized,” sidelining nonconforming or marginalized voices due to implicit or explicit biases, power dynamics, and the shortcomings of these systems and their operators.

Whether deliberate or not, instances of content moderation that feel unjust or arbitrary highlight a significant flaw in how moderation is handled on major digital platforms, perpetuating the structures of power that oppress such communities and preserve the status quo. For example, a recent study shows how pro-Palestinian content was systematically censored on major social media platforms during the Sheikh Jarrah crisis of 2021.

Empirical research has been instrumental in exposing the harms caused by content moderation and its automation. In recent years, however, platforms that are important to our digital public debate, such as Facebook and Twitter, have cut off researchers’ and journalists’ access to their data (e.g., via APIs or services like CrowdTangle), making it particularly difficult to study the harmful consequences of their content governance. In this context, many governments and civil society organizations have taken steps to foster platform transparency. But transparency is neither a panacea nor a guarantee of accountability: in many cases, complex power dynamics shape what data is made transparent, i.e., visible and understandable, and how.

Many corporations and governments, moreover, prefer a vague definition of transparency, one that ensures they are not held accountable and that their power is not endangered. In this context, transparency is also understood to concern primarily platform data, not data and information about the global value chain of industrial-scale content moderation. To that end, some scholars have been pushing for observability as a more meaningful and apt path to accountability. Observability refers to an active and dynamic approach to understanding and studying the nuances of digital platform operations. It builds on the principle of continuously observing platforms in the public interest and, ideally, allows for interventions in platforms’ governance mechanisms. Here, we try to assess whether the EU’s Digital Services Act (DSA) facilitates observability and, ultimately, accountability.

The Digital Services Act and Platform Observability

The EU’s DSA has several provisions aimed at restoring access to data for research and fostering transparency in how platforms govern their digital spaces. But does it offer a path to observability and, thus, accountability? The DSA’s relevant obligations range from setting up a process for vetting researchers to access platform data to requiring Very Large Online Platforms (VLOPs) to provide a justification for each content moderation action they take (a “Statement of Reasons,” or SoR) and to publish it in a public database. The potential for stakeholders, particularly those invested in the public interest such as critical researchers and investigative journalists, to use data from the SoRs database and from transparency reports is significant. They can use this information to question and challenge platforms, ensuring that content moderation practices are not just made known but are fair and accounted for. This ongoing engagement and scrutiny underscores the importance of observability in holding digital platforms to account and its role as a vital tool for advocacy and change in the digital landscape.

By leveraging the SoRs Database, which compiles the statements submitted by online platforms as mandated by Article 24(5) of the DSA, we recently conducted a study to tackle two questions:

  • How do content moderation practices vary among platforms and EU member states?
  • How does the use of automated tools for moderation differ across these platforms and regions?

An Inquiry into the Statement of Reasons Database

We focus on the moderation practices of eight major online platforms: Facebook, Instagram, YouTube, TikTok, Snapchat, Pinterest, LinkedIn, and X. For context, the database categorizes automated moderation decisions as fully, partially, or not automated, and automated detection as present or absent. We scraped the database for submissions over four months (September 2023 to January 2024), starting from its launch in September 2023, which resulted in a dataset of 439 million entries. We also examined the platforms’ transparency reports for data on Monthly Active Users (MAUs) and the number of human moderators working on the EU’s official languages. TikTok reportedly employs the most such moderators (Table 1).
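For readers who want to work with the database themselves, the minimal sketch below shows how locally downloaded daily CSV dumps of the DSA Transparency Database could be loaded and tallied by platform and degree of automation. The folder path is hypothetical, and the column names (platform_name, automated_decision, automated_detection) reflect the database’s documented schema at the time of writing but should be verified, since the schema may change.

```python
# Minimal sketch: load locally downloaded daily dumps of the DSA
# Transparency Database and tally decisions per platform by the
# degree of automation they report.
# Assumptions: CSV files sit in ./sor_dumps/ (hypothetical path) and
# use the column names below (verify against the current schema).
import glob

import pandas as pd

COLUMNS = ["platform_name", "automated_decision", "automated_detection"]

frames = [
    pd.read_csv(path, usecols=COLUMNS)  # read only what we need to save memory
    for path in glob.glob("sor_dumps/*.csv")
]
sors = pd.concat(frames, ignore_index=True)

# Decisions per platform, split by reported automation level
# (fully / partially / not automated).
summary = (
    sors.groupby(["platform_name", "automated_decision"])
        .size()
        .unstack(fill_value=0)
)
print(summary)
```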

Table 1 – Monthly Active Users (MAUs) and human moderators per platform working on EU’s official languages.

By combining these data sources, we were able to assess the relationship between automation, territorial scope, and language in content moderation, and to explore any regional disparities. It is worth noting that disclosing the language of the content affected by a moderation decision is voluntary under the DSA, and none of the eight VLOPs disclosed this information. To ensure fair comparisons, we normalized the SoRs data per 1,000 MAUs to identify whether moderation is disproportionately applied in certain areas.
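To illustrate the normalization step, here is a minimal sketch; the MAU figures and SoR counts are placeholders rather than the values reported by the platforms or found in our dataset.

```python
# Sketch of the per-1,000-MAU normalization described above.
# All figures are placeholders, not the values used in the study.
mau = {
    "TikTok": 134_000_000,    # placeholder MAU from a transparency report
    "YouTube": 417_000_000,   # placeholder
    "X": 110_000_000,         # placeholder
}
sor_counts = {
    "TikTok": 120_000_000,    # placeholder number of SoRs in the period
    "YouTube": 80_000_000,    # placeholder
    "X": 500_000,             # placeholder
}

# SoRs per 1,000 monthly active users, so platforms of very different
# sizes can be compared on the same scale.
sors_per_1k_mau = {
    platform: 1_000 * sor_counts[platform] / mau[platform]
    for platform in mau
}
print(sors_per_1k_mau)
```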

Our study reveals significant variations in how content moderation is implemented across major online platforms and EU Member States. Most notably, while the majority of VLOPs tend to moderate content at a broad EU or EEA (European Economic Area) level, platforms like X, TikTok, and YouTube provide more detailed, country-specific data. These platforms also rely on textual and video content, which, as other researchers have shown, are the two most moderated types of content under the DSA framework. We therefore conducted a more nuanced analysis (Fig. 1) of how automated systems are employed across regions on these three platforms and found that:

  • X (in black) stands out by – allegedly – relying exclusively on manual moderation, contradicting its own public transparency reports, which state that it uses automated means. This goes to show that X has not taken its responsibilities under the DSA seriously, leading the European Commission to open a formal case against it.
  • TikTok (in purple) also stands out as it relies almost exclusively on automation for content moderation.
  • YouTube (in red) predominantly uses automated moderation and only rarely supplements it with manual checks, making its approach a hybrid that leans heavily on automation.

Figure 1 – Plot shows the relationship between Statements of Reasons (SoRs) per 1,000 MAUs and the use of automated means in detecting content across different territories (EU Member States).
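A minimal sketch of the kind of breakdown behind Figure 1 follows. It assumes a single hypothetical input file, that automated_detection is coded as “Yes”/“No”, and that the territorial_scope field stores a list of country codes as a JSON-style string; all of these should be checked against the database documentation.

```python
# Sketch: share of SoRs reported as automatically detected, per
# platform and territory (the kind of breakdown shown in Figure 1).
# Assumptions: hypothetical input file; documented column names
# platform_name, territorial_scope, automated_detection; and
# territorial_scope stored as a JSON-style list of country codes.
import ast

import pandas as pd

sors = pd.read_csv("sor_dumps/sample.csv")  # hypothetical input file

# One row per (decision, territory) pair.
sors["territorial_scope"] = sors["territorial_scope"].apply(ast.literal_eval)
exploded = sors.explode("territorial_scope")

automation_share = (
    exploded
    .assign(auto=exploded["automated_detection"].eq("Yes"))
    .groupby(["platform_name", "territorial_scope"])["auto"]
    .mean()  # fraction of SoRs flagged as automatically detected
    .rename("share_automated")
    .reset_index()
)
print(automation_share.head())
```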


Implications for Regulation

The discrepancies in moderation practices, especially the variance between automated and manual detection times, underscore a critical challenge for regulatory frameworks like the DSA. While automated detection can significantly speed up response times, such systems remain limited in dealing with complex, context-specific issues that require human judgment. For instance, YouTube’s median reaction time, even for automatically detected content, was in many cases over 100 days, and this delay increased dramatically when manual review was involved, particularly for decisions with a specific territorial scope.

Figure 2 – Plot shows the relationship between platforms’ (weighted) median reaction time and the use of automated means in detecting content (left-hand panel: TikTok; right-hand panel: YouTube).
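For clarity, the sketch below shows one plausible way to compute such a reaction time, taken here as the gap between the reported content_date and application_date fields; this is an illustration under assumed field names, not necessarily the exact operationalization behind Figure 2.

```python
# Sketch: median "reaction time" per platform, split by whether the
# content was detected automatically. Reaction time is taken here as
# the gap between content_date and application_date; field names and
# value coding are assumptions to verify against the database schema.
import pandas as pd

sors = pd.read_csv(
    "sor_dumps/sample.csv",  # hypothetical input file
    parse_dates=["content_date", "application_date"],
)

sors["reaction_days"] = (
    sors["application_date"] - sors["content_date"]
).dt.days

median_reaction = (
    sors.groupby(["platform_name", "automated_detection"])["reaction_days"]
        .median()
)
print(median_reaction)
```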

We also observed notable discrepancies in content moderation across different Member States, particularly in the Netherlands, France, and Spain, where decisions often had an exclusive territorial scope (i.e., the decisions only applied in those countries). For instance, in France, the median time for TikTok’s moderation actions exceeded 30 days, more than 10 times longer than in other countries such as Germany, highlighting a significant delay. Notably, the majority of these decisions required human intervention. A similar pattern emerged with YouTube, where moderation processes affecting some countries, namely Ireland, Finland, Austria, and Germany, were almost 10 times slower than actions applied across the entire EEA. By contrast, Sweden was an outlier in our sample, with decisions applying to its territory taking less than 10 days. Generally, though, we confirmed a positive correlation between the use of automation and the speed of content moderation processes, which often has negative implications for free speech.
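The sketch below illustrates how such a country-level comparison could be made: keep only decisions whose territorial scope is a single Member State and compare median reaction times per platform and country. Again, the input file and the assumption that territorial_scope holds a list of country codes are illustrative.

```python
# Sketch: median reaction time for decisions with an exclusive
# territorial scope (exactly one Member State), per platform and
# country. Field names and encodings are assumptions to verify.
import ast

import pandas as pd

sors = pd.read_csv(
    "sor_dumps/sample.csv",  # hypothetical input file
    parse_dates=["content_date", "application_date"],
)
sors["territorial_scope"] = sors["territorial_scope"].apply(ast.literal_eval)

# Exclusive scope: the decision lists exactly one country code.
national = sors[sors["territorial_scope"].str.len() == 1].copy()
national["country"] = national["territorial_scope"].str[0]
national["reaction_days"] = (
    national["application_date"] - national["content_date"]
).dt.days

print(
    national.groupby(["platform_name", "country"])["reaction_days"]
            .median()
            .sort_values(ascending=False)
            .head(10)
)
```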

These discrepancies in the enforcement of content moderation obligations are crucial to consider in light of policymakers’ calls to expedite content moderation. Simply put, if platforms are compelled to accelerate moderation, they are more likely to rely on automation, which will lead to more erroneous decisions that, as we noted earlier, disproportionately affect nonconforming and marginalized communities. This seems even more likely for content that requires cultural context or is written in languages on which AI systems are not as well trained as on more popular ones. It is important to note, though, that this is equally a problem of the platform firms themselves, in particular of their scale and business models, which prioritize efficiency, data exploitation, and profit-making at the expense of fundamental rights and social values.

The DSA thus seems to ensure legal harmonization of content governance across the bloc, a long-standing issue in the EU, but it does so by relying heavily on platform corporations. It should not come as a surprise, then, that there is not yet a vision in which transparency or observability translates into action beyond the European Commission’s legal pursuits. In other words, transparency and observability exist in the DSA as depoliticized concepts aimed at facilitating VLOPs’ compliance with novel legal obligations, without building the capacity to develop alternatives to the existing means of connection and data production in these spaces. In that sense, the DSA reinforces the structural power of platform firms, which rests on their infrastructures, both material and digital, as well as on their institutional entanglements with key stakeholders such as regulatory bodies and civil society organizations.

In conclusion, we find that the DSA, through a constellation of provisions like the obligation to produce SoRs and report them to a dedicated database, is a step toward fostering a dynamic way of studying platform governance. It offers us a peek into the inner workings of industrial-scale content moderation. Yet even this has severe limitations, as there are various inconsistencies in the reporting by platforms, which only partially adhere to the spirit of the regulation, as other researchers have also argued. Moreover, technical shortcomings of the database, such as the lack of access to its API and insufficient data (e.g., platforms are not required to report the language of moderated content), further limit our study. Regardless, it is our duty as researchers to engage with “the tools of the master” in the hopes of someday dismantling the house.

This material is produced as part of AlgoSoc, a collaborative 10-year research program on public values in the algorithmic society, which is funded by the Dutch Ministry of Education, Culture and Science (OCW) as part of its Gravitation programme (project number 024.005.017). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of OCW or those of the AlgoSoc consortium as a whole.

Authors

Charis Papaevangelou
Charis Papaevangelou (Ph.D.) is a postdoctoral researcher at the University of Amsterdam (Institute for Information Law), where he is studying the implications of the novel EU platform regulatory framework for the relationship between news media organisations and platforms. He is part of the “Public Va...
Fabio Votta
Fabio Votta (Ph.D.) is a postdoctoral researcher at the University of Amsterdam at the Amsterdam School for Communication Research (ASCoR). His research primarily focuses on the impact of microtargeted political advertisements on citizens and society. He is part of the “Public Values in the Algorith...
