
The European Commission's Approach to DSA Systemic Risk is Concerning for Freedom of Expression

Joan Barata, Jordi Calvet-Bademunt / Oct 30, 2023

Joan Barata is Senior Fellow at The Future of Free Speech and a Fellow of the Program on Platform Regulation at the Stanford Cyber Policy Center. Jordi Calvet-Bademunt is Research Fellow at The Future of Free Speech, and a Visiting Scholar at Vanderbilt University.

1. Introduction

The recently adopted Digital Services Act (DSA) constitutes the new horizontal legal framework in the EU regarding the provision of online services. According to the European Commission, the DSA aims “to create a safer digital space in which the fundamental rights of all users of digital services are protected.” The DSA includes a complex and diverse set of rules for different players; significantly, it includes obligations for ‘very large online platforms’ (VLOPs) and ‘very large online search engines’ (VLOSEs) to assess and mitigate the so-called systemic risks their services are deemed to generate and/or contribute to. These risks include, among others, “the dissemination of illegal content” and “actual or foreseeable negative effects for the exercise of fundamental rights [or] on civic discourse and electoral processes, and public security.”

How platforms are expected to assess and mitigate these risks in practice remains uncertain, particularly when it comes to incorporating fundamental rights impact assessments into the equation. In fact, the incorporation of such rules into the DSA triggered caveats and warnings among civil society and experts, since they place in the hands of a political body such as the European Commission the power to enact new speech rules affecting legal content, as well as to review, shape, and constrain the different ways platforms deal with problematic content.

In August 2023, platforms submitted their first reports in this area, and feedback from the Commission is expected soon. Defining the scope and methodology for proper risk assessment and mitigation goes to the core of whether the DSA can be effective and deliver on its promise to become an instrument that improves online platform accountability while preserving fundamental rights. Moreover, it should be noted that the “regulator” in this area, the European Commission, is not an independent body like those implementing the DSA at the national level. Therefore, there is a significant risk that these sensitive assessments, which touch on core elements of societal life, may be affected by political factors.

It is in this context that the recent publication of a report commissioned by DG CNECT, the Commission’s directorate responsible for the enforcement and supervision of VLOPs’ and VLOSEs’ obligations under the DSA, titled ‘Digital Services Act: Application of the Risk Management Framework to Russian disinformation campaigns,’ raises concerns for freedom of expression. This report, the first concerning systemic risks, may set a precedent.

For completeness, we note that DG CNECT is, according to the European Commission’s website, the publication’s author; the Commission’s is also the only logo appearing on the cover of the report. However, based on a New York Times article, we understand that the nonprofit Reset “prepared” the report. As far as we can see, this is not specified anywhere in the report or the Commission’s press release, although one of the authors tweeted about it after it was released.

2. The Commission’s report on Russian disinformation campaigns: methodology, choices, and risks

The EU clearly needs to protect its territory from the harms deriving from disinformation campaigns and other negative consequences for democracy, human rights, and the free exchange of ideas and opinions, particularly in the context of Russia’s invasion of Ukraine and previous Kremlin-orchestrated attempts to interfere with election processes. The proximity of the 2024 European Parliament election and other democratic processes in key Member States is also a significant factor for policymakers in Brussels. However, when defending democratic values and particularly the integrity of elections, the EU must be careful not to sacrifice one of its crucial tenets: freedom of expression.

In our view, the Commission’s report can lead to excessive and unintended restrictions on the dissemination of online content and, hence, excessively restrict freedom of expression. There are several reasons that lead us to this conclusion.

Firstly, the choice of Russian disinformation for this first report is worthy of consideration. Such a sensitive political topic barely allows for a nuanced assessment of all priorities at stake, as other scholars have also pointed out. How many would dare to reject the adoption of measures protecting national security from malicious foreign actors even if this represents partially giving up, in the short term, on principles of free expression?

Secondly, the report appears to relegate freedom of expression to a secondary place. Even though references to this fundamental right can be found, this is undoubtedly a harms-focused report. This is evidenced by the use of a (slightly) adapted Rabat Plan of Action (RPA) test as the method for assessing systemic risks. The RPA test is used in human rights law to deal with situations at the intersection of free speech (Article 19 of the International Covenant on Civil and Political Rights, ICCPR) and incitement to hatred (Article 20(2) of the ICCPR). Given the particularly abject and dangerous nature of incitement to hatred, the RPA test results in interpreting freedom of expression more narrowly than in other contexts, even with the nuances introduced by the Commission report.

Although the report seems to understand the RPA as an instrument “to assess the proportionality of legal restrictions on speech,” the main objective of the Plan is instead “properly balancing freedom of expression and the prohibition of incitement to hatred.” In this sense, the report combines the qualitative criteria provided by the RPA with a quantitative approach based particularly on the visibility, reach, and potential exposure of certain pieces of content. The use of information warfare and propaganda tactics to support the invasion of Ukraine and to manipulate the broader political context of the EU constitutes, as already mentioned, a harmful use of communication tools, particularly in the digital world. However, in order not to create unjustified threats to the right to freedom of expression and the right to access information, it is important that propaganda, disinformation, and other forms of problematic yet not necessarily illegal content can be properly connected to the actual emergence of offline harms. Studies demonstrate that there is still much to explore and understand regarding how online propaganda operates and to what extent it creates significant harm in societies. Indeed, the RPA includes, as a fundamental element of the test, the likelihood of actual harm beyond the platform. The report under consideration fails to provide clear evidence of a connection between its essentially quantitative approach and the actual causation of unacceptable societal harms.

Thirdly, the report uses a very broad scope to determine what is “problematic content.” From a subjective perspective, “anything published” by a party aligned with the Kremlin, for instance, is regarded as “potentially causing a systemic risk.” Alignment with the Kremlin does not require a formal or informal affiliation with the Russian State; just sharing “Kremlin’s narratives through originally produced content” suffices. While tackling “the dissemination of knowingly or recklessly false statements by official or State actors” can be justified in certain cases and circumstances, placing everyone aligned with such discourse under suspicion is excessive. The scope of the content itself is also overly broad: the ‘Z’ war propaganda symbol – banned in the Czech Republic, Estonia, and Germany – and “claims that Europe would run out of gas in the winter or that Putin had won the war and NATO was abandoning Ukraine” have been considered problematic in the report. Even “the everyday falsehoods that comprise the standard fare of many of the pro-Kremlin channels […] may be evaluated for risk in their own right, but they should also be seen as a cumulative phenomenon.”

Propaganda and disinformation should not be restricted unless the requirements of Articles 19(3) and 20(2) of the ICCPR are met, which, in our view, they are not. In fact, the UN Special Rapporteur on the promotion and protection of freedom of opinion and expression (SROE) goes as far as saying that, even in the context of armed conflicts, “States should respect and protect the right of individuals to receive foreign news and propaganda,” unless “restricted in line with the international rights standards.” In addition, both the Commission and platforms should particularly consider that the human rights criteria repeatedly stressed by the DSA encompass relevant standards, including Article 11 of the Charter of Fundamental Rights and those established by the ECHR on freedom of expression, which, since very early decisions, has been declared to apply “not only to ‘information’ or ‘ideas’ that are favourably received or regarded as inoffensive or as a matter of indifference, but also to those that offend, shock or disturb the State or any sector of the population. Such are the demands of that pluralism, tolerance and broadmindedness without which there is no ‘democratic society.’”

Lastly, the report’s conclusions seem to suggest that, in order to deal with systemic risks appropriately, platforms should moderate the mentioned content more actively. More specifically, the report states that while not all Kremlin-aligned accounts (i.e., those not affiliated with the Russian State) should necessarily have been banned or demoted, it is necessary to consider that “mitigation requirements are not defined by the actor but rather by the severity of risk,” and claims that “all of the sub-categories of accounts in [their] sample – Kremlin-backed and Kremlin-aligned – met the standard of severe and systemic risk.” In this context, references to international human rights standards are notably absent, particularly the three-part test, under which any restriction on freedom of expression must be legal, legitimate, and necessary, according to the interpretation provided by international bodies and regional courts. As the report recognizes, “it is important to evaluate the ‘false positive’ rate of mitigation measures as well as the effectiveness of its regimes of rapid redress for instances of error in either content moderation or algorithmic recommendation policies.” However, in practice, the only way to deal with systemic risks at this scale may be the use of automated filtering mechanisms. Without prejudice to the DSA’s transparency obligations, mistakes by these automated mechanisms can seriously and disproportionately harm users’ fundamental right to free expression.

In addition to all the above, it is important to note the references to Telegram contained in the report, even though Telegram is neither a designated VLOP under the DSA nor a signatory of the EU Code of Practice on Disinformation. The report acknowledges the risk of “cross-platform manipulation” and, in particular, the “funneling content audiences [from more regulated platforms, such as Facebook] to the least regulated environments,” such as Telegram. Notably, the report points out that the datasets used “contain a higher percentage of Facebook and Telegram users relative to other platforms,” and Telegram posts are repeatedly used as illustrations of undesirable content. In fact, Telegram appears in six of the 13 figures showing undesirable content, more than any of the five VLOPs. This shows the complexity and evolving nature of current information dissemination processes and the fact that the DSA’s conceptual framework may not be able to fully tackle the potential harms deriving from malicious actors. These limitations should not be read as a call for a broader and more restrictive approach, but as an acknowledgment that this kind of mechanism alone cannot address such complex questions.

3. Conclusions

The tension between shielding the EU from malicious interference and the effective protection of fundamental rights is apparent in recent legal, political, and regulatory initiatives related to the information ecosystem.

In March 2022, the EU suspended the broadcasting activities of Sputnik and RT/Russia Today. The European Commission went as far as to point out that social media companies “must prevent users from broadcasting […] any content of RT and Sputnik. That applies both to accounts which appear as belonging to individuals who are likely to be used by RT/Sputnik and to any other individuals.” This suspension affected not only the rights of speakers, but also the right of viewers and listeners to access information. The Commission’s request was broad enough to potentially include content posted by users attempting to counter Russian propaganda. This could hinder online collaborative efforts that expose and debunk Russian propaganda and disinformation in real time. Digital investigative reporters like Bellingcat and the Russian Media Monitor rely on open sources – including Russian media – to expose what is happening on the ground and debunk Russian-sponsored lies.

In July 2023, Thierry Breton, an EU Commissioner and digital “enforcer,” cited the DSA in suggesting that social media platforms could face shutdowns if they did not crack down on problematic content during the riots then taking place in France. These claims followed remarks by French President Emmanuel Macron, who had suggested closing down some social media platforms to clamp down on the unrest. Fortunately, Commissioner Breton walked back his words following pressure from civil society. Another sensitive move from the Commissioner has been the recent letters sent to several platforms urging them to take down violent content and misinformation around the Israel-Hamas conflict. Such directions clearly circumvent the procedures and guardrails established under the DSA regarding harm mitigation and instead appear as an arbitrary use of public authority to restrain what may be lawful speech.

At the Member State level, there have also been concerning signs of this trend since the early adoption of the Network Enforcement Act (NetzDG) in Germany, which imposes an obligation on social media companies to remove illegal content, including insult, incitement, and religious defamation, within very short deadlines (24 hours in the case of clearly illegal content, and seven days otherwise), under threat of fines of up to 50 million euros. Two reports released by the global think tank Justitia demonstrate how this precedent has spilled over into more than twenty States, including authoritarian regimes such as Russia and Venezuela. Therefore, the interplay between the new DSA and existing legislation at the national level may also trigger important challenges during the implementation phase.

As a geopolitical actor, the EU needs to show a strong commitment to democracy and the rule of law and thus defend such fundamental principles vis-à-vis threats and deliberate political interference coming from third countries.

However, as a general rule, the dissemination of disinformation, propaganda, and similar types of undesirable speech must be tackled via non-repressive mechanisms such as media ethics, communication policies, the reinforcement of public service media, and the promotion of media pluralism. In March 2018, a High-Level Expert Group on Fake News and Online Disinformation set up by the Commission issued a report warning that “[a]ny form of censorship either public or private should clearly be avoided.” Instead, it suggested an approach resting on five pillars: enhancing the transparency of online news, including, among others, the funding of sources; promoting media and information literacy to counter disinformation and help users navigate the digital media environment; developing tools to empower users and journalists to tackle disinformation; safeguarding the diversity and sustainability of the European news media ecosystem; and promoting continued research on the impact of disinformation.

The protection and promotion of fundamental rights such as freedom of expression, both within and outside the European Union, must remain a basic tenet of the European project. Otherwise, the EU risks losing its moral high ground and part of its strength; not to mention the potentially negative impact of such policies beyond its borders in connection with the so-called Brussels effect.

- - -

The authors thank Jacob Mchangama for his valuable comments.

