Addressing Cumulative Online Harms Through Human Rights Law
Meri Baghdasaryan / Dec 9, 2025
Meri Baghdasaryan is an international lawyer currently serving as a senior case and policy officer at the Oversight Board. This post was written in her personal capacity.

Georgian President Salome Zourabichvili meets with the president of the European Court of Human Rights, Robert Spano, at the Orbeliani Palace in July 2022. (Administration of the President of Georgia)
The United Nations has repeatedly recognized that online speech should enjoy the same protections as offline speech. However, speech does not travel, influence or impact people online in the same way it does offline, and it often carries a different risk profile.
Former UN Special Rapporteur on freedom of expression David Kaye observed that “the scale and complexity of [social media companies] addressing hateful expression presents long-term challenges and may lead companies to restrict such expression even if it is not clearly linked to adverse outcomes.”
Real-world examples show that content’s reach, virality or aggregation can lead to cumulative impacts that disproportionately harm marginalized groups.
Because some of the most consequential harms in digital environments develop cumulatively, through scale, repetition and algorithmic amplification, platform policies and human rights frameworks must move beyond incident-based analysis toward systemic approaches that recognize how risk grows over time and through platform design.
Unpacking cumulative harms
When discussing cumulative harm or cumulatively harmful speech, many experts point to the distinctive features of online speech, such as its volume, speed and accessibility, as characteristics of “autonomous harms.”
Online harms arguably do not materialize as one-off incidents. Rather, online content and behavior leave traces “that are often permanent, easily searchable, replicable, and scalable through platforms’ own design,” such as through algorithmic amplification over time.
While some individual posts cause harm on their own, it is often the accumulation or amplification of posts, each tolerable in isolation, that leads to significant human rights impacts because of platform design and the nature of online speech. Such harms can manifest along a spectrum, from microaggressions (subtle exclusionary narratives against certain protected groups) and ongoing harassment to offline violence.
These dynamics become most visible in high-stakes environments, such as conflict and political violence. For instance, in Myanmar, Meta’s algorithmic recommendations reportedly amplified military propaganda following the 2021 military coup and, years earlier, amplified dehumanizing content that contributed to incitement to widespread offline violence during the genocidal campaign against the Rohingya Muslim minority.
Amnesty International’s report stated that “the mass dissemination of messages that advocated hatred inciting violence and discrimination against the Rohingya, as well as other dehumanizing and discriminatory anti-Rohingya content, poured fuel on the fire of long-standing discrimination and substantially increased the risk of an outbreak of mass violence.” Meta has similarly faced criticism for contributing to serious human rights abuses, including incitement to violence and genocide, against Ethiopia’s Tigrayan community during the civil war that began in 2020. Although these examples represent the most acute forms of harm, similar amplification and clustering dynamics arise in a broad range of contexts beyond conflict situations.
Certainly, the most severe forms of harmful speech — such as incitement to violence, intentional targeted harassment or promotion of suicide or self-harm — require immediate intervention. But understanding the cumulative effects of certain types of content helps to explain why not all harms manifest immediately, and why some emerge gradually through repetition, clustering or amplification.
Platform responses to cumulative harm
Between 2018 and 2021, social media platforms’ moderation approaches began to move beyond the binary keep-up-or-take-down model. These shifts, while motivated by various factors, also align with concerns raised about cumulative harm.
Under Meta’s 2018 “Remove, Reduce, Inform” framework, violative content is subject to removal; content that does not violate policies but may still be misleading or harmful is demoted; and users are provided with additional context. YouTube’s similar 2019 policy removes violative content, raises “authoritative voices when people are looking for breaking news and information,” rewards “trusted, eligible creators and artists,” and reduces “the spread of content that brushes right up against the policy line.” In 2021, TikTok announced that it would diversify recommendations, “testing ways to avoid recommending a series of similar content – such as around extreme dieting or fitness, sadness, or breakups – to protect against viewing too much of a content category that may be fine as a single video but problematic if viewed in clusters.”
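To make the clustering intervention concrete, the sketch below shows, in Python, one way a feed-assembly step could cap how often videos from a sensitive topic cluster appear in a single feed. This is a minimal illustration under stated assumptions, not any platform’s actual system: the topic labels, the cap, and the names Video and diversify are all hypothetical, and the upstream topic classifier and relevance scores are assumed to exist.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical labels and cap; real systems would rely on learned
# classifiers and tuned thresholds, not a hard-coded set.
SENSITIVE_TOPICS = {"extreme_dieting", "sadness", "breakups"}
MAX_PER_SENSITIVE_TOPIC = 1  # fine as a single video, not as a cluster


@dataclass
class Video:
    video_id: str
    topic: str        # output of an assumed upstream topic classifier
    relevance: float  # score from an assumed upstream ranker


def diversify(candidates: list[Video], feed_size: int) -> list[Video]:
    """Greedily assemble a feed, skipping videos that would push a
    sensitive topic past its per-feed cap."""
    shown = Counter()
    feed = []
    for video in sorted(candidates, key=lambda v: v.relevance, reverse=True):
        if video.topic in SENSITIVE_TOPICS and shown[video.topic] >= MAX_PER_SENSITIVE_TOPIC:
            continue  # exclude from this feed rather than remove the content
        shown[video.topic] += 1
        feed.append(video)
        if len(feed) == feed_size:
            break
    return feed
```

The design choice worth noticing is the unit of intervention: each video is acceptable on its own, and only the sequence, the cluster, is constrained, which is precisely the cumulative-harm logic described above.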
These strategies signal growing awareness that risk lies not only in individual posts, but also in how content is ranked, recommended and repeated. Following the 2024 US presidential election, both Meta and YouTube updated their content policies and enforcement practices, with a particular focus on “allowing more speech.” Human rights organizations criticized these changes, and it is not yet fully understood how they affect efforts to address cumulative harms.
Insights from regional human rights law
Human rights bodies usually assess violations based on the specific facts of individual cases. This is a structural and procedural constraint on developing approaches to cumulative harms, which emerge gradually and often without a single actor directly responsible. The Inter-American and African human rights systems are only beginning to grapple with this phenomenon, while the European Court of Human Rights (ECtHR) has implicitly addressed cumulatively harmful speech in several contexts. (An upcoming paper will discuss the court’s approach in more detail.)
The Economic Community of West African States (ECOWAS) Court of Justice held in ASUTIC and Ndiaga Gueye v. Senegal that the country’s 2023 internet and social media shutdowns violated fundamental rights, including the rights to freedom of expression, access to information, assembly and work. The ECOWAS Court noted that Senegal failed to provide specific details about the alleged “hateful and subversive” messages, including their authors, intended audiences, or their scale and reach, when justifying the shutdowns. While indirect, and not explicitly about cumulative harms, the Court’s reasoning shows an awareness of scale, reach and impact, elements relevant to cumulative harm analysis.
Though neither the Inter-American Commission on Human Rights nor the Inter-American Court of Human Rights has directly discussed the accumulation of harm in online speech, glimmers of this discussion appear in cases concerning, for example, gender-based violence and climate action.
The ECtHR has pointed to both the promises and perils of online environments. For instance, in Delfi AS v. Estonia, the ECtHR stated that “in the light of [the Internet’s] accessibility and its capacity to store and communicate vast amounts of information,” the risk of harm from online content and communications is “certainly higher than that posed by the press.” It emphasized that “unlawful speech, including hate speech and speech inciting violence, can be disseminated as never before, worldwide, in a matter of seconds, and sometimes remain persistently available online.”
The ECtHR also highlighted cumulative impact in right-to-be-forgotten cases. For example, in Biancardi v. Italy, the ECtHR agreed with the domestic courts that the restriction on the applicant’s right to freedom of expression was justified: as editor-in-chief of an online newspaper, he had failed to de-index, upon request, an article about a fight, stabbing and subsequent arrest involving V.X. The ECtHR noted that the applicant had breached V.X.’s right to respect for his reputation by virtue of the “continued presence on the Internet of the impugned article and by [the applicant’s] failure to de-index it,” especially as V.X. asked to de-index the article, not to permanently remove it from the Internet. Though persistence differs from aggregation, these cases reflect the ECtHR’s acknowledgment of how digital environments intensify harm over time.
Discussions about the impacts of cumulatively harmful speech also arise outside the online speech context, particularly where the ECtHR has discussed speech negatively stereotyping marginalized groups (Budinova and Chaprazov v. Bulgaria) or emphasized risks to social cohesion (Féret v. Belgium).
Why cumulative harms matter for today’s tech policy debates
The emerging regional human rights frameworks do not exist in isolation; they co-exist with new regulatory instruments. The systemic risk assessments under the European Union’s Digital Services Act may offer a blueprint for addressing cumulative harms. As components of the same ecosystem, developments across different pillars may influence each other.
At the same time, many longstanding challenges persist in the new AI-powered era. AI models continue to struggle with detecting hate speech, and red-teaming exercises show that subtle pathways to “helpful compliance” can facilitate harm.
A key question going forward is how to design systems that protect free expression while recognizing that harms often unfold cumulatively rather than instantaneously. New technologies will introduce both familiar and novel risks, such as new forms of repetition or automated amplification, as well as opportunities for rights-enhancing interventions. Designing frameworks that take seriously how harms accumulate should therefore remain top of mind for platforms, regulators and human rights bodies.