A Third-World Critique of the Human Rights-Based Approach to Content Moderation

Yohannes Eneyew Ayalew / Jun 23, 2024

This essay is part of a symposium on the promise and perils of human rights for governing digital platforms. Read more from the series here; new posts will appear between June 18 and June 30, 2024.


The conversation around content moderation in Africa became prominent following the Arab Spring revolutions, which began in 2010 in North African countries such as Egypt, Libya, and Tunisia. Since then, social media platforms have faced criticism for their inadequate content moderation efforts in many cases across Africa. These platforms have been particularly ineffective in protecting users from external interference during election periods in countries like Kenya and Nigeria, and have also struggled to effectively regulate hate speech in Ethiopia and Sudan, which has contributed to inciting violent ethnic conflicts and civil strife. Given the absence of effective content moderation by the platforms, discussions regarding the regulation of information disorder and hate speech mostly center on state-centric strategies.

This post examines the limitations of prevailing content moderation systems, which are built on international human rights law (IHRL). These systems have well-documented linguistic and contextual blind spots, especially in Africa and other places where social and cultural systems are predominantly communal. This post advocates for a radical shift to an African approach to content moderation, calling for an alternative reading of platform governance from an African perspective that emphasizes the application of collective rights, duties, and a communal ethos in content moderation practices.

Locating content moderation debates under African human rights law

Although the discussion on content moderation is proceeding at a snail’s pace on the continent, owing to structural challenges, the African human rights system has recently joined the global debate by enacting specific normative frameworks in the region.

The most notable instrument is the 2019 Declaration on Freedom of Expression and Access to Information in Africa by the African Commission on Human and Peoples’ Rights, which includes normative principles on the role of Internet intermediaries in moderating illegal content. In principle, under Principle 39(4), States may not require Internet intermediaries to remove content or demand that platforms provide remedies under African human rights law unless certain conditions are met. However, African states have positive obligations to establish legal frameworks that govern the actions of platforms, such as making decisions on the removal of content with potential human rights implications. Flowing from this, platforms are expected to integrate human rights-based approaches into their processes, as provided under Principle 39(3).

A Third World critique of the human rights-based approach to content moderation

International human rights law (IHRL) remains a dominant normative framework for governing content moderation on social media platforms. While the human rights-based approach has much to commend it, both mainstream and critical scholars are now questioning its legitimacy and applicability. Drawing on the critical scholarship of Third World approaches to international law (TWAIL), this post advances three major criticisms of how the human rights-based approach to content moderation often ignores and even renders invisible users in Africa and the Global South more generally.

First, a Western-centric human rights-based approach to content moderation, resting on an individualistic conception of human rights, may overlook users in Africa, where social structures are predominantly oriented towards communal values. Anghie powerfully articulated that the ideals of human rights, which are meant to protect the individual, were created during the colonial encounter. Mutua likewise argues that individualist and narrow formulations of human rights are insufficient to lift post-colonial African states out of the various challenges they have encountered. By placing individuals at the center of attention as the principal rights-holders, IHRL ultimately gives little or no attention to communal and collective rights.

The African Charter on Human and Peoples’ Rights (‘African Charter’) incorporates a bundle of communal rights, such as cultural rights (Article 17(2)), promotion and protection of moral and traditional values (Article 17(3)), peoples’ right to equality (Article 19), peoples’ right to existence (Article 20), self-determination (Article 20), freedom from domination (Article 20), the right to freely dispose of wealth and natural resources (Article 21), and the right to development (Article 22). Moreover, the exercise and enjoyment of human rights, including freedom of expression under the African Charter, must be with due regard to communal duties, as provided under Articles 27-29. Thus, the drafting history of the African Charter and the overall conception of human rights in Africa confirm that the African human rights system is inclined towards communal values.

Second, platforms’ content moderation overlooks subaltern epistemic locations, jurisprudence and laws in the Global South. For example, Meta’s Corporate Human Rights Policy is anchored in IHRL. It sets out:

We are committed to respecting human rights as set out in the United Nations Guiding Principles on Business and Human Rights (UNGPs). This commitment encompasses internationally recognized human rights as defined by the International Bill of Human Rights—which consists of the Universal Declaration of Human Rights; the International Covenant on Civil and Political Rights; and the International Covenant on Economic, Social and Cultural Rights—as well as the International Labour Organization Declaration on Fundamental Principles and Rights at Work.

Unfortunately, no instrument from Africa, including the African Charter, is mentioned. Although other structural problems are also often at play, this absence means that platforms such as Meta not only symbolically omit but also epistemically neglect African law, thereby contributing to the marginalization of subaltern laws in platform governance.

While a human rights-based approach imposes an ongoing duty of due diligence, derived from the UN Guiding Principles on Business and Human Rights (UNGPs), to prevent or mitigate harms, platforms are not living up to expectations when it comes to content moderation. This brings us to the third criticism of a human rights-based approach to content moderation: the existence of language and context blind spots. Except for a few languages, platforms offer little service and support across most African languages because they have invested inadequate resources in improving their moderation efforts. Platforms’ community standards are not even available in most local African languages. One reason platforms struggle to fully understand the nuances of content posted in languages other than English is this perceived lack of investment. For example, Meta has faced criticism for not investing sufficiently in content moderators for the less affluent markets in Africa. While Facebook claims to have hired more than 15,000 human moderators working on content moderation worldwide, it neither specifies a country-by-country breakdown of moderators nor clarifies what specific roles these moderators perform in the Global South, particularly in African countries. As a result of this and other factors, platforms stand accused of fueling algorithmic polarization and offline violence in Africa.

Towards an African approach to content moderation

In this post, I advocate for an alternative reading of platform governance, which I call the African approach to content moderation. On this approach, the practice of content moderation should embrace a communal conception of human rights, in which collective rights are protected alongside individual rights, as articulated in African human rights law. This means that the praxis of content moderation should take account of peoples’ rights such as linguistic rights, communal duties, and social norms such as Ubuntu that prevail on the continent. Applied to content moderation, respecting linguistic and cultural rights requires platforms to redirect resources toward providing content curation services in local languages and establishing content policy and governance teams that are linguistically and culturally diverse. While the exact contours of this approach remain broad, its usefulness is undeniable, especially in the Global South, where a content removal decision affects not just an individual but also the community.

Thus, the African approach offers valuable insights by incorporating a communal conception of human rights into content moderation practices, which involves a delicate balance between freedom of expression and other rights or interests. For example, content moderators need to weigh the interests of individuals seeking to exercise their free speech, which may be perceived as overriding, against the potential impact of that speech or content on communities.

Likewise, communal duties, as defined under Articles 27-29 of the African Charter, are important in assessing content removal decisions. Duties could be factored into any content removal assessment that involves balancing competing rights or interests under the African Charter, such as freedom of expression and a community’s right to be protected from discrimination (e.g., against hate speech), since duties count in determining the appropriate level of interference with these rights.

Where do we go from here?

The global conversation on content moderation is largely centered on an individualistic conception of human rights within the Western context. However, such a conception often fails in societies whose social and cultural systems are communal. Drawing on TWAIL scholarship, this post calls for rethinking the existing system of content moderation, which is grounded in a narrow, individualistic formulation of human rights, and advocates for an alternative reading of platform governance from an African perspective, one that emphasizes the application of communal duties and collective rights in the practice of content moderation.


Yohannes Eneyew Ayalew
Yohannes Eneyew Ayalew (PhD) is currently a Sessional Academic at the Faculty of Law, Monash University, Australia. His doctoral research explored the question of balancing privacy and freedom of expression in the digital environment under the African human rights system. His academic and research i...