How Lawsuits In Kenya Seek to Hold Meta Accountable for the Harm It Causes
Paul M. Barrett / Apr 14, 2025
Paul M. Barrett is the deputy director and senior research scholar at the Center for Business and Human Rights at New York University’s Stern School of Business.

Nairobi—Some content moderators for Facebook gather to consult with their lawyer, Mercy Mutemi (unseen), outside the labor court in Milimani, where they filed a complaint in Kenya against Meta, on April 12, 2023. (Photo by TONY KARUMBA/AFP via Getty Images)
As the world’s largest social media company, Meta has resisted a variety of attempts to hold it accountable for the harmful side effects of its business activities — effects on users, on the people who filter content on its platforms, and on societies around the globe.
On April 14, lawyers from the Federal Trade Commission are scheduled to deliver opening arguments in a landmark antitrust lawsuit alleging that Meta illegally amassed monopoly power by executing a “buy or bury” strategy to acquire upstart rivals that threatened its dominance.
Separately, the latest “Facebook whistleblower,” a former global policy director at the company, testified on April 9 before a Senate committee that top executives at the social media titan years ago had contemplated undermining US national security to build a censored version of Facebook for the Chinese market. The former policy director, Sarah Wynn-Williams, who worked on a team that handled issues related to China, has since written a best-selling book which Meta has taken aggressive legal steps to prevent her from promoting.
On yet another front, a recent ruling by Kenya’s top court could force Meta to take responsibility for the political and ethnic violence Facebook allegedly has exacerbated in neighboring Ethiopia. Meta continues to deny legal or moral liability in the innovative Kenyan case. But together with two other lawsuits pending against Meta in Nairobi, the Kenyan litigation provides an unusual opportunity to consider the obligations that powerful technology companies have to the populations that make their mighty profits possible.
The Kenyan litigation is unfolding at a time when Meta is backing away from prior commitments to reduce hateful, bigoted, and false conspiratorial content on its platforms. Back home in the United States, the company responded to Donald Trump’s return to power by announcing in January that it would rescind certain policies barring hate speech based on gender, sexual orientation, and immigration status; dial back automated down-ranking of content likely to be misinformation; and eliminate its US third-party fact-checking program.
In a sense, the Nairobi cases represent a call from another continent to re-evaluate whether the social media industry ought to be allowed to operate with impunity despite mounting evidence of the damage it does around the world.
Allegations about violence in Ethiopia
Kenya’s High Court ruled on April 3 that the country’s judiciary has jurisdiction under the Kenyan constitution to hear claims that Facebook’s automated content system — commonly referred to as its “algorithm” — promoted hateful material that intensified ethnic violence during the civil war fought from 2020 to 2022 in Ethiopia’s northern Tigrayan region.
One plaintiff, Abrham Meareg, alleges that his father, Meareg Amare, was killed in 2021 following threatening posts on Facebook. Another, Fisseha Tekle, a researcher with Amnesty International, says that he faced online hatred for human rights work in Ethiopia. They are asking the courts in Kenya, where Meta operated a content moderation hub during the period in question, to order the company to create a restitution fund for victims of hate and violence and to change its algorithm so that it ceases to promote hateful expression.
The case has received support from major human rights organizations, including Amnesty International, Global Witness, Article 19, the Kenya Human Rights Commission, and Kenya’s National Cohesion and Integration Commission. In a written statement, Mandi Mudarikwa, Amnesty International’s head of strategic litigation, called the ruling “a positive step towards holding big tech companies accountable for contributing to human rights abuses. It paves the way for justice and serves notice to big tech platforms that the era of impunity is over.”
An expansive constitution
Kenya offers a potentially receptive venue for such reform-oriented litigation. Like that of a number of other formerly colonized African nations, its post-independence constitution explicitly prioritizes the protection of fundamental human rights and freedoms in a way that, for example, the US Constitution does not. But the High Court’s jurisdictional ruling is far from the end of the legal story. Kenya’s chief justice will now send the case to a panel of judges who will address the merits of the claims against Meta.
The company’s lawyers have indicated that Meta will continue to fight the allegations. Meta has previously argued that it “invested in safety and security measures” to address hate and inflammatory language, along with “aggressive steps to stop the spread of misinformation” in Ethiopia.
But a 2022 analysis by the London-based Bureau of Investigative Journalism and the Observer found that at that time, Facebook was still letting users post content inciting violence through hate and misinformation. These findings constitute a troubling echo of the conclusion in 2018 by United Nations investigators that violent incitement on Facebook had played a “determining role” in the ethnic cleansing of the Muslim Rohingya minority in Myanmar.
Kenya has become a key legal battlefield in the campaign to hold Meta accountable because of the company’s convoluted corporate structure. Based in Menlo Park, Calif., Meta has long outsourced most of its human content moderation to other locations — including the Philippines, Ireland, and Kenya — where this crucial function is carried out by relatively modestly paid employees of third-party providers of business services. As I have written elsewhere, this structure not only saves Meta an enormous amount of money; it also encourages a mindset in Silicon Valley that the dirty work of filtering out violent, pornographic, and hateful content is someone else’s problem — not the responsibility of lavishly compensated executives at headquarters.
In the Kenyan courts, Meta has argued for years that it cannot be held liable for claims related to the moderation of content on Facebook, Instagram, and other platforms because it did not directly employ the workers who were doing the moderation.
Meta seeks the status of a corporate phantom
Mercy Mutemi is a civil rights attorney in Nairobi representing a group of former content moderators in a class action against Meta. Her clients worked for an outsourcer called Sama between 2019 and 2023, but they contend that Meta bears responsibility for the toll that extended exposure to brutal and offensive material, without adequate supervision or counseling, took on their mental health. Meta’s position is that Sama’s content moderation operation in Nairobi was adequate and that Meta itself was essentially a corporate phantom — providing digital services to millions of Africans but, for legal purposes, remaining invisible and untouchable.
“This left behind hundreds of trauma-impacted people and a trail of human rights violations,” Mutemi wrote in an op-ed published on April 1, days before the ruling in the separate Ethiopia-incitement case. “Meta argues that it never employed the Facebook content moderators and bore no responsibility to them. This litigation is ongoing, and the moderators now rely on the courts to unravel the complexities of their employment dynamics.”
(Disclosure: I consented to Mutemi submitting to the Kenyan court a report I wrote in 2020 about the mechanics of content moderation. I am not otherwise involved in the litigation. In the report, I argued that Meta and other social media companies should end outsourcing of content moderation and bring the function in-house to improve the treatment and supervision of workers and the effectiveness of content filtering.)
The third lawsuit pending against Meta in Kenya alleges that the company unlawfully thwarted efforts to unionize content moderators who sought better mental healthcare. The unsuccessful labor organizing was followed by layoffs and the relocation of Facebook content moderation elsewhere.
Closer to home, and with the prospect of more extensive US media coverage, Meta has been willing to settle with outsourced content moderators unhappy about their working conditions. In July 2021, a California state judge approved a settlement worth $85 million between the company and a class of more than 10,000 US-based content reviewers who had accused Meta of failing to protect them from psychological injuries. The deal included the creation of a $52 million fund for ongoing mental health treatment and other payments to class members in four states. The company also agreed to provide class members, who were employees of third-party vendors, with safer work environments, including mental health counseling.
Meta did not admit to any wrongdoing as part of the US settlement.