Policies vs. Enforcement: What’s Up with Meta’s Platforming of Violent Extremist Hate Account “Libs of TikTok”?

Alejandra Caraballo / Mar 21, 2024

Far-right activist Chaya Raichik, left; Meta founder and CEO Mark Zuckerberg, right.

Over the past two years, Chaya Raichik has built out her extremist Libs of TikTok (LoTT) brand of mass harassment and violent incitement by exploiting the unique capabilities of X, the platform formerly known as Twitter, with the explicit support of its sympathetic owner, Elon Musk. However, she has continued to build her presence with accounts on other social media platforms — notably Meta’s Instagram and Facebook.

Recently, in its “Post in Polish Targeting Trans People” ruling, Meta’s quasi-independent Oversight Board slammed the company’s refusal to enforce its own basic hate, bullying and harassment policies — observing that: “The fundamental issue in this case is not with the policies, but their enforcement. Meta’s repeated failure to take the correct enforcement action, despite multiple signals about the post’s harmful content, leads the Board to conclude the company is not living up to the ideals it has articulated on LGBTQIA+ safety.”

LoTT is devoted to “content that’s meant to degrade or shame” and to posts that are simply forms of “bullying and harassing a private individual, targeting them on the basis of sexual orientation or gender identity” (the quoted text here and below is taken from Meta’s Bullying and Harassment and Hate Speech policies). Of course, the engagement generated by the account creates enormous profits, both for account owner Raichik and for Meta.

It seems difficult to imagine that Meta genuinely believes these posts are NOT “expressing contempt and disgust” for the people, organizations, and institutions being targeted. As has been extensively documented, the account’s targeting and incitement is creating a variety of offline harms, in addition to the very real harm that results simply from public bullying and harassment, which Meta also claims it prohibits (“We do not tolerate this kind of behavior because it prevents people from feeling safe and respected on Facebook, Instagram, and Threads.”).

Meta somehow continues to interpret LoTT’s content as not violating its hate, bullying, and harassment policies, even though the company is well aware — given its many previous suspensions of the account — that Chaya Raichik vehemently and continually expresses extreme hatred of LGBTQ people, and especially trans and nonbinary people, on her Meta accounts, across her other social media accounts, and in the media.

The broader issue with Meta’s enforcement is its continual head-in-the-sand approach to these types of accounts and its willful ignorance of the broader context in which this account operates. Former Twitter Trust and Safety head Yoel Roth has highlighted this exact problem, underscoring the legalism of content moderation policies that simply look within the four corners of a post for compliance with guidelines while ignoring the broader impact. Roth observed that:

“[T]he account’s conduct had already plainly emerged as dangerous, with significant offline consequences; surely no one can argue that a bomb threat disrupting the operations of a major children’s hospital is ‘just’ harmless online trolling. But a strict, literal application of the company’s rules to the account’s posts pulled in the opposite direction. Looking ‘within the four walls of the tweet,’ as one executive liked to put it, what could we point to that violated the letter of the law? Even as we agreed that the posts contravened the spirit of the company’s policies, Raichik’s carefully constructed posts stopped short of clearly breaking the Twitter rules.”

This effectively allows bad actors to technically conform their posts to community guidelines while violating their spirit through networked incitement, delivering a message that the account’s audience perceives as an invitation to degrade and shame on the basis of protected characteristics. It also allows exceptionally large accounts with millions of followers to point those followers toward a target they know will receive threats and harassment while retaining plausible deniability. The tragic irony is that the accounts with the biggest potential to inflict harm are the least likely to face any accountability.

Meta’s continuing refusal to recognize the account’s behavior as “mass harassment” does not change the fact that mass harassment is exactly what the account is engaged in, and everyone (including Chaya Raichik and every single decision-maker at Meta) knows it. News outlets and researchers have extensively documented the account’s impact, including dozens of bomb and death threats against children’s hospitals, schools, public officials, teachers, and anyone else it targets. Surely, 41 threats of violence linked directly to the account’s posts is a striking enough correlation to warrant responsibility on the part of social media platforms. The sticking point remains the nature of the “direction” of the mass harassment, since Raichik doesn’t explicitly direct her followers to harass targets but does so implicitly through her curated following. However, she has acknowledged the account’s link to the bomb threats by mocking them and making her personal X (formerly Twitter) profile banner a cartoon bomb with the Libs of TikTok logo in it.

The instances of targeting individual people (private individuals who happen to be educators, activists, healthcare providers, LGBTQ and trans allies, etc.) are effectively examples of “directed mass harassment” and networked incitement. Meta’s policies state that it will: "Remove directed mass harassment, when... Targeting, via any surface, ‘individuals at heightened risk of offline harm’, defined as: ... Member of a designated and recognizable at-risk group." [At-risk groups are those with protected characteristics, including gender identity and sexual orientation.] The targets of LoTT’s posts clearly meet this definition, but Meta stubbornly refuses to acknowledge it.

As long as such content remains on the platform, it harms its targets every day. Any victim of targeted harassment can attest that an enormous part of the offline harm is simply the psychological trauma and distress of being terrified of what real-world physical harm may be forthcoming, whether such harm ever actually materializes or not. When such terror campaigns happen, targets also have to implement security measures and mitigations: manually deleting violent or violative comments from LoTT followers on their posts, setting social media accounts to private, monitoring for attacks across other platforms, and taking additional steps for physical safety. All of this, in fact, results in an extraordinary suppression of their free expression, leaving them unable to engage online for fear of further attacks.

As Global Project Against Hate and Extremism co-founder Wendy Via explains in a January 2023 Salon article (“Libs of TikTok owner Chaya Raichik ramps up her anti-LGBTQ crusade”): "When you have a person with a very large audience abusing a community, that person is putting that community at risk, either for online harassment or for intimidation in real life and violence in real life.”

Meta’s responses to advocacy efforts on behalf of affected users have been galling, to say the least. Efforts to get Meta to act on LoTT’s posts targeting trans people with repeated and intentional misgendering and deadnaming are effectively futile. Prominent trans public figures are told that they are exempted from the targeted misgendering policies and thus cannot stop repeated efforts to degrade and humiliate them. For private individuals who are targeted, Meta requires that the target report the posts themselves, which they may not know to do, and requires them to have a Meta account, which they may not have in the first place. Even when the target does report a post, it is never acted on, and attempts to appeal to the Oversight Board are stymied by months-long waits for review or the inability to appeal after a review. Meta thus relies on a Kafkaesque moderation system designed to avoid taking responsibility or action. Public figures with the means to push back against the harassment are exempted from the policies, while private individuals who lack those means are left to fend for themselves.

LoTT’s ongoing free rein across Instagram, Facebook, and Threads is harmful to all platform users and advertisers, not to mention society as a whole, as we witness the real-world offline harms (bomb threats, death threats, vandalism, Pride flag burnings, hate crime assaults, hate crime murders, etc.) generated by the account’s stochastic harassment and its prolific “dehumanization” of LGBTQ people (not merely “I don’t like these people,” but full-on dangerous false assertions such as “these people are coming for your children”).

The current state of Meta’s interpretations of its policies continues to result in extraordinary harm to LGBTQ people, especially transgender people — and to everyone.

Authors

Alejandra Caraballo
Alejandra Caraballo is a Clinical Instructor at Harvard Law School's Cyberlaw Clinic, where her work focuses on the intersection of gender and technology, particularly telemedicine access to abortion and networked harassment. Prior to joining the clinic, Alejandra was a staff attorney at the Tran...