The Word Censorship Has An Actual Meaning: A Defense of Content Moderation

Dylan Moses / Apr 16, 2024

Last month, the US Supreme Court heard oral arguments in Murthy v. Missouri. Plaintiffs in that case – Republican State Attorneys General in Missouri and Louisiana, along with individual plaintiffs – allege the Biden Administration coerced social media companies to remove disfavored speech about COVID-19 under threat of retaliation if they did not comply. Whether that actually happened, or whether the platforms were simply performing their routine content moderation practices free from government interference, is something the Court will likely decide this term. Key to that analysis, however, is disentangling censorship from content moderation.

If you ask free speech enthusiast Elon Musk what he thinks of content moderation, he’ll likely say “[m]oderation is a propaganda word for censorship.” But if you ask a trust & safety professional what the difference is between content moderation and censorship, you’ll likely get some version of, “censorship is really something only the government can do. That’s not what we do.”

In fact, at a recent talk at the Berkman Klein Center, Dave Willner, a former trust & safety executive at companies including Facebook and OpenAI who is now a non-resident fellow in the Stanford Program on Governance of Emerging Technologies, said as much when he responded to a question about censorship and content moderation in the age of generative AI. “Mark Zuckerberg does not own any prisons and has never put someone in them, and I think that is an important distinction,” he quipped.

Willner is right. As a former content moderator, I shudder when I hear the “C” word, and I’ve given the same retort. During my time with the platforms, my work was focused on mitigating the effects of hate speech, misinformation, and violent extremism on a global scale – from deplatforming conspiracy theorists like Alex Jones to managing the aftershocks of livestreamed terrorist attacks. When I think of censorship, though, I think of the government silencing someone for their political views. My immediate reaction is to think of China's Great Firewall and state surveillance, the internet censorship that happens in Iran, or the censorship of broadcast TV in Russia.

In the US, our protection against this form of censorship – the type where a government entity can arbitrarily bring both the force of law and violence against its citizens for expressing political speech – is the First Amendment. The Supreme Court’s jurisprudence on speech can (non-exhaustively) be summed up as follows: “[A]ny person [online] can become a town crier with a voice that resonates farther than it could from any soapbox.” The Internet is the “modern public square . . . and [f]oreclosing access to social media altogether thus prevents users from engaging in the legitimate exercise of First Amendment rights.” “[The internet’s potential would be denied] if we assume the Government is best positioned to make [speech] choices for us.” This isn’t to say that the First Amendment shields us from every form of government censorship, or that government censorship doesn’t exist at all in the US. Rather, it’s to point out that there is a set of laws and doctrinal principles that effectively stand as a first line of defense against government interference with the people’s right to speak freely.

The First Amendment, however, is a right citizens have against government censorship – not against the editorial decisions of private actors. And to really engage with the heart of the question, one must reckon with the fact that censorship is not just a legal issue, but a moral concept. It denotes a “complex strategical situation” between disfavored speakers and a majority group that enforces an “objective” set of values, where the majority’s dominance can rest on any of the following: group size, moral or cultural authority and orthodoxy, social status, or control of resources. Along those dimensions, the majority dominates the discourse, censoring or sanctioning speakers (through social or violent means) for their speech. In short, one group has the power to decide how another group can express themselves. The latter group, quite literally, doesn’t have a say.

No doubt the attendee who asked Willner about his views on content moderation had examples of this form of non-governmental censorship in mind – there are plenty to go around, both on and offline. Throughout the COVID-19 pandemic, platform companies were pilloried over accusations of censorship as they removed COVID-19 and vaccine hesitancy content of dubious veracity. More recently, over-enforcement on Arabic language content surrounding the Israel-Hamas conflict has been the subject of controversy. The offline world is similarly full of these examples, ranging from groups attempting to ban books in various states, to even the Harvard Law Review being accused of censoring the voice of a Palestinian scholar.

I’m sympathetic to the notion that these instances are examples of censorship. Something feels off when, instead of governments, certain private entities, with their gatekeeping authority, silence the speech of disfavored groups. And in particular, content moderation at most major platforms usually evokes notions of censorship because the speech people care most about – certain forms of political speech – is usually moderated heavily. But not all speech is equal. Calling for death, disease, or harm against protected groups; fomenting a violent insurrection; or promoting falsehoods about the efficacy of the COVID vaccine has little, if any, social utility, and certainly causes harm both online and off. Conflating content moderation with censorship, in this context, is problematic for several reasons:

1. The goals and consequences of censorship and content moderation are different.

Government censorship aims to limit speech that dissents from the government-approved political orthodoxy. Content moderation on social media platforms, on the other hand, is meant to offer principled, operable, and explicable ways to promote free expression and minimize user harm. While content moderation is admittedly inconsistent (sometimes for political reasons, but more often because humans are just bad at it), these values stand in contrast to the previous examples, where the censorship at play is clearly meant to enact political uniformity and control. Both approaches recognize that you cannot have unvarnished freedom of expression; but prohibition in one context seeks to limit dissent, while in the other it is meant to maximize inclusivity.

Conflating government censorship with content moderation suggests a desire to be free from both. But we know that doesn’t work. Allowing certain speech purely on the basis of the speaker’s positionality in the majority is a recipe for disaster. This arbitrary standard is what allows people who say they’re being silenced by the platforms to go out and ban books or dox people. It’s also what allows others to elevate certain marginalized voices while protesting voices they disagree with. Whether it’s about the freedom to say what’s on your mind or to speak truth to power, that right should end when it puts someone in harm’s way or the speech is otherwise illegal. There needs to be a standard for allowable speech in public discourse on platforms that undeniably give a user wider influence than they would have on their own.

The consequences are different too. Under government censorship, a speaker may face implicit threats of physical violence, the loss of liberty or money, or harassment by the government or its proxies, all based on a vague, all-encompassing notion of what might offend the government. Content moderation can certainly have negative results, like being banned from a site if you consistently break the platform’s speech rules or otherwise promote objectionable content like terrorism or CSAM. But otherwise, when speech is removed, there’s often a strike system in place to warn you when you’ve violated the rules, or there are methods to limit the reach of a user's content when that content is of dubious value. (Notably, these rules may not apply uniformly to all users, especially if the user is a celebrity or political figure.)

2. Accusations of censorship delegitimize the value of integrity work.

The job of a trust & safety professional is generally difficult. The most challenging aspects of the role include responding to public relations fires (which are seemingly endless), having to visually digest some of the most heinous content users upload, and working diligently to ensure that the tools, policies, and procedures in place to manage millions of content decisions meet a threshold of fairness and competence that is justifiable to both internal teams and the outside world. The process by which these tools, policies, and procedures are developed is rigorous, and it is continually updated to respond to the changing dynamics of online speech. The professionals who work in this field are often selfless, and the work is thankless. Yet, these employees continue working on these challenging issues because they genuinely care about the interests of users and want to ensure that policy and enforcement decisions are made with those people in mind.

We want people with the skills and empathy these practitioners have making important speech decisions online, but misguided accusations of censorship delegitimize the profession. The purpose of social networks is to "[g]ive people the power to build community and bring the world closer together.” Effectuating that lofty goal is the whole point of integrity work: it is meant to safeguard a platform company’s communities from users who harm other users, whether intentionally or otherwise, to ensure an environment people feel safe engaging in. The work involves assessing harmful content against a set of proscribed behaviors like bullying, harassment, misinformation, and hate speech – the standards for which are developed by internal and external subject-matter experts across the political spectrum – and the analysis takes into consideration the context and discernible intent of the user who posted that content.

There are times when integrity professionals want to remove content because it feels wrong to continue hosting it, yet the policy doesn’t allow for removal. On the flip side, there are times when integrity workers feel that content should stay up even though it violates a particular policy, but they abide by the decision to remove it because fidelity to the policy is key to their legitimacy. Expressing, or even suggesting, that this work is done by people with a "political agenda" or with the implied purpose of silencing disfavored speakers is almost never based in fact.

3. Users have options beyond just one platform.

Finally, conflating content moderation with censorship presupposes that once a user is “silenced” on one platform, there is nowhere else for them to freely express themselves online. If that were truly the case, then people banned on Facebook would categorically be banned from YouTube, TikTok, Reddit, and so on. That is what happens when a government censors a speaker: they are effectively “deplatformed” from both public and private opportunities to express their dissenting views.

But that is simply not the case with social media platforms. If a user is permanently banned from a platform – which typically happens for consistently violating the platform’s prohibitions on particular types of antisocial behavior (e.g., calling for violence against certain groups, promoting terrorism) – they are free to go to other platforms and express themselves in a similar manner. Different platforms have different thresholds for what speech they tolerate; for example, it would surprise no one that Facebook’s moderation policies differ significantly from Truth Social’s. In fact, a user banned on the former might readily find an accepting audience on the latter. The issue is that many users believe they have a right to a social media account and, accordingly, the right to say whatever they want to whomever they want. But as Stanford Internet Observatory research manager (and Tech Policy Press board member) Renée DiResta's now-famous refrain reminds us, freedom of speech does not imply freedom of reach. You don’t have the right to a social media account, and if someone is consistently being banned from a social media platform, they should probably consider how their actions align with that site’s values.

Of course, there are reasons to be concerned about large social media companies, with an almost state-like presence, governing the speech of billions. Examples abound of platforms’ callous moderation efforts (and, at times, their selective moderation of political content). These events understandably make users ask, “why did you do this to me and not to someone else?” and undermine the integrity of moderation efforts.

But the field of trust & safety is in the process of professionalizing its practices. The industry now has third-party professional organizations, like the Integrity Institute, that focus on holding the platforms accountable for the digital environments they create; opportunities for practitioners to share best practices about content moderation that balance freedom of expression and user safety; and a growing desire to be more transparent about how these decisions are made. As the industry develops, users should take solace in the fact that people who care about doing the right thing for their online communities are at the front lines of these thorny issues working to build trust and inclusivity into the user experience. And if they’re concerned about moderation – they should join.

Conclusion

There is a difference between content moderation and censorship. Content moderation seeks to ensure that as many people as possible can participate while limiting the risk of harm to the individuals participating; censorship attempts to limit political participation. The former seeks plurality and inclusivity; the latter seeks uniformity and control. Of course, this means that moderators are making value judgments. But unrestrained First Amendment values do not protect users from the harms that arise from, or are exacerbated by, the internet – whether it is a live-streamed shooting in Christchurch, a genocide in Myanmar, or a riot at the US Capitol on January 6th.

While we should be skeptical of state-like entities picking and choosing which speech is more valuable, we also need to recognize that collapsing what the platforms call content moderation into a broader notion of censorship risks draining the word of its meaning; delegitimizing the important work trust & safety practitioners do on a daily basis; and making it seem as if users have no agency in the matter. Instead, we should cabin content moderation to its own category – allowing us to critique it on its own terms, independent of the stigma that “censorship” carries.

Authors

Dylan Moses
Dylan Moses is a Graduate Student Fellow with the Berkman Klein Center for Internet & Society at Harvard University and a Founding Fellow with the Integrity Institute. Dylan previously held several roles in trust & safety at Facebook and YouTube focused on mitigating the effects of hate speech, terr...
