
What Universities Might Learn from Social Media Companies About Content Moderation

Daniel Kreiss, Matt Perault / May 20, 2024

Ever since last fall, when university presidents appearing at a contentious congressional hearing hedged on whether calls for the “genocide of Jews” would violate campus policies, universities have faced intense pressure about their policies on campus speech. The latest round of controversy surrounding the arrests of protestors at campuses across the country, including at our school, the University of North Carolina at Chapel Hill, has made it clear that many universities still lack a workable model for moderating speech on campus.

While universities have struggled, one group of professionals has built deep experience handling speech questions at an unprecedented scale: the trust and safety teams who develop policies that moderate the millions of pieces of content posted daily on social media platforms. Over the past 20 years, social media companies have learned lessons about how to protect both the expression and the safety of their communities, and how to moderate content in line with the missions and business interests of their organizations.

Perhaps universities can learn something from them.

Of course, companies like Meta and YouTube might not be the first place most people would look for role models on how to create speech environments that balance both individual expression and community needs. Since their founding, US-based social media companies have been accused of moderating too much speech and too little, and have managed to alienate both Democrats (who typically think they permit too much harmful speech) and Republicans (who typically think they censor too much legitimate speech) – especially in the context of politics and health. They have been embroiled in debate after debate, ranging from whether breastfeeding images and nude art should be permitted to whether beheading videos and negative body image content should be removed. These questions have continued to torment them as their products have shifted from text to images to video to live video, and in recent months, to generative artificial intelligence.

They have also made misstep after misstep, with executives repeatedly called to testify in Congressional hearings to explain controversial decisions or to justify inaction. Their actions have repeatedly landed them in court, forced to defend their content practices against lawsuits from private parties, state attorneys general, and federal enforcers. The lines they have drawn around elections and public health have repeatedly changed, leading to charges of inattention, bias, and greed.

And yet, despite these challenges and failings, social media companies have arrived at policies for moderating speech in ways that balance expression and safety. Most importantly, platforms based in the US have content-based policies that go far beyond the parameters of the First Amendment. Companies such as Meta, Google, Snap, and X have broad definitions of hate speech and harassment, and when people violate these policies, the companies often impose educational remedies, rather than draconian ones.

In contrast, universities have struggled to state clearly what speech should be permissible (or impermissible) on campus, and then have resorted to arrests, suspensions, and even expulsion for students who violate campus policies, such as codes of conduct governing the use of university common space. In addition, compared to universities, social media platforms offer greater transparency about how they enforce their content policies. They also conduct research to develop a better understanding of harmful content and the efficacy of interventions designed to address it.

Learning from these approaches might help universities to better navigate the speech challenges they are facing.

Universities’ challenges today have echoes in the content controversies that have dominated tech policy for years. Tech companies, like universities today, have been called on to remove legal speech that some people believe poses a safety threat to the community or some groups within it. Tech companies, like universities today, have struggled to facilitate controversial expression while also protecting people from harm.

In the congressional hearing last fall, for instance, private university presidents suggested that free speech principles tie their hands from taking action, emphasizing that speech on its own would not warrant intervention. Harvard’s then-President Claudine Gay said that Harvard could intervene only “when speech crosses over into conduct that violates our policies” or if the speech is targeted “at an individual, severe, pervasive.” Liz Magill, then-President of the University of Pennsylvania, drew a similar line, saying that speech could be punished if it “becomes conduct.”

Similarly, some of the analysis of the recent protests has implied that universities can use only “time, place, and manner” restrictions to regulate campus speech. The idea is that “time, place, and manner” rules are content-neutral, meaning that they do not impose restrictions based on the content of the speech. These types of rules are more likely to be consistent with the First Amendment.

Social media companies have wrestled with these very same questions over the past 20 years, considering whether to take action against speech that is otherwise protected by the First Amendment. And, tech companies have reached a different conclusion than the presidents of many private universities: private organizations can and should moderate speech that is harmful to their communities. In fact, these companies are actively defending that principle in cases before the Supreme Court this term.

They have arrived at this approach through the process of evaluating billions of pieces of content while facing intensive scrutiny from press, policymakers, and researchers — in addition to their users. Most companies – including X, YouTube, Snap, and Meta – prohibit speech promoting violence, speech that bullies or intimidates, and yes, even speech that calls for harm to people based on their religious, ethnic, and racial affiliations and identities. These speech policies are content-based by design: they specify the content that you are prohibited from creating and sharing on these platforms. Even X, a platform whose policies have received heavy criticism recently on the grounds that they are too permissive, explicitly states that it prohibits “media or text that refers to or depicts…genocides.”

If antisemitism, Islamophobia, calls to kill Zionists, or celebrations of Palestinian deaths appear on social media platforms, these companies do not wait to see if the online speech is accompanied by violent conduct offline, and do not wait for additional context. If someone calls for the genocide of Jews or the murder of Palestinians, the post violates policy and should be removed. While enforcement of these policies is imperfect, and companies routinely confuse users with changing policies and inconsistent enforcement, at the end of the day they have more developed and transparent speech policies than most other organizations, including universities.

Ironically, the content on social media that has been most controversial in recent years has been almost exclusively speech that is protected by the First Amendment: false claims of election fraud, hate speech targeting people of color and women, COVID misinformation, and climate denialism. Legal scholar Daphne Keller refers to this category of content as “lawful but awful.” Social media companies have faced enormous pressure to remove this speech even though it is lawful, including from lawmakers who are barred by the First Amendment from passing laws to remove it themselves. And, in many cases – especially in health and politics – they have determined that lawful content is awful enough for their users, and the democracies and societies they live in, to warrant removing or downranking it.

It’s not as though social media companies ignore the First Amendment. They use it as a guide in developing their speech policies and enforcement procedures. Mark Zuckerberg gave a much-maligned speech to that effect at Georgetown in 2019.

But even if the First Amendment would prohibit the government from censoring this speech, it does not prohibit tech companies from enforcing their own community policies, developed to promote expression and safety, ultimately to keep their users returning to post, share, and consume. So while Congress can’t pass a law prohibiting hate speech, a social media platform can prohibit hate speech in its terms of service, and then remove that speech when it appears. As private entities, so can Harvard, Penn, and MIT.

Public universities like our own have the power to implement some content-based speech policies too, though the questions are more difficult. Unlike private universities, public universities are treated as state actors for the purposes of constitutional analysis, so their activity is constrained by the First Amendment. For government actors, content-based restrictions are subject to strict scrutiny review, and judges will strike them down unless they are narrowly tailored to serve a compelling interest. The rule must be the least restrictive means of achieving that interest. Could a university prevail in showing that a code of conduct prohibiting certain types of hate speech — like calls to kill Jews or Palestinians — is the least restrictive means of furthering its interest in cultivating an environment where Jewish and Palestinian students can participate fully in the educational community? It might.

Public universities are also obligated to comply with some content-based prohibitions under Title VI of the Civil Rights Act of 1964, which bans certain types of discrimination. In May 2023, the Department of Education emphasized that Title VI protects Jewish students from discrimination and harassment. And in a follow-up letter after the Hamas attacks on October 7th, 2023, the Department reiterated its concerns about protecting Jewish students on campus, and used the same reasoning to explicitly call out discrimination against Muslim, Arab, Palestinian, and Israeli students. Students have filed several lawsuits claiming that universities, including public universities, have violated Title VI because they have tolerated antisemitic, anti-Israeli, anti-Zionist, and anti-Palestinian speech.

Why do social media companies take action against content that is protected from government censorship by the First Amendment? In general, the rationale is that they must take action against this content in order to cultivate the kinds of communities they want. Social media companies tend to have mission statements focused on information access and distribution. For example, YouTube’s mission is “to give everyone a voice and show them the world.” But you might never go to YouTube if you saw a graphic video every time you opened it up. If Facebook didn’t prohibit some forms of nudity, it might essentially be a porn site. If people were bullied without recourse on TikTok, they’d be less interested in using the app. Likewise, permitting antisemitism or Islamophobia might frustrate social media companies’ missions. Companies believe these rules help them to advance their missions, and they also help them achieve their revenue goals. If advertisers repeatedly see their ads alongside content they believe to be problematic, they’re likely to take their ad dollars elsewhere. Prominent ad boycotts from companies like Patagonia have made this risk clear.

Although social media companies and universities clearly have different missions, a similar rationale applies in the university context. Universities have an educational mission: they are communities of learning, and play a quasi-parental role in mentoring and guiding students. Title VI recognizes this reality, as the Department of Education asserted in its May 2023 letter. It said that if there is “harassing conduct that is sufficiently severe, pervasive, or persistent so as to interfere with or limit the ability of an individual to participate in or benefit from the services, activities, or privileges provided by a school,” then a university “must take immediate and appropriate action to respond.” If administrators sit on their hands when hate speech against a religious or ethnic group disrupts a student’s ability to participate fully in campus life, they may undermine the fundamental mission of their organizations: to educate students.

Consistent with their missions, social media platforms govern speech through four core elements that we believe universities could learn from and apply in their own organizations: their policies, the remedies they apply to violators, transparency, and research.

First, social media platforms restrict content that threatens their missions and the communities they seek to build. Similarly, universities should place a paramount emphasis on their educational missions and restrict speech that is destructive to the learning experience. Social media companies in turn have sought to make these restrictions as narrow as possible to preserve the balance between expression and safety, constantly evaluating whether speech will cause actual harm to their communities (or beyond them). By the same token, universities should evaluate their policies through the lens of whether speech will result in actual harm to the learning experience, not simply if it is offensive or upsetting.

Second, social media platforms impose remedies for violations of their community guidelines in ways that universities should learn from. Platforms separate the process of determining whether a violation has occurred from the decision about what remedy to impose.

In general, social media platforms do not ban a user after a first violation. Instead, they couple warnings with education, and some platforms even reduce penalties if a user completes a training on the company’s content policies. On YouTube, for instance, a violator receives a “warning,” but the warning expires after 90 days if they complete a “policy training.” A person can be expelled from a platform, but expulsion typically occurs only after repeated violations of the same policy within a specific window of time. Platforms may also connect offenders to educational resources to help them understand what they did wrong, the rationale for the speech prohibition, and how to avoid policy violations in the future.

Universities can learn from this and seek to impose remedies for violations that are consistent with their role as educational institutions. A commitment to education means that universities should avoid binary approaches to punishment that are too lenient (ignoring problematic conduct) or too heavy-handed (arrests and expulsion). They could give warnings to students who violate the speech provisions of codes of conduct, rather than suspending, expelling, or arresting them. They could provide educational opportunities developed by faculty to students who commit violations of speech and conduct provisions, and facilitate mediated dialogues across lines of difference to help students better understand the implications of what they have said. The most severe remedies, like suspension or expulsion, could be reserved for repeat offenders.

For some content, social media companies limit its reach rather than removing it entirely. Similarly, for certain categories of sensitive speech, universities could permit it only in certain locations (essentially a “time, place, or manner” restriction) so it does not disrupt their educational missions. Or they could restrict the prevalence of the speech in other ways, such as by limiting the number of events, imposing caps on the size of certain protests, or restricting where organizers are able to post event information.

Of course, student protestors often deliberately violate policy and disrupt learning to gain attention and convey the urgency of the problems they are demanding a response to – that is the point of social movements. In such cases, universities must weigh their educational missions and the learning environments of their students, and of the groups those students are part of, before responding. Ultimately, any response must be guided by the educational needs of their students and communities, as determined through a consultative process with faculty and student stakeholders – not just university administrators acting unilaterally.

Social media companies also take steps to ensure that violations don’t follow a person forever. A “strike” expires after one year on Facebook and Instagram, for instance. Similarly, universities could expunge speech-related violations from a student’s record after one year, or remove them once they engage in related educational opportunities. Harvard has already taken steps to implement a similar approach. After negotiating with protestors to end the encampment on Harvard Yard, interim president Alan Garber stated that he would “ask that the Schools promptly initiate applicable reinstatement proceedings for all individuals who have been placed on involuntary leaves of absence.”

Social media companies also provide transparency into their speech policies and enforcement. Companies publish their policies in their apps and on their websites so that users are able to review the rules. They publish transparency reports that include statistics on the volume of content they remove for each category of speech they prohibit. Some companies also submit to third-party audits to evaluate whether they are acting in accordance with their published terms, such as through membership in the Global Network Initiative and the Digital Trust & Safety Partnership. In some circumstances, companies commit to voluntary codes of conduct that govern their speech practices, such as the recent AI Elections Accord. Several companies have active consent orders with the Federal Trade Commission (FTC), and if allegations emerge that they have violated their commitments under those agreements, they are subject to investigations, fines, and other penalties. Meta has even developed a quasi-judicial process for adjudicating speech-related decisions: its Oversight Board.

Similarly, universities could publish statistics on their enforcement of campus speech policies and provide detailed rationales for speech-related decisions, including after-the-fact reports on campus controversies that stakeholders can then convene around. Just as social media platforms submit to external oversight, universities could participate in voluntary audit organizations that publish best practices for developing and enforcing campus speech policies. And just as the FTC’s consent orders provide a mechanism for government oversight of harmful content on social media, Title VI enforcement might also serve as a useful tool for identifying and addressing discriminatory speech at universities. In fact, in November 2023, the Department of Education announced a list of universities under investigation for violations related to antisemitic, anti-Muslim, and anti-Arab speech.

Finally, social media companies conduct ongoing research that enables them to better understand potential harms associated with their policies and to develop options for improvement. Some of that research is conducted by internal teams. In other instances, a company may commission external research or provide data that independent researchers can use. Many tech companies have adopted some of the best practices of other industries, and now routinely conduct “human rights impact assessments” to evaluate the impact of their tools on human rights. They use these assessments to analyze how their speech policies will work in different political and social environments. They also solicit feedback from a wide range of stakeholders, including civil society organizations, so they can bring that expertise to bear on their policies.

Research is the lifeblood of universities, so conducting research on speech-related harms is entirely consistent with universities’ expertise and educational missions. Universities should conduct research to better understand campus speech harms and to evaluate the efficacy of their speech policies. They should conduct impact assessments to understand the costs and benefits of their speech policies for university communities. And they should solicit feedback from a diverse range of stakeholders in the field, and then use those lessons to improve their practices over time.

Of course, there are some big differences between virtual, online spaces and real-life university spaces. Universities have an educational mission; social media companies have mission statements that focus on information sharing. Social media platforms are public companies, beholden to Wall Street’s revenue targets and accountable to shareholders. In contrast, even though universities’ handling of recent issues like athlete compensation and COVID reveals that they too focus on the bottom line, most universities are nonprofit organizations. And at public universities, professors and staff are employees of the state, not a private entity.

There are other important differences too. Universities aiming to fulfill an in loco parentis role have a more connected relationship with students than platforms typically have with their users. Students are not simply “users” of a university community; many students also live on campus and get their meals there. The connectedness of this relationship makes speech resonate differently in each context. Online speech has powerful effects, but there is a strong argument that in-person speech on university campuses may have a more significant impact on individuals.

And of course, social media platforms do not hold all of the answers. They are still in the early stages of figuring out what their policies should be on some of the most difficult speech questions to emerge from October 7th and the Israel-Hamas War. While universities may be able to emulate social media companies’ prohibition of calls to kill Jews, Israelis, Muslims, and Palestinians, the lessons are blurrier for other content, like how to approach chants of “from the river to the sea.”

The answers to questions like these are not yet clear, either online or on campus. But trust and safety professionals at social media companies have spent the past 20 years working through some of the hardest questions about how to police speech, and universities may be able to take lessons away from their experiences. A commitment to governing speech consistent with an organization’s mission – through well-tailored remedies, transparency, research, and engagement with a broad array of stakeholders – should guide the path forward.

Authors

Daniel Kreiss
Daniel Kreiss is the Edgar Thomas Cato Distinguished Associate Professor in the Hussman School of Journalism and Media at the University of North Carolina at Chapel Hill and a principal researcher of the UNC Center for Information, Technology, and Public Life.
Matt Perault
Matt Perault is the director of the Center on Technology Policy at UNC-Chapel Hill, a professor of the practice at UNC’s School of Information and Library Science, and a consultant on technology policy issues at Open Water Strategies. Matt previously led the Center on Science & Technology Policy at ...
