Buffalo Shooting Demonstrates the Limitations of Existing Content Moderation Protocols

Welton Chang / May 17, 2022

Dr. Welton Chang is co-founder and CEO of Pyrra Technologies. Previously he was the first Chief Technology Officer at Human Rights First.

Tops Supermarket, Jefferson Avenue, Buffalo, New York. Adapted from Andre Carrotflower.

In the aftermath of the Buffalo, New York mass shooting, the role of major social media platforms such as Twitch, Meta/Facebook, Discord and Twitter in the spread of media associated with the white supremacist attack is under scrutiny. While much of the criticism of these platforms is well deserved, it often obscures much deeper problems with present-day content moderation.

First, stopping the spread of noxious content such as video of the Buffalo mass shooting is not something that platforms, even major ones, can do on their own. This is because there is an inherent asymmetry between broadcast capability, content proliferation, and the efficacy of moderation systems. When every internet user has relatively anonymous access to commercial-grade content-delivery systems, there will always be a workaround to actions taken by centrally managed platforms. While Facebook did not host the video of the shooting itself, as it did during the Christchurch attack, users were able to share links to the much smaller platform Streamable, generating millions of views before Facebook's teams could stem the tide of violative content.

As of this writing, versions of the video and the shooter’s manifesto are available across a smattering of private file-hosting sites, the dark web, and even large platforms such as Dropbox. While the Global Internet Forum to Counter Terrorism (GIFCT) does a commendable job given what it was designed for (namely, enabling and coordinating content removal at large tech companies), smaller companies and sites are not part of the voluntary organization. The root issue has two parts: 1) user-generated content (UGC) can be problematic for platforms even if they do not host the content, and 2) reliable file-sharing is no longer difficult to create and access, rendering measures such as GIFCT’s “Content Incident Protocol” less effective. Ultimately, every platform that hosts content must be concerned not only with its own UGC but with the UGC of every other internet platform, since it is so easy to hotlink to violative content. That is a decidedly difficult situation to fully grapple with.
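
To make the limitation concrete, here is a minimal sketch of the kind of hash-list check that industry hash-sharing enables. The hash value and list below are hypothetical placeholders, and the sketch uses plain SHA-256 rather than any particular consortium's format; an exact digest stops matching the moment a copy is re-encoded, trimmed, or re-hosted, which is part of why copies keep resurfacing on smaller sites.

```python
import hashlib

# Hypothetical shared hash list, standing in for the kind of database that
# industry consortia exchange; the entry below is a placeholder, not a real hash.
SHARED_HASH_LIST = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def upload_matches_hash_list(path: str) -> bool:
    """Return True if the uploaded file's SHA-256 digest is on the shared list.

    An exact digest breaks as soon as the file is re-encoded, cropped, or
    trimmed, so this check alone cannot keep a video off the wider web.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() in SHARED_HASH_LIST
```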

Second, the role of smaller social media sites, video platforms, and file-sharing platforms in contributing to the radicalization of the shooter in the first place needs to be more broadly and publicly examined. The radicalization pathway of the shooter is deeply familiar to anyone who studies modern far-right violent extremists: exposure to content in a dark corner of the internet that spurs curiosity (in this case, a GIF of the Christchurch massacre). With just a bit of internet sleuthing, the shooter found the entire video and went down a content rabbit hole that helped inspire the brutal and senseless murders of ten innocent souls in Buffalo. Many of these smaller platforms lack in-house content moderation, lack the technology (or the capacity to build it) for automated content detection and removal, and perhaps lack the will to pursue even the bare minimum of compliance with existing content norms and protocols.

And why should these companies care? They remain largely shielded from liability by Section 230 of the Communications Decency Act, a situation that Congress and state legislatures have been unable to find the consensus to change. Sadly, almost three full years after the Christchurch Call, versions of the massacre video, which the Buffalo shooter explicitly cites as inspiration for his actions, are still easily found on both the surface web and the dark web. It is not enough to simply call out the site 4chan; praise for the shooter’s actions reverberates on chan clone sites and other far-right and extremist forums. We must recognize that there is an entire ecosystem of content that supports the belief systems that lead to tragedies such as Buffalo.

Third, even if platforms, the public, and governments agree on removing the most noxious content (and that’s a big if, considering the vocal outcries from free speech absolutists), there is a complete lack of agreement on how to deal with the sort of content that formed the basis and justification for the Buffalo shooter’s actions: so-called replacement theory. A simple search for the hashtag #whitegenocide on Twitter reveals a sprawling, active conversation around this false conspiracy theory. Even on a platform with a large cohort of in-house content moderators and deep investment in AI tools to automatically detect and remove violative content, replacement theory has long had a home. Recent amplification of the false idea by Tucker Carlson and Republican lawmakers has injected it into the mainstream, but Twitter has been a hospitable host for the conversation for years.

So here we are again, dealing with the aftermath of another mass shooting, in a media environment that will soon move on to the next crisis and a social media environment that continues to amplify and incentivize extreme content for clicks and eyeballs, including false claims that the Buffalo shooting was a false flag attack. As someone who has been building systems to tackle disinformation and hate since 2017, I find it hard not to become discouraged. The status quo cannot hold: right now, someone is likely watching Buffalo shooting-related content, perhaps reading the manifestos of other mass shooters, and finding inspiration for a future heinous act.

Yes, more must be done, but the lack of accountability levers available to the government (the only entity that can compel technology companies to act, given most of their shareholder structures) makes voluntary compliance the only way this problem gets better in the short term. Specific carve-outs to Section 230’s liability shield are one route to change, but it is right to be wary of overbearing government interference in private enterprise and free expression. Even absolutists such as Elon Musk recognize that there are legal limitations on speech, such as speech that incites imminent violence. A non-adversarial, even cooperative, approach to content decision-making is preferable. While Facebook’s Oversight Board (OB) is not without faults (Facebook maintains complete control over the cases that are referred to the OB), the spirit in which the organization operates is more in line with acceptable solutions in the space, as private companies struggle with existing economic incentives: what may be good for the public and society in general may not be good for the bottom line. But sometimes the public benefit outweighs these other important factors, and the only way to achieve compliance is through government action and the threat of action.

Just as we should not completely trust private companies with weighty legal matters such as what constitutes compliance with existing free speech laws and rulings, we should not trust the government with developing and maintaining the technology needed to help companies police their online spaces and enforce their policies. While GIFCT and organizations like it, such as the National Center for Missing and Exploited Children (NCMEC), can help by hashing violative content, the hashes do little good for companies and sites without the ability or willingness to employ technical solutions on their sites. Companies such as Google and Facebook have open-sourced algorithms in the past, and one way to help smaller companies deal with problematic content is indeed to open-source computer vision and other detection algorithms, as well as to share compute and storage resources. There is simply too much attack surface for companies to be selfish with this technology, considering that third-party-hosted content remains one of the main vectors for exposing users to violative content (see: Streamable links on Facebook).
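
As an illustration of what employing such technical solutions can look like, the sketch below matches an uploaded image against a shared list of perceptual hashes, in the spirit of open-sourced detection algorithms such as Facebook's PDQ. It uses the third-party imagehash library as a stand-in, and the hash list, threshold, and function name are hypothetical; unlike an exact digest, a perceptual hash shifts only slightly under resizing or re-compression, so near matches (small Hamming distance) flag likely copies.

```python
# A minimal sketch of perceptual-hash matching; requires `pip install pillow imagehash`.
from PIL import Image
import imagehash

# Hypothetical list of hex-encoded 64-bit perceptual hashes shared by a consortium.
SHARED_PERCEPTUAL_HASHES = [imagehash.hex_to_hash("d1d1b1a1e1c1f101")]

MAX_HAMMING_DISTANCE = 8  # illustrative threshold; real deployments tune this carefully

def resembles_known_violative_image(path: str) -> bool:
    """Flag an upload whose perceptual hash is close to a known hash.

    Perceptual hashes change only slightly when an image is resized,
    re-compressed, or lightly edited, so a small Hamming distance to a
    listed hash indicates a likely copy rather than an exact duplicate.
    """
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= MAX_HAMMING_DISTANCE
               for known in SHARED_PERCEPTUAL_HASHES)
```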

More government intervention. More tech solutionism. Even though we need both, they ultimately won’t be enough to defeat white supremacy or other forms of violent extremism. The ideas and the content will persist, and users will continue to proliferate them, some doing so to be “edgy” and others with the intent to influence or provoke violence. From reading the shooter’s manifesto, it’s clear that he discovered a seductive set of ideas that, while internally consistent, is completely divorced from reality. This is a phenomenon similar to what happens to QAnon adherents, cultists, and others radicalized online: the ideas appear truthful because the ecosystem in which they proliferate is entirely self-referential.

Ultimately, that is the problem that is so hard to address: these ideas are difficult to contain. Even if the largest social media and messaging platforms achieve perfect moderation (an ideal that is far from the current reality), the threat will persist, as white supremacy is deeply embedded in American society, in our politics, in our communities, and even in families. Just as we must take a systemic, cross-platform view of the content moderation problem that violent extremism poses, we must take a whole-of-society approach to confronting the hate that inspired the attack in Buffalo.
