
Perhaps YouTube Fixed Its Algorithm. It Did Not Fix Its Extremism Problem

Cameron Ballard / Nov 9, 2023

Cameron Ballard is Director of Research at Pluro Labs, a non-profit that harnesses AI to deepen and defend democracy in the United States and globally.

Recent research suggests that YouTube has substantially addressed the problem of online “rabbit holes” that lead individuals to extreme content and misinformation. The reality is that, whatever improvements it has made to its algorithms, YouTube is still a massive repository of dangerous content that spreads across other social media and messaging apps both organically and through recommendations, particularly in non-English speaking communities. Too little is known about these phenomena, but what is clear is that YouTube is hardly without fault when it comes to the overall volume of hatred, conspiracy theories, and misinformation on social media.

Algorithms are not the entire story

Algorithms are undeniably influential in modern life. They affect not just the online content we consume, but access to credit, employment, medical treatment, judicial sentences, and more. The companies that build them present them as inscrutable systems, impossible for outsiders to understand. The supposed complexity of an algorithm is used not just for marketing; it also allows tech companies to shirk responsibility for their own policies and development priorities. When something goes wrong, a “bad algorithm” is blamed.

However, if you peel back the layers of statistical complexity, at the end of the day an algorithm is just a set of instructions: a recipe. If you went to a restaurant and were told the food was bad simply because of a bad recipe, would that explanation satisfy you? Or would you wonder why that recipe was used in the first place? Who decided on it? Why haven’t they stopped using it? These same questions should be leveled at tech companies. Treating the algorithm as the beginning and end of harmful online content prevents a real solution by obscuring larger structural concerns and the bad incentives that plague YouTube and other social media platforms.

While Facebook received the bulk of public scrutiny in the aftermath of the 2016 election because of the Cambridge Analytica scandal, YouTube also served as a home for the alt-right and political conspiracy theorists. Commentators and politicians blamed social media companies’ algorithms for the promotion of harmful content. Three years later, YouTube announced it had made changes to its recommendation algorithm that reduced the prominence of harmful content. Today, much academic and journalistic research on social media remains focused on “the algorithm,” with a few recent studies supporting YouTube’s claim that its recommendation algorithm less frequently promotes fringe content to viewers who aren’t seeking it out. These studies received media attention, leading some to conclude that YouTube’s radicalization problem was largely fixed. Whether it was because of this impression, the more intense focus on Twitter and Facebook, or something else entirely, former YouTube CEO Susan Wojcicki was notably absent when Mark Zuckerberg and Jack Dorsey were brought before Congress.

The fact is, YouTube remains a major site for political extremism, conspiracy theories, and other harmful content. The same studies that show improvements in YouTube’s algorithm concede that the platform still promotes and hosts a range of alternative and extremist content. In particular, Chen et al. found that while harmful content was mainly recommended to people already subscribed to channels producing it, those recommendations still occurred at a rate similar to mainstream content. Comparing off-platform (from another site) and on-platform (YouTube’s algorithm) referrals, they found that roughly half of both mainstream and alternative views came from on-platform referrals, and, similarly, that roughly half of the recommendations on harmful videos directed users to other harmful content.

One conclusion is that the algorithm has stopped promoting harmful content to mainstream viewers. Another is that it now promotes this content precisely to the people most likely to believe it. Either way, even if the YouTube algorithm isn’t driving unsuspecting people down rabbit holes, it still promotes extremist content and provides a safe space for creators to build a community.

This is not an academic exercise

These communities have real-world consequences. In 2020, well after its algorithmic changes, YouTube was a major host for QAnon content. I worked with the January 6 House Select Committee to find evidence of creators promoting the insurrection and calling for violence on YouTube. In Germany, YouTube not only contributed to the growth of QAnon across the Atlantic, but also provided a digital organizing space for the Reichsbürger movement.

Admittedly, YouTube cracked down on this content after the fact, removing many QAnon channels in 2020 and many channels involved in January 6 after the insurrection. But these were reactive responses that came too late. Without moments of intense public scrutiny, there is little incentive for social media companies to change their behavior. Indeed, YouTube even walked back its policy on 2020 election misinformation after public backlash over online misinformation died down, claiming the threat had passed.

If we step back from the failures and successes of “the algorithm,” we can recognize that YouTube has a strong profit motive to foster these harmful communities. Academics and journalists have demonstrated how fake news and outrageous content can drive ad revenue to creators and platforms. In my own research, I characterized how this advertising plays out on YouTube conspiracy videos specifically. A recent report by Ekō showed how the site profits off of anti-LGBTQ+ content. YouTube’s algorithmic changes may have removed these videos from mainstream circulation, but it continues to promote this content to an audience and profit off of its success.

In other sectors of the advertising industry, this behavior would be unacceptable. Social media companies’ marketing value comes from their ability to segment viewers into distinct audience groups so that advertisers can deliver content to interested groups. It would be absurd if Nielsen created “American white nationalist” or “German antisemitic terrorist group” audiences. Yet when YouTube maintains, promotes, and profits off of such communities, it can dismiss them as an unintended consequence of the algorithm and promise to do better next time. As long as YouTube can harbor and profit off this content without ramifications, it will continue to find ways to promote it. The problem will not be fixed; it will simply be hidden from view.

A global problem requires a global approach

A focus on algorithmic promotion of extreme political ideologies in the US also fails to adequately address international content. Studies of the site rely on subsets of harmful content, painstakingly collected and labeled by academic researchers. This leads to a bias toward English-language content from the United States, framed by US politics. Prominent conspiracy theorists and white nationalist figures like Alex Jones or Nick Fuentes have been banned, but similar Latin American political commentators remain. For instance, a host of 'news' channels parrot conspiracy theories broadcast on Mexican President Andrés Manuel López Obrador’s own YouTube channel. While YouTube may have reduced the prominence of harmful English-language content, little is known about the same problem in other languages and locales. Changes to the platform’s incentive structure and accountability would do far more to improve YouTube globally than any game of algorithmic whack-a-mole ever could.

Greater global involvement would also provide key levers for actually changing structural incentives at social media companies. In the US, online moderation is often framed as a free speech issue, shaping public discourse and ultimately limiting any government response. Platforms can shirk responsibility for the content they host by claiming they uphold free speech rights with an algorithm that promotes all content equally. In reality, they are still deciding what content to promote, even if they only promote it to a small audience and offload the decision-making process onto an algorithm. Outside the US, in places like Germany, restrictions on speech are much more common and accepted. The EU is a leader in data privacy and tech regulation. Extending these policies to push more aggressively for platform accountability for the content platforms host and promote could force YouTube to change its policies in a way that improves content moderation across the world.

A technical solution is not enough

Ultimately, it’s important to remember that radicalization and extremism are social problems, not just technical ones. The people who stormed the US Capitol on January 6 didn’t do it because they found some video online. They did it because there were both on- and offline communities promoting 2020 election misinformation and calling for a revolution on January 6, bolstered by political elites and partisan media that fed their furor. The algorithm may have promoted these videos less to a mainstream audience, but YouTube did little else to curb this content until well after it became a problem with real-world consequences.

It is better to understand harmful content, the creators who make it, and the platforms that host and promote it as part of a community, rather than as lone actors posting individual videos to a neutral and unaccountable platform. If I don’t find that Nick Fuentes video through a YouTube recommendation, one of my friends might find it and send it to me on Telegram. Instead of asking whether or not YouTube recommends alternative content to the average Joe, we should be asking what its role is in the creation and maintenance of spaces for actively harmful communities. YouTube still hosts this content. It still promotes it to subscribers. It still allows users to express support and direct one another to other harmful content. And it still profits from it and shares those profits with creators. Without addressing these phenomena, we can never adequately address the problem of damaging online content.

Moving beyond the algorithm

My goal is not to fault studies of YouTube’s algorithm. These kinds of studies are necessary and good. Unfortunately, public focus on the success or failure of social media algorithms distracts from the conclusions we should be drawing. Even the study authors agree that the problems of harmful content “center on the way social media platforms enable the distribution of potentially harmful content to vulnerable audiences rather than algorithmic exposure itself.” Chen’s study ultimately argues not that harmful content is no problem on YouTube, but that the “rabbit-hole” phenomenon of algorithmic radicalization is not representative of actual online behavior.

I don’t know if YouTube fixed its algorithm. Maybe the problem of rabbit holes is fixed. Maybe it never existed. But I do know that YouTube is still a major space for harmful online content. Even if YouTube fixed its algorithm, that doesn’t mean the whole platform is fixed. It just means we need different strategies to fix it.
