
New Study Suggests Right-Wing Bias in YouTube Recommendation Algorithm

Prithvi Iyer / Dec 7, 2023

YouTube is the most popular video platform in the world and has been described as potentially “one of the most powerful radicalizing instruments of the 21st century.” However, this claim has not been conclusively backed by scientific evidence, in part because YouTube’s sheer scale and opacity make the platform difficult to study.

A new paper by researchers from Princeton University and UC Davis subjects two related concerns to empirical scrutiny. The authors examine whether a user’s YouTube recommendations are “congenial” (read: compatible) with their political ideology, and whether these recommendations become more extreme as the user goes deeper down the YouTube “rabbit hole.” Contrary to popular belief, the authors do not find meaningful increases in ideological extremity in video recommendations. However, they do notice that a “growing proportion” of recommendations comes from “problematic” channels such as “Alt-right,” “Conspiracy,” “Socialist,” and “QAnon,” among others, especially for users categorized as “right” and “very-right.” They conclude that their audit of the platform “suggests the presence of right-wing bias in YouTube’s algorithm.”

“This paper adds another piece to the puzzle of understanding the effects of algorithmic recommendations,” said Manoel Ribeiro, a researcher at EPFL who has looked at similar questions. “Their findings indicate that, indeed, following the ‘algorithmic rabbit hole’ increases the recommendation of extreme content. This paper stands out because the authors conducted a high quality, comprehensive audit on an unprecedented scale.”

While researchers like Ribeiro have previously shown that YouTube indeed drives ideologically moderate users toward extreme channels, research by Hosseinmardi et al. found “little evidence that YouTube’s recommendation algorithm is driving attention to radical content.” The authors of the new paper attribute these conflicting conclusions to methodological differences in the studies that produced them. One approach in prior work used untrained sock puppets (dummy accounts with no user history), which could not accurately mimic real user activity. Research that studies the behavior of actual users, meanwhile, cannot disentangle the role of the algorithm from the “actions of the user.”

To address these concerns, the authors use 100,000 “trained sock puppets.” Put simply, these sock puppets are “automated browser instances that mimic YouTube users by watching videos and gathering recommendations.” These sock puppets are then “trained” by watching videos from a particular political ideology. As with a real user, YouTube provides personalized recommendations for these sock puppets based on what the platform thinks would drive engagement. The recommendations of these trained sock puppets on the YouTube homepage are used to test the platform’s role in exacerbating polarization and online radicalization.
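To make the audit design concrete, the loop below is a minimal sketch of how such a trained sock puppet might be driven. The helper functions (watch_video, get_homepage_recommendations) are hypothetical stand-ins for the browser automation the authors describe, not the study’s actual tooling.

```python
def watch_video(puppet_id: str, video_id: str) -> None:
    """Hypothetical stand-in: drive an automated browser instance to play
    a video so that YouTube accumulates watch history for this puppet."""
    ...

def get_homepage_recommendations(puppet_id: str) -> list[str]:
    """Hypothetical stand-in: collect the video IDs currently shown on the
    puppet's YouTube homepage."""
    ...

def run_audit(puppet_id: str, training_videos: list[str]) -> list[str]:
    # Training phase: build a one-sided watch history from a single ideology.
    for video_id in training_videos:
        watch_video(puppet_id, video_id)
    # Testing phase: record the personalized homepage recommendations.
    return get_homepage_recommendations(puppet_id)
```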

The authors find that the homepage recommendations for the trained sock puppets are ideologically congenial, though less uniformly so than the training videos, which were drawn exclusively from a single political ideology. This finding aligns with YouTube’s policy of recommending a “mixture of personalized recommendations, subscriptions, and the latest news and information.” When comparing the “congeniality” of sock puppets across different ideologies, the authors find that for users on the far left and far right, the homepage recommendations are more likely to be of the same ideology, thereby encouraging echo chambers. Specifically, the probability that a far-right user’s first recommendation is compatible with their political perspective is 65.5%, a figure that is slightly lower for far-left users (58.0%). When the authors examine the congeniality of all eight video recommendations on the YouTube homepage, they find that “recommendations for very-right sock puppets are significantly more right-leaning compared to the center recommendations than those for the very-left sock puppet.” The researchers also conclude that ideologically cross-cutting recommendations are offered significantly less often to far-right users.
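As a rough illustration, congeniality figures of this kind could be estimated from audit output along the following lines; the (puppet_ideology, rec_ideologies) record format is assumed for the example and is not taken from the paper.

```python
from collections import defaultdict

def first_recommendation_congeniality(records):
    """records: iterable of (puppet_ideology, rec_ideologies) pairs, where
    rec_ideologies lists the ideology labels of a puppet's homepage
    recommendations in display order. Returns, per ideology, the share of
    puppets whose first recommendation matches their own ideology."""
    matches, totals = defaultdict(int), defaultdict(int)
    for puppet_ideology, rec_ideologies in records:
        totals[puppet_ideology] += 1
        if rec_ideologies and rec_ideologies[0] == puppet_ideology:
            matches[puppet_ideology] += 1
    return {ideology: matches[ideology] / totals[ideology] for ideology in totals}
```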

Does this phenomenon get worse the longer someone watches? To test whether ideological congeniality increases the longer users watch YouTube videos, the authors follow the “Up-Next” recommendation trail for the sock puppet accounts and find it to “increase the odds that right-leaning sock puppets will continue to watch ideologically congenial videos,” while it “does not increase recommendations to congenial videos for left-leaning sock puppets.” Cumulatively, these findings suggest that the potential for filter bubbles via YouTube recommendations is not uniform; rather, it is higher for users on the political right.
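A sketch of such a trail walk, under the same assumptions as the earlier sketch (the helper functions here are again hypothetical stand-ins for the audit tooling), might look like this:

```python
def next_up_recommendation(puppet_id: str, video_id: str) -> str:
    """Hypothetical stand-in: return the top 'Up-Next' video queued after
    the puppet watches the given video."""
    ...

def ideology_of(video_id: str) -> str:
    """Hypothetical stand-in: look up the estimated ideology label of a video."""
    ...

def follow_up_next_trail(puppet_id: str, start_video: str, depth: int) -> list[str]:
    # Walk the autoplay trail, always taking the top recommendation,
    # and record the ideology label of each video along the way.
    trail, current = [], start_video
    for _ in range(depth):
        current = next_up_recommendation(puppet_id, current)
        trail.append(ideology_of(current))
    return trail
```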

Another important dimension of the paper’s findings relates to the relationship between YouTube’s recommendation algorithm and online radicalization. Here, the authors rely on their estimates of each video’s ideological slant. They also compile a list of 4,150 problematic YouTube channels known to promote extremist content and check whether videos from these channels show up in the recommendation trail. The findings indicate that recommendations get only “slightly more extreme as the sock puppets traverse the trail.” Interestingly, recommendations grow more extreme for far-left and far-right sock puppets, while moderate sock puppets do not see significant increases in extremity. When it comes to exposure to problematic YouTube channels, however, the findings are disturbing. On average, 36.1% of sock puppets across the ideological spectrum receive recommendations from problematic channels somewhere in the trail, a figure that rises to 40% for far-right sock puppets. This indicates that the likelihood of encountering extremist content increases substantially the longer a user spends on YouTube.
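Exposure figures of this kind amount to a simple membership check against the channel list. Here is a minimal sketch, assuming a trails mapping from puppet IDs to the channels recommended along each trail (an illustrative data structure, not the paper’s own):

```python
def problematic_exposure_rate(trails: dict[str, list[str]],
                              problematic_channels: set[str]) -> float:
    """trails maps each puppet ID to the channel IDs of the videos it was
    recommended along its trail. Returns the fraction of puppets that were
    recommended at least one video from a problematic channel."""
    if not trails:
        return 0.0
    exposed = sum(
        1 for channels in trails.values()
        if any(channel in problematic_channels for channel in channels)
    )
    return exposed / len(trails)
```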

Recommendation algorithms are almost always optimized to drive user engagement. This study indicates that for political content on YouTube, the platform indeed recommends content “closely aligned with prior political leanings of users.” However, this does not mean that the algorithm promotes only like-minded content: sock puppets also received cross-cutting content in the testing phase. Most notably, the study found that for very-right sock puppets, the chances of encountering far-right recommendations increased by 37% the longer they engaged with YouTube, with the videos getting progressively more extreme. Thus, although it is rare for the average YouTube user to be driven toward extremism and filter bubbles, those with more extreme political beliefs on either side of the political spectrum may be more susceptible to online radicalization.

“One should be extremely careful when reporting those findings because there are important limitations,” cautioned Ribeiro. “They did not use real user histories, which means that it is unclear to what extent their experiment translates to the experience of someone using YouTube. They do not simulate real users, with agency, which do not blindly follow recommendations as they are shown to them.”

The authors of the new paper acknowledge this. “Studies extending our work to actual YouTube users and testing over time effects of algorithmic systems are needed,” they write. Still, exposure to extremist channels on YouTube remains dangerous, and the platform hosts a massive amount of such content. Given the political climate in the US and beyond, the need for transparency in recommendation algorithms and for frequent audits of these platforms is more pressing than ever.
