Why False Bias Claims Don’t Undermine the Case for Social Media Regulation
Amber Sinha / Jun 17, 2025
Amber Sinha is a contributing editor at Tech Policy Press.
Days before the recently concluded presidential election in Poland, the international NGO Global Witness found that TikTok showed new users five times more content supporting nationalist-right candidate Karol Nawrocki than centrist candidate Rafal Trzaskowski. This finding came weeks after another Global Witness investigation, in Romania, which revealed that TikTok’s algorithm was serving nearly three times as much far-right content as other political content. Earlier this year, an investigation conducted prior to the German elections revealed similar behavior by the recommendation systems of X and TikTok.
These findings, arriving at the end of a global election megacycle, underscore the failure of large social media platforms to respond to extremist online content, despite numerous studies showing why addressing it matters for election integrity. These investigations by Global Witness and other civil society organizations over the past two years also run counter to the narrative, popular on the far right, that social media has an anti-conservative bias.
Understanding the social network algorithmic bias narrative
Recommendation systems, the engagement-driven algorithms that decide what users see, have consistently been identified as key drivers of the global spread of harmful content such as health misinformation, political disinformation, and hate speech. A striking example is YouTube, where Mozilla's YouTube Regrets project highlighted that the recommendation algorithm drives an estimated 700 million hours of watch time every day (roughly 70% of the total), profoundly shaping what viewers see and fueling radicalization and societal division. Likewise, internal revelations from Meta confirm that its platforms' fundamental mechanics, such as virality and optimization for engagement, incentivize angry and polarizing content, including hate speech and misinformation, even as the company acknowledges the detrimental effects. TikTok's algorithms are similarly implicated in actively promoting radicalization, polarization, and extremism, acting as a direct conduit for harmful material rather than merely offering personalized content.
Experts have long understood these inherent problems with engagement-based algorithms. For instance, throughout the 2024 election cycle, it was observed that algorithms on Meta, TikTok, and YouTube consistently favored sensational and polarizing content, pushing misleading information to voters in Taiwan and elsewhere. Eve Chiu, CEO of the Taiwan FactCheck Center, articulated this issue, stating that the algorithmic media ecosystem often prioritizes "low-quality information like disinformation, misinformation, fake news, sensational stuff, rumors, hoax" over high-quality content.
Much like the echo chambers reinforced by social media platforms, the narrative about how platforms treat content across the ideological spectrum operates in starkly divided bubbles of its own. Conservatives have long argued that social media platforms are biased against their point of view. Among the sweeping content moderation changes made at Meta earlier this year, CEO Mark Zuckerberg also announced the relocation of moderation and safety teams from California to Texas. Announced ahead of the Trump inauguration, this much-publicized move was a play for MAGA approval, meant to reassure a conservative base by addressing “the concern that biased employees are overly censoring content.”
However, evidence for this belief remains scant. Research by Yale SOM’s Tauhid Zaman, analyzing data from 2020, found that “accounts sharing pro-Trump or conservative hashtags were suspended at a significantly higher rate than those sharing pro-Biden or liberal hashtags—they were about 4.4 times more likely to be suspended.” On the face of it, this suggests an anti-conservative bias; however, the research also found that users associated with conservative-leaning content were more likely to share links from low-quality or misinformation-heavy sources, which explains the disproportionate number of suspensions.
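To see why an ideology-blind enforcement rule can still produce lopsided suspension rates, consider a minimal simulation sketch. The group names, misinformation-sharing rates, and suspension probabilities below are purely hypothetical illustrations, not figures from Zaman’s study; the point is only that different rates of low-quality link sharing are enough to generate a gap like the one observed, without any rule that looks at politics.

```python
import random

random.seed(0)

# Hypothetical parameters, NOT taken from the study: the two groups differ
# only in how often their accounts share links from low-quality sources.
GROUPS = {
    "conservative-hashtag accounts": {"n": 10_000, "low_quality_rate": 0.40},
    "liberal-hashtag accounts": {"n": 10_000, "low_quality_rate": 0.05},
}

# An ideology-blind moderation rule: the chance of suspension depends only on
# whether an account shares low-quality links, never on its politics.
P_SUSPEND_IF_LOW_QUALITY = 0.20
P_SUSPEND_OTHERWISE = 0.01

def suspension_rate(group: dict) -> float:
    """Simulate each account in the group and return its suspension rate."""
    suspended = 0
    for _ in range(group["n"]):
        shares_low_quality = random.random() < group["low_quality_rate"]
        p = P_SUSPEND_IF_LOW_QUALITY if shares_low_quality else P_SUSPEND_OTHERWISE
        suspended += random.random() < p
    return suspended / group["n"]

rates = {name: suspension_rate(g) for name, g in GROUPS.items()}
for name, rate in rates.items():
    print(f"{name}: {rate:.1%} suspended")

# With these made-up numbers the ratio lands around 4x, even though the rule
# itself never considers ideology.
ratio = rates["conservative-hashtag accounts"] / rates["liberal-hashtag accounts"]
print(f"ratio: {ratio:.1f}x")
```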
It is well-established that social media algorithms, including Facebook's feed, X's trending topics, and recommendations on TikTok and YouTube, consistently elevate extremist narratives. This prioritization, exemplified by how trending topics measure ‘velocity’ over sheer volume, inherently disadvantages complex or thoughtful content. A thriving democracy relies on engaged citizens, not just information consumers, yet platforms treat users as the latter. Their business model, which aims to trigger our most basic reactions, might be lucrative, but it is detrimental to democratic health. Citizens require exposure to diverse perspectives and arguments to form well-rounded opinions, moving beyond merely agreeable or extreme views. However, algorithms often take our provisional ‘likes’ and ‘clicks’ as definitive signals, solidifying nascent opinions too quickly. Adding to this, politically motivated actors are increasingly exploiting these platforms to inundate users with manipulative misinformation, hate speech, and polarizing content.
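As a rough illustration of the velocity-versus-volume point, the sketch below ranks topics by how fast mentions are growing hour over hour rather than by their cumulative count. The scoring formula and sample data are hypothetical, not any platform’s actual trending algorithm; they simply show how a fast-spiking fringe topic can outrank a larger but steadier conversation.

```python
from dataclasses import dataclass

@dataclass
class Topic:
    name: str
    mentions_last_hour: int
    mentions_prior_hour: int
    total_mentions: int  # cumulative count over the past week

def velocity_score(t: Topic) -> float:
    """Hypothetical trending score: hour-over-hour growth in mentions."""
    return t.mentions_last_hour / max(t.mentions_prior_hour, 1)

# Made-up sample data: a steady mainstream topic versus a fast-spiking fringe one.
topics = [
    Topic("budget debate", mentions_last_hour=9_000, mentions_prior_hour=8_800, total_mentions=600_000),
    Topic("inflammatory rumor", mentions_last_hour=4_000, mentions_prior_hour=400, total_mentions=12_000),
]

by_volume = sorted(topics, key=lambda t: t.total_mentions, reverse=True)
by_velocity = sorted(topics, key=velocity_score, reverse=True)

print("ranked by volume:  ", [t.name for t in by_volume])
print("ranked by velocity:", [t.name for t in by_velocity])
# The rumor tops the velocity ranking (10x hour-over-hour growth) despite having
# a fraction of the overall volume, which is the dynamic described above.
```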
For some years now, the regulatory conversation has increasingly identified algorithmic recommendation engines, with their inherent bias towards extremist, viral, and provocative content, as the core problem in content dissemination and a significant threat to democratic functions. Yet platforms have cleverly kept attention on debates about transparency and content moderation, even as their investments there dwindle, thereby sidestepping genuine accountability for the harmful tendencies of their algorithms. This reinforces the need to cut through the clutter of divergent opinions about which way the large social media platforms lean in their recommendation and takedown decisions.
Failures of self-regulation
In another announcement, Zuckerberg revealed a strategic partnership with the Trump administration aimed at countering international regulatory initiatives targeting US technology companies. This alliance is bolstered by threats from the US government to retaliate against jurisdictions, including the European Union and Brazil, that seek to impose taxation or regulatory frameworks on major American tech corporations. This development comes after roughly a decade in which prominent tech platforms relied primarily on self-regulatory mechanisms, such as community guidelines, partnerships with fact-checkers, and content moderation, to govern online discourse. Nevertheless, the demonstrable shortcomings of these self-governing approaches in mitigating significant digital harms, such as online radicalization and the incitement of violence through hate speech, have intensified global demands for external regulation. In response, the technology industry has deployed substantial financial capital in lobbying efforts across multiple jurisdictions.
A recent report by LobbyControl, in collaboration with the Balanced Economy Project and Global Justice Now, documents a considerable increase in the tech sector's lobbying expenditure in the European Union, with the five largest companies accounting for a substantial share of that spending. The confluence of powerful lobbying capabilities and pervasive market dominance in the tech industry poses a direct challenge to democratic principles. As national economies and societal structures become increasingly dependent on the products and services of private technology companies, governments grow more susceptible to the sector's influence, frequently prioritizing corporate objectives over collective welfare. Tech magnates leverage this systemic dependence to expand their commercial enterprises and advance their political agendas, eroding the fundamental democratic tenet of equal voter representation.
Although social media platforms have largely circumvented significant regulation across jurisdictions by asserting that they self-regulate through content policies (an argument that has conventionally shielded media entities from state censorship), the current self-regulatory practices of Big Tech demonstrably fail to meet the requisite criteria for effective and meaningful self-governance. These criteria include, but are not limited to, independence from governmental, commercial, and special interests; establishment through a fully consultative and inclusive process; robust complaint mechanisms with clear procedural rules for assessing ethical breaches; and the authority to impose sanctions. None of these essential factors is adequately addressed by the existing self-regulatory frameworks employed by Big Tech platforms.
Platforms benefit when even critical discussions focus on their declining investment in transparency and content moderation, because that framing deflects conversations about concrete changes to their recommendation systems. We must reframe the conversation to mandate transparency and accountability, with a specific focus on those internal recommendation engines, which are currently designed to elevate and promote highly damaging content, including hate speech, misinformation, and material that incites racial violence, in the pursuit of maximizing user engagement and, ultimately, profit.