What Does Research Tell Us About Technology Platform “Censorship”?
Lisa Macpherson / May 20, 2025
Lisa Macpherson is policy director at Public Knowledge.

Sign on a doorway at the Federal Trade Commission in Washington, D.C. (Shutterstock)
Like many other stakeholders, Public Knowledge is preparing a response to a request for public comment from the Federal Trade Commission on the topic of “technology platform censorship.” The FTC’s request encourages respondents to reply to a series of questions by recounting ways platforms like Facebook, YouTube, and X (previously Twitter) may have disproportionately “denied or degraded” users’ access to services based on the content of the users’ speech or affiliations.
The request appears to be part of an effort by the FTC, Federal Communications Commission, and Department of Justice to break up a “censorship cartel” that Trump administration officials claim systematically censors Americans’ political speech. Based on the submissions so far, the FTC can expect to receive hundreds, if not thousands, of anecdotal – and often anonymous – comments describing incidents that staff will probably be unable to verify.
To ensure our own comments to the FTC are rooted in evidence, we reviewed eight years of research on political content moderation. Our literature review included research studies and white papers from academics, journalists, whistleblowers, social scientists, and platforms going back to 2018. Our goal for this post is to provide a summary of this research and the conclusions we draw from it.
Challenges of researching platform content moderation
Unfortunately, research investigating questions about algorithmic curation and bias – and content moderation in general – has been constrained by these challenges:
- Limited collaboration between platforms and researchers;
- The difficulty of defining and quantifying bias in research design;
- Frequent changes to the platforms’ feed-ranking algorithms;
- Controlling for platform features such as content personalization; and,
- In the absence of platform data, the need to work with user histories or web-scraped data that may reflect the user’s own preferences (such as channels or subscriptions).
If anything, the technology platforms have compounded these challenges over time by restricting access to their data: Meta wound down its CrowdTangle tool (researchers consistently say the company’s new “content library” does not provide the same insight), and X has restricted access and raised application programming interface (API) fees for researchers. These barriers make it easier for conspiracy theories about content moderation to emerge and spread. Despite these challenges, clear themes emerged from the body of research.
Themes from research regarding political content moderation
Our secondary research review showed these dominant themes (see the subsequent sections of this post for links to the relevant studies):
- There is little empirical evidence that platforms disproportionately deny or degrade conservative users’ access to services or that conservative voices or posts are disproportionately moderated due to their speech or affiliations.
- If anything, platform algorithms advantage conservative, right-wing, or populist content because such content tends to be highly engaging, and because there are structural advantages for right-wing or populist political influencers on technology platforms.
- Some of the characteristics that make this content more engaging also make it more likely to violate platform content moderation policies. So, when conservative or populist content is disproportionately moderated, it is because it is more likely to violate the platforms’ community standards and terms of service. That is, asymmetric moderation results from asymmetric user behavior. This dynamic crosses international borders.
- To the extent that platforms do disproportionately deny or degrade service based on the content of speech (even if it does not violate platform policies), it overwhelmingly impacts marginalized communities, including people of color, LGBTQ+ people, religious minorities, and women. This may be due to how content policies are crafted, bias in moderation algorithms and training sets, and/or automated content moderation systems that do not understand cultural context or language cues. For technology platform users in general, these automated systems are incapable of understanding political motivation or affiliation.
Note: In order to focus on dominant themes, we didn’t include every study we reviewed in this post. We encourage readers to use the links provided to understand the methodology in each study, and the citations within each study to access more information and resources.
There is little empirical evidence that conservative voices are over-moderated
Researchers at New York University Stern School of Business’s Center for Business and Human Rights produced what may be the most comprehensive review of available research (as of February 2021) addressing the claim that platforms are biased in their moderation of conservatives. These researchers also conducted various analyses and rankings using Facebook’s CrowdTangle tool in the 11-month run-up to the 2020 US election. They found that right-leaning Facebook pages contained the most-engaged-with posts; right-wing media pages trounced mainstream media pages in engagement; and Donald Trump beat all other US political figures on the same measure. Independent studies by NewsWhip and Media Matters for America, cited in the same review, also showed that right-leaning Facebook pages and media publications outperformed left-leaning pages or performed similarly. The researchers also recounted a study showing that on YouTube, “partisan right” channels like Fox News and The Daily Wire performed similarly or better than “partisan left” channels, such as MSNBC and Vox, on key measures.
Research also shows that outcomes users attribute to “bias” may actually be the result of neutral product design. One 2018 research study of Google Search noted that it is “difficult to tease apart confounds inherent to the scale and complexity of the web, the constantly evolving metrics that search engines optimize for, and the efforts of search engines to prevent gaming by third parties.” This study found that the direction and magnitude of political “lean” in test subjects’ search engine results pages (SERPs) depended largely on the input query, not the self-reported ideology of the user. The lean also varied by component type on the SERP (e.g., “answer boxes”) and by the platform’s own ranking decisions. If anything, “Google’s rankings shifted the average lean of SERPs to the right.” Another 2018 study of Google Search showed that conservative users of the platform did not fully realize how dependent their results were on the phrases they used in their search queries, nor did they have a consistent or accurate understanding of the mechanisms by which the company returns search results. (In the authors’ view, there is no reason to believe this would differ for liberal users.) A study published in The Economist in 2019 showed that Google’s search algorithm mostly rewarded reputable reporting: the most represented sources were center-left and center-right, and results indicating “bias” were actually the result of the user’s search term.
If anything, platforms’ engagement-based design advantages right-wing content
The single biggest driver of the societal impact of platforms’ content moderation is rooted in human nature: People are wired to pay more attention to information that generates a strong reaction. Research studies have shown that engagement on social media is associated with, for example, increased negativity and anger; outrage and confrontation; or incivility and conflict. Platforms must maximize engagement (e.g., posting, dwelling, liking, commenting, sharing) to optimize profit because of their advertising-based business model. As a result, even modest tweaks to algorithms to increase engagement (such as one Facebook made in 2018 to emphasize “meaningful social interactions”) can end up amplifying provocative and negative content. And as we describe in the next section, research consistently shows that right-wing sources use this type of content more often, and more effectively, on digital platforms.
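To make this mechanism concrete, the toy sketch below (purely hypothetical, not drawn from any platform’s actual code or data) shows how a ranking score that optimizes only for predicted engagement can flip in favor of provocative content when the weights on comments and shares are increased, without ever evaluating the substance of the posts.

```python
# Hypothetical illustration only: a toy engagement-weighted ranking score.
# Real feed-ranking systems are far more complex and are not public.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_likes: float
    predicted_comments: float
    predicted_shares: float

def rank_score(post: Post, w_like: float = 1.0, w_comment: float = 1.0,
               w_share: float = 1.0) -> float:
    """Score a post purely by predicted engagement, ignoring its substance."""
    return (w_like * post.predicted_likes
            + w_comment * post.predicted_comments
            + w_share * post.predicted_shares)

posts = [
    Post("Measured policy explainer", predicted_likes=200,
         predicted_comments=10, predicted_shares=5),
    Post("Outrage-bait partisan post", predicted_likes=100,
         predicted_comments=40, predicted_shares=30),
]

# With equal weights, the explainer ranks first (score 215 vs. 170).
print(max(posts, key=rank_score).text)

# Up-weighting comments and shares, as engagement-focused ranking changes tend
# to do, flips the ordering (275 vs. 450) even though no content was evaluated.
print(max(posts, key=lambda p: rank_score(p, w_comment=5.0, w_share=5.0)).text)
```

The specific numbers are invented; the point is that nothing in such a score measures accuracy, civility, or political orientation – only the reactions a post is predicted to provoke.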
A study published in The Economist in 2020 set out to determine what content Twitter’s algorithm promoted. The researchers found that, compared to the platform’s previous chronological feed, Twitter’s new “relevant” recommendation engine favored inflammatory tweets that were more emotive and more likely to have come from untrustworthy or hyper-partisan websites. Another study The Economist published later that year focused on Facebook. It showed that the most prominent news sources on Facebook were significantly more slanted to the right than those found elsewhere on the web, and that right-wing content from Fox News and Breitbart drew more Facebook interactions than left-leaning news sites.
The aforementioned report from NYU’s Stern Center for Business and Human Rights also concluded that social platforms’ algorithms often amplify right-wing voices, granting them greater reach than left-leaning or nonpartisan content creators. The authors analyzed engagement data and case studies of content related to high-profile incidents, finding no sign of anti-conservative bias in enforcement, even around contentious events like the January 6 riot at the US Capitol. They noted that right-leaning content frequently dominates user engagement metrics, largely due to Facebook’s algorithmic promotion systems, which reward content that provokes strong reactions. In other words, because Facebook’s feed algorithm optimizes for engagement, and outrage-driven or partisan posts often generate more clicks and shares, conservative pages that specialize in such content tend to benefit disproportionately.
Media Matters has tracked engagement on social media through several studies dating back to 2018. These studies undermine the idea that Facebook, in particular, is biased in its content moderation and reinforce the idea that platform algorithms favor engagement above all. One nine-month study completed in 2020 found that partisan content (both left and right) did better than nonpartisan content on Facebook, but “right-leaning pages consistently earned more average weekly interactions than either left-leaning or ideologically nonaligned pages.” The findings were similar to those in studies Media Matters conducted in 2018 and 2019. Their research in 2021 showed these effects were actually compounded after Facebook tweaked its algorithm to reduce the prominence of news, civic, and health information, and as video became more popular on the platform.
A study from Politico and the Institute for Strategic Dialogue in 2020 showed that “right-wing social media influencers, conservative media outlets, and other GOP supporters dominate online discussions” around the Black Lives Matter movement and voter fraud, including in Facebook posts, Instagram feeds, Twitter messages, and conversations on two popular message boards.
A 2022 study from the Brookings Institution focused on YouTube, one of the first platforms to offer “recommendations” to users. It found that regardless of the ideology of the study participant, the algorithm pushed all users in a moderately conservative direction.
Most publicly available data for Facebook shows that conservative news regularly ranks among the most popular content on the site, and Facebook has acknowledged that right-wing content excels at the engagement measures that drive algorithmic amplification. In the election year of 2020, study after study found that the Facebook posts with the most engagement in the United States – measured by likes, comments, shares, and reactions – were organic posts from conservative influencers outside the mainstream media. When asked about this dynamic, a Facebook executive noted, “Right-wing populism is always more engaging” and said that the content speaks to “an incredibly strong, primitive emotion” by touching on such topics as “nation, protection, the other, anger, fear.”
Twitter has also acknowledged that its algorithms favored right-wing content. In 2021, Twitter published its own study that “reveal[ed] a remarkably consistent trend: In six out of seven countries studied, the mainstream political right enjoys higher algorithmic amplification than the mainstream political left.” (This was before Elon Musk purchased the platform and rebranded it as X.) Germany was the notable exception. Twitter acknowledged at the time that the results were problematic, but it could not determine whether certain tweets received preferential treatment because of how the algorithm was constructed or because of how users interacted with it.
Another cross-national comparative study based on Twitter in 26 countries, published in 2025, also found that this pattern extends internationally and that certain political ideologies are linked to a higher likelihood of spreading misinformation. Specifically, politicians associated with radical right-wing populist parties – characterized by exclusionary ideologies and hostility toward democratic institutions – spread more online misinformation than their mainstream counterparts. The authors concluded that misinformation should be “examined as an aspect of party politics, serving as a strategy designed to mobilize voters against mainstream parties and democratic institutions.”
More recently, a study of the role of social media in the February 2025 election in Germany showed that X, TikTok, and Instagram (a Meta platform) all disproportionately showed right-wing content to nonpartisan users. Content shown across every platform tested displayed a right-leaning bias, including both content from the accounts the researchers set out to follow and content selected “For You” by the platforms’ recommender systems.
Besides the lift from the algorithms, conservative elites may also gain greater engagement on technology platforms due to structural advantages in how they use these platforms. One sociologist and professor noted in her 2019 book, “The Revolution That Wasn’t,” that “there is a lopsided digital activism gap that favors conservatives.” For example, online participation is greater among middle- and upper-class movements than among their working-class counterparts, and conservative activists tend to come from higher income levels than progressives. Conservative groups, therefore, have more time and resources to invest in content and engagement, and their simple, powerful messaging focused on “freedom” and threats to America fits best with social media’s short attention spans and character limits.
A nationally representative survey of Americans conducted by Pew Research in 2024 shows another advantage that now accrues to conservative users: the growing popularity, distribution, and political orientation of news influencers. About one in five Americans now say they regularly get news from news influencers on social media. News influencers are defined as individuals who regularly post about current events and civic issues on social media and have at least 100,000 followers on any of the major social media platforms (Facebook, Instagram, TikTok, YouTube, and particularly X, which is the most common site for influencers to share content). According to Pew’s research, more news influencers explicitly present a politically right-leaning orientation than a left-leaning one in their account bios, posts, websites, or media coverage. Influencers on Facebook are particularly likely to prominently express right-leaning views.
Right-wing content is more likely to violate platforms’ community standards
One of the most consistent themes in research on the moderation of political content is that what users may perceive as “biased” asymmetric moderation is actually the result of users’ own asymmetric behavior. Specifically, the research shows that conservative, right-wing, and populist platform users (the term varies by research project) are more likely to violate the platforms’ terms of service and/or community standards. Many of the examples date from 2020 and 2021, when platforms evolved their content moderation policies in response to the COVID-19 pandemic and the 2020 US election, both of which became highly politicized. In the interest of public health, safety, and democratic participation, most platforms selected authoritative sources of information – such as the World Health Organization, the Centers for Disease Control and Prevention, and local election offices – to calibrate their content moderation, up- and down-rank user content, fact-check and label content, and direct people to the latest available information. (For more details by platform in regard to COVID-19, see our blog post.) Users sharing information inconsistent with that of the authoritative sources selected by the platforms found themselves in violation of platform policies. Conspiracy theories, content calling for violence against particular groups, and other content incompatible with platform standards also drew disproportionate moderation.
For example, a study of 6,500 state legislators on Facebook and Twitter during the tumultuous time in 2020 and early 2021 (e.g., the pandemic, the 2020 election, and the January 6 riot at the US Capitol) showed that state legislators could gain increased attention on both platforms by sharing unverified claims or using uncivil language such as insults or extreme statements. The results affirm that platform algorithms generally favor content likely to get a strong reaction. However, Republican legislators were significantly more likely to post “low-credibility content” on Facebook and Twitter than Democrats, and Republican legislators who posted low-credibility information were more likely to receive greater online attention than Democrats.
A new research report, now in preprint, examines X’s Community Notes program to ask whether there are partisan differences in the sharing of misleading information. The study is particularly relevant now that both Meta and TikTok have moved to community notes (user-sourced assessments of content) to add context to posts instead of third-party fact-checking partnerships. The researchers’ abstract highlights that posts by Republicans are far more likely to be flagged as misleading than posts by Democrats, and not because Republicans are over-represented among X users. Their findings “provide strong evidence of a partisan asymmetry in misinformation sharing which cannot be attributed to political bias on the part of raters, and indicate that Republicans will be sanctioned more than Democrats even if platforms transition from professional fact-checking to Community Notes.”
One 2020 study used YouTube as a lens to investigate whether the political leaning of a video plays a role in moderation decisions for its associated comments. The researchers found that user comments were more likely to be moderated under right-leaning videos, but that this difference was “well-justified” because those videos and comments were also more likely to have characteristics that violate the platform’s rules: extreme content that calls for violence or spreads conspiracy theories, misinformation identified by fact-checks, or poor social engagement (such as a high “dislike” rate). Once these behavioral variables were balanced, there was no significant difference in moderation likelihood across the political spectrum.
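The sketch below, using invented data and field names, illustrates the general kind of matched comparison this study describes: raw moderation rates can differ by political leaning simply because rule-violating behavior differs, yet the rates converge once comments are compared only within strata that share the same behavioral characteristics.

```python
# Invented data, for illustration only: compare moderation rates within strata
# that share the same behavioral characteristics, so that political leaning is
# not confounded with rule-violating conduct or poor engagement.
from collections import defaultdict

comments = [
    # (leaning, violates_rules, low_engagement, was_moderated)
    ("right", True,  True,  True),
    ("right", True,  False, True),
    ("right", False, False, False),
    ("left",  True,  True,  True),
    ("left",  False, False, False),
    ("left",  False, True,  False),
]

# Raw rates differ (2/3 of right-leaning vs. 1/3 of left-leaning comments
# moderated) because the right-leaning comments in this toy data violate the
# rules more often.
for side in ("right", "left"):
    rows = [moderated for leaning, *_, moderated in comments if leaning == side]
    print(side, sum(rows) / len(rows))

# Stratify by behavior, then compare moderation rates within each stratum.
strata = defaultdict(lambda: defaultdict(list))
for leaning, violates, low_eng, moderated in comments:
    strata[(violates, low_eng)][leaning].append(moderated)

for behavior, groups in sorted(strata.items()):
    rates = {side: sum(vals) / len(vals) for side, vals in groups.items()}
    print(behavior, rates)  # where both leanings appear in a stratum, their rates match
```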
A prominent study published in Nature showed that users estimated to be pro-Trump/conservative were, in fact, more likely to be suspended from Facebook than those estimated to be pro-Biden/liberal. However, this was because conservative users shared far more links to low-quality news sites – even when “news quality” was determined by groups composed only of Republicans – and they had higher estimated likelihoods of being bots. As noted above, Facebook’s recommendation algorithm optimizes for user engagement, and this study was one of several to find that misinformation was more engaging to right-wing audiences. Facebook’s algorithm also appeared to rank misinformation more highly for right-wing users. (In other words, Facebook’s algorithm is doing what it is optimized to do: serve up more of the content that proves engaging to a particular audience.) The authors concluded that political asymmetry in moderation resulted from asymmetries in violative behavior, not from politically biased content policies or political bias on the part of social media companies. This study was one of four that examined, with Facebook’s cooperation, the impact of Facebook’s recommendation algorithm during the 2020 US presidential election.
Another group of data scientists and academic researchers given access to Facebook data in 2019 to study the impact of social media on elections and democracy noted the same thing. They found that most of the high-profile examples of moderation of conservative content resulted from “more false and misleading content on the right” at a time when platforms were more aggressively moderating content related to elections. One of these researchers noted that, if anything, “Facebook’s algorithms could also be helping more people see right-wing content that’s meant to evoke passionate reactions.”
Researchers from the Observatory on Social Media at Indiana University, in their own comments to the FTC, described two studies they conducted that explored this question, one from 2017 and one from 2019. The studies “did not support claims of platform censorship.” The researchers noted, “The much simpler interpretation of the data is that the online behavior of partisans is not symmetric across the political spectrum.”
Platforms are most likely to degrade access for marginalized communities
There is also a substantial body of research about the discriminatory impact of content moderation on marginalized communities, specifically people of color, LGBTQ+ people, religious minorities, and women. It was informed by a history of research designed to understand the impact of automated decision-making (in real estate, employment, financial services, and the like) on individuals who share characteristics protected by anti-discrimination legislation, including race, gender, and religion. Those systems, designed to profile individuals and make decisions about the allocation of economic opportunities, consistently showed the strong potential for bias in computational systems. In particular, they were shown to reproduce the historical, inequitable outcomes embedded in their training data and project them forward as predictions of future outcomes. (Public Knowledge wrote about how harms from the algorithmic distribution of content are too often concentrated on historically marginalized communities in this blog post. We have also researched and written extensively about moderating race on platforms and Section 230 and civil rights.)
In her 2018 book, Algorithms of Oppression, UCLA professor Safiya Umoja Noble used textual and media searches to show “how negative biases against women of color are embedded in search engine results and algorithms.” Her premise was that the profit motives of platforms, combined with their monopoly status, lead to a biased set of search algorithms. With respect to content moderation, research has focused on how its various elements – the drafting of policies, the methods of enforcement, and the vehicles for redress such as user appeals – often mean that the voices of marginalized communities are subject to disproportionate moderation while harms targeting those communities go unaddressed and the perpetrators remain protected.
For example, a field study of actual posts from a popular neighborhood-based social media platform found that when users talk about their experiences as targets of racism, their posts are disproportionately flagged for removal as “toxic” by five widely used moderation algorithms from major online platforms, including the most recent large language models. In the same study, human users also disproportionately flagged these disclosures for removal. The researchers further demonstrated a chilling effect: simply witnessing these valid posts discussing experiences with racism getting removed made Black Americans feel less welcome online and diminished their sense of community belonging.
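A deliberately naive toy example (not any real platform’s or vendor’s moderation model) can illustrate one way this happens: a classifier that scores “toxicity” from surface features alone cannot distinguish a first-person disclosure of racism from an actual attack, because it does not model who is targeting whom.

```python
# Deliberately naive toy classifier, not any real moderation system: it scores
# "toxicity" from surface keywords alone, so a first-person account of being
# targeted by racism scores as high as (or higher than) an actual attack.
TOXIC_MARKERS = {"racist", "slur", "attack", "hate"}

def naive_toxicity_score(text: str) -> float:
    words = {w.strip(".,!?").lower() for w in text.split()}
    return len(words & TOXIC_MARKERS) / len(TOXIC_MARKERS)

disclosure = "My neighbor used a racist slur against me at the park."
attack = "You deserve that slur, stop whining."

print(naive_toxicity_score(disclosure))  # 0.5: the victim's disclosure is flagged
print(naive_toxicity_score(attack))      # 0.25: the attack scores lower
```

Production systems are far more sophisticated than this, but the research cited above suggests they can reproduce the same failure mode when training data and policy definitions do not account for context.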
Another study was specifically designed to understand which types of social media users have content and accounts removed more frequently than others, what types of content and accounts are removed, and how removals may differ between groups. The researchers found that three groups of social media users experienced content and account removals more often than others: political conservatives, transgender people, and Black people. However, the types of content removed from each group varied substantially. Consistent with the studies cited above, conservative participants’ removals often involved harmful content removed according to site guidelines (e.g., posts deemed offensive, COVID-19 claims inconsistent with those of authoritative sources, or hate speech), while transgender and Black participants’ removals often involved content related to expressing their marginalized identities. That content was removed even though it followed site policies or fell into content moderation gray areas.
There are multiple contributors to this double standard. A report from the Brennan Center for Justice in 2021 highlights four key drivers: 1) how content policies are crafted, 2) bias in automated moderation algorithms, 3) content filters that lack cultural context, and 4) an inability to detect language nuance. Algorithmic systems may attribute the use of words or phrases describing authentic experiences related to gender identity, racism, domestic violence, or mental health to violative behavior on the part of users. Human moderators may also bring their own bias: whether because they lack training, time, or cultural understanding, they make false positive calls on content related to racism and equity more often for some groups. In the study – based on Facebook, Instagram, YouTube, and Twitter – the researchers found that “content moderation at times results in mass takedowns of speech from marginalized groups [communities of color, women, LGBTQ+ communities, and religious minorities], while more dominant individuals and groups benefit from more nuanced approaches like warning labels or temporary demonetization.” The implication is that marginalized voices face extra hurdles to free expression online.
More recently, a group of over 200 researchers signed on to a letter that “affirm[ed] the scientific consensus that artificial intelligence can exacerbate bias and discrimination in society,” noting that “thousands of scientific studies” have shown that AI systems may violate civil and human rights even if their users and creators are well-intentioned.
Summary of research conclusions
In conclusion, empirical research over the past decade reveals that social media content moderation has not always been neutral in its social or political impact. But it is marginalized voices that often bear a disproportionate burden – whether through higher rates of wrongful content removal, diminished reach in algorithmic feeds, demonetization, or threats to free expression that come from harassing, hateful, and false information posted by others online. Conversely, disproportionate moderation of conservative, right-wing, or populist content generally results from asymmetric compliance with platform community standards and terms of service.