The Politics of Social Media Research: We Shouldn’t Let Meta Spin the Studies It Sponsors

Justin Hendrix, Paul M. Barrett / Jul 2, 2024

Meta's corporate headquarters in Menlo Park, California. Shutterstock

For years, and especially after the attack on the US Capitol on January 6, 2021, the effects of social media on political beliefs and phenomena such as political polarization, extremism, and incitement to violence have been the subject of heated debate. Much concern has focused on the platforms operated by Meta, including Facebook, the largest social media platform in the world and in the United States, where the company is headquartered.

But the release last July of the first in a set of studies conducted in a collaboration between Meta and academic researchers who received privileged access to data during the 2020 election cycle spawned headlines suggesting that concerns over Facebook were perhaps overblown. “New research suggests Facebook algorithm doesn't drive polarization,” read a headline in Axios in July last year. “New studies: Facebook doesn’t make people more partisan,” wrote Politico. “So maybe Facebook didn’t ruin politics,” read a headline in The Atlantic.

Such headlines likely pleased senior executives at Meta, including Nick Clegg, its President of Global Affairs, who hailed the research as providing more evidence that the prevailing narrative about Meta’s negative impact on politics is wrong. “The experimental studies add to a growing body of research showing there is little evidence that key features of Meta’s platforms alone cause harmful ‘affective’ polarization, or have meaningful effects on key political attitudes, beliefs or behaviors,” Clegg wrote in a corporate blog post.

The company’s aggressive spin on the results of the research it sponsored prompted pushback from the academic scholars who produced the studies, even as the project’s independent ombudsman wrote in Science about the limitations of the collaboration, dubbing the relationship between the academics and the company “independence by permission.”

A ‘straw man’ argument

We wrote previously about the disconnect between the nuanced findings in the research and Meta’s attempt to use those findings to exonerate itself. The company’s public relations campaign has consistently relied on a “straw man” argument that suggests Meta’s critics claim it is a primary source of political discord. It is not, and that is not what serious-minded critics claim. There are larger forces and personalities at play in US politics. But that is beside the point. We and other analysts have investigated and written about how platforms like Facebook and Instagram interact with these larger forces, create troubling incentives for these personalities, and exacerbate the information distortions that plague our civic life.

Since last summer, only one more paper from the promised series of 16 has been published. Like the others, it offers nuanced findings but does little to dispel the broader set of concerns about Meta’s role in politics and its penchant for issuing dubious PR statements about the results. On the verge of another potentially disruptive US election, it is clear Meta would like to disarm its critics (and perhaps reassure its employees). We argue this would be unwise, for three reasons.

1. None of the new social science is substantial enough to counter evidence from inside Meta itself.

We collaborated on a report published in September 2021 by the NYU Stern Center for Business and Human Rights which, in the wake of January 6, pointed out Clegg’s slay-the-straw-man rhetorical strategy. No serious participant in the public debate about social media’s role in politics, we argued, contends that tech platforms are, in Clegg’s words, “the unambiguous driver” of heightened political polarization in the United States.

Our report went on to observe that a range of social scientists have concluded that use of social media does contribute to partisan animosity in the US. For example, a group of 15 researchers offered a nuanced assessment in an article published in October 2020 in the journal Science. “In recent years,” they wrote, “social media companies like Facebook and Twitter have played an influential role in political discourse, intensifying political sectarianism.” Reinforcing the point, a separate quintet of researchers summed up their review of the empirical evidence in an August 2021 article in Trends in Cognitive Sciences: “Although social media is unlikely to be the main driver of polarization,” they concluded, “we posit that it is often a key facilitator.”

While the committed researchers involved in Meta’s 2020 US elections research project have produced additional incremental findings about Facebook’s role in polarization and the formation of political beliefs, on balance, the results are not enough to outweigh the evidence that gave rise to concern in the first place. In the aftermath of January 6, the most distressing evidence about Facebook’s role emerged in the form of internal documents leaked to journalists and Congressional investigators, including the trove provided by whistleblower Frances Haugen. For instance:

  • Documents provided to the Securities and Exchange Commission by Haugen “provide ample evidence that the company’s internal research over several years had identified ways to diminish the spread of political polarization, conspiracy theories and incitements to violence but that in many instances, executives had declined to implement those steps,” according to the Washington Post.
  • An internal team at Facebook “concluded that the company had failed to prevent the ‘Stop the Steal’ movement from using its platform to subvert the election, encourage violence, and help incite the Jan. 6 attempted coup on the US Capitol,” according to BuzzFeed News, which first published the company report.
  • Internal communications obtained by the House Select Committee charged with investigating the January 6 attack suggested technical problems with the company’s moderation of Facebook groups prevented it from effectively applying its policies on violence and incitement for months: “Hundreds of groups, profiles, pages, and accounts that should have been disabled were not, and once the issue was corrected more than 10,000 groups received strikes, with more than five hundred disabled as a result.”

Other documents revealed in the Haugen leaks showed that the company knew that its algorithms stoked outrage, and that its attempts to resolve the problem backfired; that its own researchers documented complaints from European politicians who believed that Facebook created a “structural incentive to engage in attack politics” and saw “a clear link between this and the outsize influence of radical parties on the platform”; and that again and again, the company’s leadership failed to address a range of other harms its researchers identified.

The significant body of internal evidence now in the public domain about Facebook’s role in political polarization, the spread of misinformation, and the attack on the US Capitol constitutes a far more compelling record than the limited and nuanced scientific findings from the recent studies. And beyond what they tell us about specific events, the documents expose a pattern of apparent negligence and mismanagement at the company, suggesting that failures of its leadership may well have done more to shape the company’s role in politics than its algorithms.

2. The 2020 US election project tells us more about the limitations of research conducted in collaboration with Meta than it does about the effects of Meta’s platforms on politics.

The project’s first tranche of papers provides a number of worthwhile insights, and 11 more studies remain to be released. But on the whole, the work does more to show the limitations of the broader undertaking than to exonerate Meta. Looking closely at the results of the latest study shows why.

The fifth study in the series, published in May in Proceedings of the National Academy of Sciences (PNAS), assessed the effects on a randomized subset of more than 35,000 Facebook and Instagram users who were paid to deactivate their accounts for six weeks before the 2020 election. Abstaining from the two platforms appeared to decrease belief in misinformation circulating online while also diminishing knowledge of general news. Staying off Facebook also “may have reduced reported net votes” for Donald Trump, but not by a significant amount, the researchers concluded. Finally, absence from Facebook and Instagram had “close to zero” effect on polarization, views on the legitimacy of the election, and voter turnout. (Researchers compared the effects on the six-week deactivation group to those on a control group paid to deactivate for only one week before the 2020 election.)

Meta wasted no time in claiming vindication. “These findings are consistent with previous publications in this study in showing little impact on key political attitudes, beliefs or behaviors,” the company said in a statement. The company covered the costs of the research but didn’t compensate the scholars or their university employers, according to the PNAS publication.

But like the previous studies in the series, the PNAS paper merits qualification based on the modest duration of the platform deactivation period: a mere five-week differential between the “treatment group” and the control group. The study’s authors acknowledged this and other limitations. Specifically, by the time of their relatively brief paid break from Meta’s platforms, the social media users being scrutinized may well have been locked into the beliefs they took with them to the voting booth.

The new results need to be factored into the continuing debate about the effects of social media on democracy. But when interpreting them, it makes sense to listen to what the scholars say about their work, not the spin Meta or any other self-interested corporation tries to sell to the public. In an interview with NBC News, one of those scholars, Matthew Gentzkow, an economist at Stanford, said: “This study cannot say one way or the other—in a decade-long sense—whether social media is causing polarization or not.” In a press release that accompanied the release of the findings, Gentzkow explicitly denied that the results somehow exonerated Meta. “We are not ruling out the possibility that Facebook and Instagram contribute to polarization in other ways over time,” he said.

When we spoke to him, Gentzkow allowed that the design of this study, like the others in the series, precludes resolution of the larger questions about the role of Facebook in US politics. “We can’t roll back the clock and look at ‘Facebook not existing in the world,’” he said. Temporarily removing direct exposure to Facebook and finding little effect from that isn’t the same as establishing that Facebook played no role in shaping beliefs or had no effect on polarization. It isn’t the same as proving that 20 years of Facebook shaping incentives for the news media, politicians, political parties, and everyone else has had little effect on politics.

3. The reality is that in this critical election year, Meta is closing doors to external researchers.

If Meta were so confident that giving researchers access to its data would continue to produce results that vindicate it on questions about its role in politics, you might expect it would be preparing to launch a similar research collaboration around the 2024 US election cycle, or that it would be taking steps to make data more readily available to researchers. But barring some announcement to the contrary, it would seem the opposite is true. Meta is sharing less information with researchers in 2024 than it did in 2020, and there are signs it may be producing less internal research, as well.

Academic researchers who study social media platforms have spent months raising alarms about the impending shutdown of Meta’s CrowdTangle, a software tool that enabled scrutiny of Facebook and Instagram. It will cease to function on August 14. A coalition of academic and civil society researchers recently issued a letter warning that “Meta’s decision will effectively prohibit the outside world, including election integrity experts, from seeing what’s happening on Facebook and Instagram — during the biggest election year on record.” They say Meta’s alternative to CrowdTangle, its Content Library, lacks sufficient functionality and is only available to a limited number of researchers. “This means almost all outside efforts to identify and prevent political disinformation, incitements to violence, and online harassment of women and minorities will be silenced,” the letter says. “It’s a direct threat to our ability to safeguard the integrity of elections.”

It also appears that Meta does not intend to invest in an undertaking of similar scale to its 2020 US election research project in 2024. Last fall, University of California, Berkeley researcher Jonathan Stray pointed out in Wired that he was “aware of at least one large research project Meta recently canceled, and the company said it ‘does not have plans to allow’ another wave of election research in 2024.” In Europe, regulators are investigating Meta’s decision to discontinue CrowdTangle and what it means for its obligations under the European Union’s Digital Services Act. No such oversight mechanism exists in the US.

Taken together with other indications that Meta, like YouTube and X, has adopted a less active posture on election integrity issues in 2024, its reluctance to enable research on the types of questions researchers pursued in 2020 should be of concern. While what we know from the research during the 2020 cycle may be insufficient to answer the big questions, we are likely to learn even less about the 2024 cycle.


Meta's stated mission is “to give people the power to build community and bring the world closer together.” It thus has a clear incentive to counter claims that it may in fact do the opposite, and to spin the results of research on related subjects. But for journalists and others charged with scrutinizing the company, its business practices, and its effects on democracy, it’s important not to be misled by its characterization of the results of social science conducted four years ago. Rather, it’s more important to consider the company’s behavior in the present and to demand that policymakers introduce the reforms necessary to police it.

Executives at Meta may believe it’s time to turn the page when it comes to concerns over social media and democracy. Policymakers, journalists, civil society experts, and researchers need to resist the argument that crucial questions about social media and democracy are somehow settled by a narrow set of results from research conducted four years ago. Instead, we all need to push for policies that give independent researchers the access to data necessary to bring greater clarity to the complicated relationship between platforms and politics.


Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & ...
Paul M. Barrett
Since 2017, Paul M. Barrett has served as the deputy director and senior research scholar of the NYU Stern Center for Business and Human Rights. Barrett is also an adjunct professor NYU School of Law, where he co-teaches a seminar called, "Law, Economics, and Journalism." Before coming to NYU, Barre...