Perspective

The Case for Supporting Social Media Data Access

Mark Scott / Oct 24, 2025

Mark Scott is a contributing editor at Tech Policy Press.

In the hierarchy of digital policymaking priorities, it’s artificial intelligence, not platform governance, that is now the cause célèbre.

From the United States’ public aim to dominate the era of AI to the rise of so-called AI slop created by apps such as OpenAI’s Sora, the emerging technology has seemingly become the sole priority across governments, tech companies, philanthropic organizations and civil society groups.

This fixation on AI is a mistake.

It’s a mistake because it relegates equally pressing areas of digital rulemaking — especially those related to social media’s impact on the wider world — to the bottom of the pecking order at a time when these global platforms have a greater say over people’s online, and increasingly offline, habits than ever before.

Current regulatory efforts, primarily in Europe, to rein in potential abuses linked to platforms controlled by Meta, Alphabet and TikTok have so far been more bark than bite. Social media giants remain black boxes to outsiders seeking to shine a light on how the companies’ content algorithms determine what people see in their daily feeds. On October 24, the European Commission announced a preliminary finding under the EU's Digital Services Act that Meta and TikTok had failed to comply with their obligations to make it easier for researchers to access public data on their platforms.

These companies’ ability to decide how their users consume content on everything from the Israel-Hamas conflict to elections from Germany to Argentina is now also interwoven with Washington’s attempts to roll back international online safety legislation in the presumed defense of US citizens’ First Amendment rights.

Confronted with this cavalcade of ongoing social media-enabled problems, the collective digital policymaking shift to focus almost exclusively on artificial intelligence is the epitome of the distracted boyfriend meme.

While governments, industry and civil society compete to outdo one another on AI policymaking, the current ills associated with social media are being left behind — a waning afterthought in the global AI hype that has transfixed the public, set off a gold rush among industry rivals and consumed governments in search of economic growth.

But where to focus?

In a report published via Columbia World Projects at Columbia University and the Hertie School’s Centre for Digital Governance on October 23, my co-authors and I lay out practical first steps in what can often seem like a labyrinthine web of problems associated with social media.

Our starting point is simple: the world currently has limited understanding about what happens within these global platforms despite companies’ stated commitments, through their terms of service, to uphold basic standards around accountability and transparency.

It’s impossible to diagnose the problem without first identifying the symptoms. And in the world of platform governance, that requires improved access to both publicly available and private social media data — in the form of engagement statistics and details on how so-called content recommender systems function.

Thankfully, the European Union has passed, and the United Kingdom soon will have, the world’s first regulatory regimes mandating that social media giants provide such information to outsiders, as long as they meet certain requirements, such as affiliation with an academic institution or a civil society organization.

Elsewhere, particularly in the US, researchers are often reliant on voluntary commitments from companies growing increasingly adversarial in their interactions with outsiders whose work may draw unwanted attention to problematic areas within these global platforms.

Our report outlines the current gaps in how social media data access works. It builds on a year of workshops during which more than 120 experts from regulatory agencies, academia, civil society groups and data infrastructure providers identified the existing data access limitations and outlined recommendations for public-private funding to address those failings.

All told, it represents a comprehensive review of current global researcher data access efforts, based on inputs from those actively engaged in the policy area worldwide.

At a time when the US government has pulled back significantly from funding digital policymaking and many philanthropies are shifting gears from social media to artificial intelligence, it can feel like a hard sell to urge both public and private funders to open up their wallets to support a digital policymaking area fraught with political uncertainty.

But our recommendations are framed as practical attempts to fill current shortfalls that, with just a little support, could have an outsized impact on improving the transparency and accountability pledges that all of the world’s largest social media companies say they remain committed to.

Some of the ideas will require more of a collective effort than others.

Participants in the workshops highlighted the need for widely accessible data access infrastructure — akin to what was offered via Meta’s CrowdTangle data analytics tool before the tech giant shut it down in 2024 — as a starting point, even though such projects, collectively, will likely cost tens of millions of dollars each year.

But many of the opportunities are more short-term than long-term.

That was by design. The workshops underpinning the report made clear that the independent research community needs technical and capacity-building support more than moonshot projects that may fail to deliver on the dual goals of increased transparency and accountability for social media.

The recommendations include expanded funding support to ensure academics and civil society groups are trained in world-class data protection and security protocols — preferably standardized across the whole research community — so that data about people’s social media habits is kept safe and not misused, as happened in the 2018 Cambridge Analytica scandal.

They also include programs to bring new researchers into data access regimes that often remain open to only a handful of organizations, as well as efforts to create international standards across different countries’ regulatory regimes so that jurisdictions can align, as much as possible, in their approach to social media data access.

Such day-to-day digital policymaking does not have the bells and whistles associated with the current AI hype. It is born of the realities facing independent researchers and regulators seeking to address clear and present harms tied to social media, not of the alarmism that artificial intelligence may, some day, pose an existential threat to humanity.

That, too, was by design. Often, digital policymaking, especially on AI, can become overly complex — lost in technical jargon and misconceptions of what technology can, and cannot, do.

By outlining where public and private funders can meet immediate needs on society-wide problems tied to social media, my co-authors and I are clear where digital policymaking priorities should lie: in the need to improve people’s understanding of how these global platforms increasingly shape the world around us.

This post was updated to include mention of the European Commission's preliminary finding that Meta and TikTok failed to comply with obligations under the DSA to make it easier for researchers to access public data on their platforms.
