How Information Asymmetry Inhibits Efforts for Big Tech Accountability
Hanna Barakat / Apr 16, 2025
WASHINGTON, DC—JANUARY 20, 2025: Guests including Mark Zuckerberg, Lauren Sanchez, Jeff Bezos, Sundar Pichai, and Elon Musk attend the Inauguration of Donald J. Trump in the US Capitol Rotunda. (Photo by Julia Demaree Nikhinson - Pool/Getty Images)
For decades, large technology companies have continued to perpetuate societal harm, from enabling human trafficking and predatory targeted ads to antitrust violations and content moderation abuses. As public evidence of such phenomena accrues, technologists, activists, academics, lawyers, and civil society organizations have struggled to overcome gridlocked paths to accountability.
While Europe is pushing for greater transparency through the Digital Services Act (though its effectiveness remains uncertain), the United States has made relatively little progress towards advancing transparency and accountability.
Big Tech’s consolidation of power—what Stanford scholar Marietje Schaake calls the "Big Tech Coup"—has never been more apparent. From Elon Musk's outsized role in the US government to the now famous picture of Google, Meta, and Amazon leaders at Trump’s presidential inauguration, Big Tech's political influence in the US is accelerating while viable strategies for institutional accountability feel like a distant echo.
Why is legal accountability so challenging to achieve? What evidence is brought against these companies? Who determines the connections between evidence and responsibility? Is the US judicial system equipped to handle the rapid and opaque nature of litigating Big Tech? Answers to these questions are embedded in a web of interwoven challenges, at the center of which lies a structural issue: an asymmetrical information ecosystem.
In this piece, we begin by analyzing Big Tech’s playbook for controlling the information ecosystem, exposing the systemic barriers that sustain its dominance–from Big Tech’s corporate capture of independent research and the closing off of public API access to the strategic use of Section 230 of the Communications Decency Act. Next, we zoom in on the US court system and explore how restrictions on information discovery, treatment of evidence, and perception of expertise effectively insulate these companies from legal accountability. Finally, we highlight the need for translation and collaboration for a more interconnected flow of information.
This piece builds on insights from the Exhibit X podcast series, led by Alix Dunn and Prathm Juneja of Computer Says Maybe, which unpacks the systemic barriers to holding Big Tech accountable in US courts. Across five episodes, they explore the aforementioned challenges with experts like Elizabeth Eagen, Alexa Koenig, Meetali Jain, and Frances Haugen.
Big Tech’s playbook for stifling information
Technology platforms have strategically obscured the landscape of available information, hindering efforts to track what harm occurs, when it happens, and how it unfolds. We highlight key tactics; however, they are neither exhaustive nor exclusive to Big Tech (for example, there are some striking parallels with the Big Tobacco industry’s tactics).
Tactic 1: Block access to information
This playbook begins with suppressing access to independent information and, consequently, independent research. Over the past few years, platforms have increasingly restricted Application Programming Interface (API) access, which previously allowed researchers and developers to access public data.
For example, Twitter/X restricted its Academic API, charging $42,000/month for enterprise access, ending many independent research projects. Similarly, Meta shut down CrowdTangle, its public insights tool, which researchers previously used to study misinformation.
By leveraging restrictive terms of service, escalating threats under the Computer Fraud and Abuse Act (CFAA), and aggressive litigation, Big Tech companies maintain a tight grip on information that suppresses independent research.
Tactic 2: Corporate capture of researchers
Big Tech’s growing influence over independent research is the flip side of restricted information access. Companies shape the field of knowledge production by funding grants, endowing professorships, and controlling access to data, all while suing researchers under terms-of-service claims. Big Tech’s influence operates explicitly through direct funding and implicitly by shifting research incentives and infrastructure, determining what research is deemed viable and who gets to conduct it.
As public funding sources like the National Science Foundation and National Institutes of Health face deep budgetary cuts, research universities increasingly struggle to afford the infrastructure necessary to keep pace with the costs required for computing research. Beyond funding compute, the Trump administration’s targeting of studies with keywords like bias, diversity, and accessibility threatens the research that advocates rely on for future litigation.
Tactic 3: Fill the information gap
As independent research faces growing suppression and corporate capture, the knowledge gap widens. Big Tech readily fills this gap by publishing its own research. A 2020 Brookings Institution study found that 38% of papers submitted to AI conferences had co-authors from large tech companies, compared with only 22% in 2000. That share has likely grown since then.
This shift appears to quietly distort academic incentives and position Big Tech experts as the sole arbiters of information and authority on technology’s societal impact.
An asymmetrical information ecosystem creates a dependency on whistleblowers
The above tactics mean that the tech accountability community is increasingly dependent on whistleblowers. Current and former employees who leak internal information are one of the few remaining ways to reveal hidden knowledge about Big Tech’s harms to the public. Whistleblowers provide invaluable evidence of those harms and help litigators understand the internal language to query in discovery.
Meta is emblematic: a company with a well-documented history of privacy violations, political manipulation, and harm to children. In 2021, former Facebook product manager Frances Haugen disclosed 22,000 internal Meta documents, which she shared with The Wall Street Journal and the Securities and Exchange Commission. The documents demonstrated that Meta was keenly aware its products were causing various kinds of harm.
Haugen asserts that Meta “knew about neurological vulnerabilities in teenagers, and they explicitly designed features to take advantage of them,” directly leading to worsening body image and mental health issues. The documents suggest Meta was well aware of its role in spreading misinformation after the 2020 US election and in increasing polarization in the US, Europe, and beyond. She reflects, “History will look back on these lawsuits as the first time we, as the public, were allowed to say how was the sausage made? What was the process of how we got these products in the end?”
While Haugen provided critical information for regulators, her case, like others before her, is a reminder that whistleblowers alone cannot bring about accountability. Even once such evidence comes to light, critical challenges arise within the court system.
The Courts
In the absence of transformative tech-policy legislation, accountability is often left to the courts. When Big Tech faces lawsuits, new barriers to information emerge, exposing systemic issues in information discovery, treatment of evidence, and perception of expertise.
Examining how information moves (or fails to move) through the US court system exposes the layers of stagnation and friction that hinder its flow and accessibility.
Information discovery
For years, Big Tech companies have strategically leveraged Section 230 and First Amendment rights as a legal shield to block information discovery, effectively acting as “a double whammy of insulation from accountability.” As Meetali Jain, founder of the Tech Justice Law Project, explains, “Section 230 is statutory. First amendment is constitutional, but they have the effect if they're both successful at foreclosing many kinds of liability from platforms.”
However, in recent years, more plaintiffs have brought cases against Big Tech, and more cases are moving into the discovery phase (e.g., in Lemmon v. Snap, the Ninth Circuit ruled Snapchat's high-speed filter was not protected under Section 230, setting a precedent for holding platforms accountable for their own features).
As more legal cases challenge the strategic use of Section 230 to block information discovery, the Tech Justice Law Project’s Jain remarks, “Everyone's looking around like, well, what do we ask for [in discovery]? We haven't been here before, and this is where I think the role of expert witnesses becomes critical.”
Perception of expertise
As courts are presented with an array of new evidence—from social media data to synthetic media—expert witnesses are required to interpret the evidence and measure its impact. However, given the sociotechnical nature of these proceedings, many expert witnesses studying Big Tech’s harms are social scientists.
Elizabeth Eagen, deputy director of the Citizens and Technology Lab at Cornell University, highlights a catch-22 when trying to address Big Tech’s harm through traditional legal frameworks: the expert social scientists studying these cases are reluctant to assert the direct causality ("X caused Y") often required in legal contexts precisely because these cases involve multiple intersecting factors and arise from a combination of variables rather than a single, direct cause.
Courts do not need to prove that a product is harmful to everyone in order to hold a company responsible for harms to individuals. But this mismatch between what courts need and what scientists are trained to provide gives opposing sides room to cast doubt on scientists and undermine their credibility. Eagen worries that "because scientists and litigators operate in different cultures of knowledge, they don't always realize that there's room for progress in better collaboration and mutual understanding of how their different fields work."
Jain echoes this sentiment. The US court system has yet to figure out how to map the value of sociotechnical experts onto the Daubert standard, which courts often use to ‘ordain’ expert witnesses. The challenge of presenting sociotechnical research in court has led to an over-reliance on open-source evidence and survivor testimonies.
Treatment of evidence
In the context of the US court system, the introduction of digital evidence has exposed challenges around proving reliability and authenticity. The decentralization of digital content on social media forces courts to triangulate fragmented data, underscoring the challenges of constructing a “more objective” understanding of the sociotechnical claims.
Dr. Alexa Koenig, director of UC Berkeley's Human Rights Center, states that the expansion of digital evidence, specifically in human rights law, has led to a “very different kind of relationship to data and to information” characterized by a “sense of distrust in the broader communities of practice… where everything is going to have to start being questioned.”
The distrust surrounding digital evidence falls into three areas: (1) establishing reliability and authenticity, which informs how much weight to give digital evidence; (2) navigating gray areas around the “chain of custody” (i.e., making sure the digital evidence has not been altered or tampered with); and (3) crafting a cohesive narrative that helps courts understand how disparate pieces of evidence fit together (as was crucially demonstrated in the International Criminal Court’s case against Ahmad Al Faqi Al Mahdi).
A triple timeline
The challenge of information asymmetry manifests across multiple dimensions: the speed at which information travels, the obscurity introduced by specialized jargon, the distinction between open- and closed-source evidence, and the varying degrees of permanence of digital evidence.
These asymmetries are further complicated by what Computer Says Maybe’s Prathm Juneja calls a "triple timeline challenge"—a tension between the rapid pace of Big Tech development and its resulting harms, the slower pace of academic research, and the even slower movement of the legal system. This misalignment creates a situation where academics struggle to identify what courts find relevant, courts struggle to identify what research is useful, and tech companies keep accelerating, avoiding repercussions.
Toward an interconnected information ecosystem
Mapping the barriers to information flow is a critical lens for understanding how Big Tech power operates in society. The monopoly of information represents one of the most potent mechanisms for consolidating power—those who control access to what people perceive as facts, especially within the court system, wield tremendous influence. Under the Trump administration, several recent developments exacerbate this monopoly:
- Budgetary cuts to the NSF and NIH, shrinking critical independent tech research.
- Plans to reshape the Federal Communications Commission—an independent agency that, for now, does not appear to have enforcement or interpretive power over Section 230 but does oversee key internet infrastructure and traditional TV media—in support of a conservative agenda, one unlikely to focus on Big Tech's harms to the most vulnerable communities.
- While the Federal Trade Commission appears to be proceeding with antitrust and consumer-harm litigation against Big Tech, Chairman Andrew Ferguson’s statements equating content moderation with censorship have raised concerns about the future of platform trust and safety work.
As legal efforts to increase transparency, investigate censorship, and break up Big Tech monopolies continue to play out, it is crucial to note that legal accountability differs fundamentally from social and political accountability—translation work between these approaches is necessary.
While accountability for Big Tech harms may never be fully achieved through the courts alone, distributing information across the ecosystem will allow for multidisciplinary collaboration and community-first approaches—work that continues thanks to the numerous independent organizations, such as the Tech Justice Law Project, Berkeley’s Human Rights Center, and WITNESS, working to hold Big Tech accountable for its harms.
For more discourse on strategies to combat information asymmetries, explore the Exhibit X podcast series from Computer Says Maybe.
- The Whistleblowers (featuring Frances Haugen)
- The Litigators (featuring Meetali Jain)
- The Courts (featuring Alexa Koenig)
- The Community (featuring Elizabeth Eagen)
- Tech and Tobacco