Analysis

Bridging the Gap Between Litigation and Science

Prithvi Iyer / May 7, 2025

Technology companies like Google and Meta face increasing scrutiny over their negative societal impacts, and the claims against them are often grounded in scientific literature. For example, in October 2023, New York Attorney General Letitia James and a coalition of 32 state attorneys general filed a lawsuit against Meta for its role in harming youth mental health, alleging that the company ignored findings from its own internal research team and hid them from the public. The lawsuit demonstrates the importance of science as a tool for holding technology companies accountable to the public.

But a new commentary in the Journal of Online Trust and Safety by J. Nathan Matias and Jonathon Penney shows that, despite the importance of scientific evidence in public policy, there is little research or guidance on science’s role in “the context of technology accountability and the legal process, leading to many open questions for litigators on documentation, evidence, legal standing, and causality.” Moreover, as it stands, lawyers and courts alike lack access to “the expertise needed to identify and interpret evidence, guided by experts who don’t have conflicts of interest with tech firms.” Litigators must understand the potential and the limits of scientific evidence, because evidence-based litigation is crucial to bolstering tech accountability. As the authors note, “due to regulatory gridlock and other weak governance responses, litigation and the judicial process become the ‘primary’ or even sole means of legal and public accountability for powerful technology companies.”

The authors examine a fundamental challenge: the legal system's uncomfortable relationship with statistical evidence. They find that even when academics present compelling statistical evidence, courts often cannot make sense of it because “cases are often built around individuals, while statistics describe patterns.” Given this knowledge gap, the essay bridges science and legal practice by identifying “failure points that have prevented statistical evidence from being consistent and usable to litigators,” while also offering recommendations for how to employ scientific evidence in legal cases pertaining to tech accountability.

The authors provide a historical perspective on the “uneasy relationship” between law and statistical evidence, notably seen in McCleskey v. Kemp, where the Supreme Court rejected statistical evidence of racial discrimination in death penalty sentencing, ruling that general statistical patterns couldn't prove discrimination in a specific case. Compounding the problem, law schools often don't teach statistics, and lawyers usually don't know the variety of tools available to scientists studying the phenomenon being litigated. As a result, “litigators are likely to contact scientific experts after a case has been defined rather than incorporating science into their core arguments—creating a further disconnect with science.”

Toolsets for Bridging Science and Law

The authors present practical approaches to help bridge this divide:

  1. Specific versus General Causality: Courts often reject scientific evidence because they believe statistics can describe general patterns but not prove specific cases. However, science can combine both, as environmental litigators do when they establish general and specific causation in the same case. The key is that causality is strongest when built on a “chain of evidence,” and it is crucial to bring in different experts to describe the general and the specific harms, respectively.
  2. Reverse versus Forward Causality: Courts often examine evidence from the past, while causal scientific research seeks to predict the future. To bridge this gap, the authors recommend using various methodologies, such as randomized trials, quasi-experiments, and simulations, to establish causation both prospectively and retrospectively.
  3. Systematic Review and Meta-Analysis: Courts can also struggle to interpret scientific evidence when there is no consensus: one study might make a claim that newer research refutes, leading to confusion and skepticism. The problem is compounded when researchers and news outlets prioritize self-serving results. To address this, scientists should avoid making general claims unless they are supported by a systematic review. At the same time, a lack of research on a topic can hinder action and allow powerful tech companies to “delay accountability by commissioning new research rather than fixing problems—allowing harms to persist for decades.”
  4. Pattern Causality and Discrimination: Antitrust cases against tech firms often rely on statistical evidence to show whether one platform was preferred over another. But studying discrimination can be trickier because it relies on counterfactuals (e.g., would this person have been hired if they were a different gender?). Since randomized controlled trials (RCTs) cannot be conducted in such cases, the authors recommend audit studies, a method increasingly used by computer scientists studying AI bias.
  5. Magnitude, Doses, and Trade-Offs: A key hurdle to integrating science into the legal process is the difficulty of interpreting findings, which often requires basic knowledge of experimental design, statistics, and, in some cases, econometrics. Relatedly, people often disagree on whether a particular finding is consequential. For example, when a study found that 3% of YouTube users were exposed to extremist content despite not subscribing to it, the platform deemed this “too low to be a matter of concern,” despite scientists believing otherwise. In court, judges tend to treat the effects of technology as “doses,” failing to recognize that technology's impacts aren't linear. Educating litigators on how to interpret scientific findings and assess their veracity is therefore crucial, especially when technology platforms can fund large studies that make grand claims premised on complex statistical analyses.
  6. Evidence of Negligence: It is important to document and act on instances where companies deliberately avoid research that might reveal harms, or ignore internal research findings that suggest problems, as seen in the lawsuit against Meta.
  7. Damages and Remediation: At a time when Big Tech funding has steered research on online harms in ways that protect companies' interests, the authors believe it is crucial for courts to “serve the public interest by creating similar funds to support industry-independent research from the resources secured in future rulings and settlements.”

Conclusion

Matias and Penney suggest there is much to be done to connect science and litigation for greater technology accountability. By systematically addressing the challenges of bringing scientific evidence into courtrooms, they provide a practical roadmap for more effective litigation against tech platforms.

As they note, "At its best, scientific evidence could protect firms from unsubstantiated accusations and obtain justice and redress for people who have faced systemic harms." Based on conversations at their workshop, they also offer questions for future research and collaboration. These questions address how courts can:

  • Differentiate between “good” and “bad” science.
  • Ensure that integrating reliable science into the legal process does not create barriers to justice.
  • Navigate conflicts of interest between academics and technology companies.

In a landscape where technology increasingly shapes our lives but accountability mechanisms lag, this essay provides crucial guidance for bridging the gap between scientific knowledge and legal action. The toolsets and questions it raises will likely influence technology litigation for years to come, and independent research on technology companies must be supported and nurtured as a counterweight to corporate capture.
