Perspective

Establishing Legal Incentives to Hold Big Tech Accountable

Isabel Sunderland, Emma Jones / Jan 8, 2026

Isabel Sunderland is a Policy Lead for the Technology Reform program at Issue One. Emma Jones is a policy intern at Issue One.

In the 21st century, Big Tech has mastered the same tactics Big Tobacco once used to protect its profits — deny harm, deflect blame, and delay accountability. What began as a handful of internet startups has evolved into a global cartel of data barons whose influence reaches deeper into American life than any industry before it. Their platforms shape elections, warp childhoods, and quietly rewire public discourse, while hiding behind legal shields and multimillion-dollar lobbying campaigns that outspend most governments.

Despite bipartisan outrage, even the most promising efforts to rein in this power have repeatedly collapsed under pressure. Reforms ranging from privacy protections to youth safety measures routinely run into the same obstacle: Section 230 of the Communications Decency Act, a 1996 statute stretched far beyond its original purpose. What was once intended as a narrow shield for fledgling internet forums has become a sweeping immunity doctrine that blocks lawsuits before discovery, robbing American families of their day in court and insulating some of the most powerful corporations in the world from meaningful accountability.

How Section 230 distorts platform incentives

Section 230(c)(1) states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” When Congress adopted this language in 1996, the goal was narrow: to protect early internet service providers — dial-up chatrooms, message boards, and nascent online communities — from being held liable for user posts.

But in the decades since, many courts have interpreted the term “information” so broadly that platforms largely enjoy immunity not just for user speech, but also for the design, engineering, and operational decisions that predictably facilitate harm. This doctrinal expansion has produced a simple but dangerous legal result: platforms avoid accountability even when their products’ architecture directly contributes to exploitation, addiction, or abuse.

This expansive interpretation has slammed the courthouse doors on many families seeking justice — particularly minors harmed by platforms that knowingly deploy features that expose young users to predators.

In Doe v. Grindr, a 15-year-old boy was raped by four men he met through the app. The complaint alleged that Grindr intentionally targeted young users on popular teen platforms, engineered a location-based recommendation system that matched minors with nearby adults, and failed to implement basic safeguards despite knowing that children were using the service. Yet the US Court of Appeals for the Ninth Circuit upheld dismissal of the case under Section 230, preventing even basic factual discovery about what the company knew and when.

A similar outcome occurred in Doe v. Snap, where a high school teacher groomed and abused a 15-year-old student using Snapchat’s disappearing message feature. Plaintiffs argued that the app’s design — not user content — facilitated the abuse by leading users to believe that their images would be erased. Once again, the court ruled that Section 230 barred the claims.

Courts justify these outcomes by invoking Section 230’s aspirational policy goals: promoting a “vibrant” marketplace of ideas, encouraging innovation, and fostering political discourse. But as legal scholar Mary Anne Franks explains, platforms have been shielded “even when they encourage illegal action, deliberately keep up manifestly harmful content, or take a cut of users’ illegal activities.” Scholars Danielle Citron and Olivier Sylvain have made similar arguments, documenting how courts have gradually transformed Section 230 from a targeted speech-protection provision into a sweeping corporate immunity shield.

The heart of the problem is that Section 230 has been interpreted to immunize all “information,” allowing platforms to sidestep the normal legal limits on conduct. Where other industries must show that their editorial decisions merit First Amendment protection, platforms can avoid that scrutiny altogether. Under long-standing First Amendment doctrine, courts distinguish between two categories: expressive speech or conduct, which is usually protected, and non-expressive conduct, which generally is not.

This is a familiar inquiry: protest armbands in Tinker v. Des Moines were protected as expressive conduct, while draft-card burning in United States v. O’Brien was held to involve both expressive and non-expressive elements, so the non-expressive part, the destruction of the draft card itself, could be regulated. In social media cases, however, once a court labels something “information,” platforms receive immunity, even when the claim centers on design features like algorithmic amplification, geolocation matching, and dark patterns. Section 230 has effectively short-circuited the kind of careful, element-by-element analysis undertaken in Tinker, O’Brien, and dozens of other cases, leaving the digital world without that healthy scrutiny.

An instructive case study

If Section 230(c)(1) were revised so that platforms are immune only for protected speech, not all “information,” courts in cases like Doe v. Grindr and Doe v. Snap would be required to ask a basic question: Did the alleged harm stem from expressive content, or from non-expressive product design?

Companies would then bear the burden of demonstrating that specific design features — like Grindr’s matching algorithm or Snapchat’s disappearing messages — served an expressive purpose entitled to First Amendment protection. In both cases, the allegations centered on product design that enabled exploitation, not on the censoring or evaluation of speech.

But today, courts routinely treat any feature that distributes or organizes user profiles as “publisher activity.” In Doe v. Grindr, the Ninth Circuit leaned on a line of precedent — Dyroff v. Ultimate Software, Herrick v. Grindr, and Doe v. MySpace — to conclude that because Grindr’s matching system processed user profiles and location data, any harm necessarily arose from publishing that content. Under this logic, even manifestly dangerous design tools receive immunity if they interact at all with user inputs.

This approach means courts may never evaluate whether the algorithm, the interface, or the notifications contributed to the abuse. Features that addict, escalate risk, or facilitate exploitation escape scrutiny before a single fact can be uncovered.

A targeted clarification of Section 230 would end this distortion. Courts could again distinguish between speech and conduct, and lawsuits alleging design-based harms could proceed to discovery — the minimal threshold for accountability in every other sector of American commerce.

Reforming Section 230 to immunize only protected speech would not curtail free expression. It would restore the constitutional distinction between expression and conduct that the statute’s current interpretation has erased. Political opinions, artistic expression, debate, and advocacy should and would remain shielded. But product design choices that platforms know will endanger children, facilitate exploitation, or drive addictive behavior are not “speech.” They are engineering decisions. And when those decisions predictably generate harm, the companies making them should not enjoy automatic immunity.

Just as liability forced the tobacco industry to curtail the manipulative marketing once engineered to hook children, targeted liability here would create strong incentives for platforms to prioritize safety, transparency, and responsible design. The result would not be less speech — it would be more accountability, and a healthier internet that does not require sacrificing public health and safety in exchange for corporate profit.

The cost of delay

Reforming Section 230 is not a silver bullet for the challenges posed by Big Tech. Issues ranging from data privacy to children’s online safety to national security vulnerabilities and AI risks all require sustained policymaking. But without a functional accountability mechanism — without the ability for courts even to evaluate harmful design — these reforms will continue to falter.

A narrowly tailored amendment to Section 230 would restore the judiciary’s ability to distinguish constitutionally protected expression from harmful content and allow meritorious cases to proceed. Through the discovery process, families would finally learn what the platforms knew, when they knew it, and how their product designs contributed to avoidable tragedies.

This reform advances a framework in which social media platforms can be held accountable for harms that result from their design and business choices, while continuing to provide a safe harbor where the harms result directly from the speech of their users. Under this framework, platform incentives would align more closely with a safer digital public square, including more robust protections for kids and effective safeguards against the exploitation of these spaces by bad actors, traffickers, and pedophile rings. Not only does Section 230 reform open the door to platform design legislation, it also shifts the burden to engineers and trust and safety experts, who can pioneer solutions that are more adaptive than the blunt instruments of legislation.

The choice before us is clear. We can continue to allow families, communities, and children to bear the cost of a system optimized for profit over safety. Or, we can update outdated laws so the companies that engineer the digital public square share some responsibility for the impacts of their technologies. A safer, more democratic internet is within our grasp.

Authors

Isabel Sunderland
Isabel Sunderland is a Policy Lead for the Technology Reform program at Issue One, a leading cross-partisan political reform group in Washington, DC. She works to advance state and federal policies on child safety, platform design, Section 230, data privacy, and national security.
Emma Jones
Emma Jones is a policy intern at Issue One focused on bipartisan tech reform, spanning social media, children’s online safety, and platform accountability. Prior to joining Issue One, she worked at Tulane University’s Institute for Data Science, applying computer science to public-interest research.
