
A New Section 230: Why AI Preemption Would Let Tech Off the Hook Again

Brad Carson / Sep 29, 2025

Sunrise at the United States Capitol. Photo provided by the Office of Congressman Robert Garcia via his House website.

Members of Congress are yet again preparing to roll out a bill that would preempt state laws on artificial intelligence. Strip away the polish and you’ll recognize a familiar playbook: granting Big Tech broad immunity with minimal safeguards and potentially no end date in sight. It’s essentially version 2.0 of Section 230, the liability shield that has allowed social media platforms to escape accountability for over a decade. This time, it’s aimed at letting tech off the hook for AI harms.

This new preemption push follows Washington’s failed attempt to impose a decade-long moratorium on state AI regulation earlier this year, which the Senate overwhelmingly struck from the One Big Beautiful Bill in July. Now lawmakers are reviving the concept in an expected preemption package that would bar states from enacting AI guardrails tailored to local needs and emerging harms.

That matters because while Congress has largely stalled on passing AI safeguards, states are responding to growing public concern. Lawmakers from Tennessee to California are enacting critical protections for young people online, for artists and creators, and for voters in our elections. The expected preemption bill threatens to wipe out those safeguards and instead entrench a system of zero accountability for the largest tech companies.

If this feels familiar, it should. Section 230 offered near-total immunity to online platforms for third-party content, and courts interpreted it broadly. The result was a regime that rewarded toxic content and addictive engagement over responsibility. We saw the viral spread of disinformation, the monetization of outrage and the normalization of products that hook kids while exposing them to exploitation and self-harm.

Now imagine handing tech a similar shield that blocks any state law that threatens to hold companies accountable for AI harms. That’s the preemption proposal’s trajectory.

Consider three parallels.

First, child safety. Section 230 dulled the incentive to design for child well-being on social media, and families paid the price. With frontier AI systems, we’re already starting to see the same story play out. This month, parents testified before a Senate Judiciary Subcommittee about the devastating impact of AI tools on their children. These parents — who watched their own children spiral into mental health crises, self-harm and even suicide after engaging with AI chatbots — urged senators not to eliminate accountability for AI firms by preempting state AI safeguards.

Second, election integrity. Platforms flourished under Section 230 while disinformation metastasized. But if social media provided a megaphone to those seeking to undermine our democracy, AI models threaten to hand those same bad actors a loudspeaker stack worthy of a concert arena. Across the policy landscape, state lawmakers are taking the lead on legislation that cracks down on deepfakes, voice clones and AI-enabled disinformation in elections. A federal preemption bill threatens not only to sweep those protections aside, but also to prevent state lawmakers from passing new laws as future AI harms emerge.

Third, accountability. Section 230 made it close to impossible for victims of harmful social media products to seek redress. Preemption would copy-paste that error into the AI era, insulating model providers and large platforms from state-level liability and consumer remedies. Voters don’t want that. In a recent poll by the Artificial Intelligence Policy Institute, 73% of Americans said AI companies should be liable for harms caused by their technology.

Over the past couple of decades, the Section 230 model for regulating tech has failed badly, not just in its consequences for users online, but in the inability of lawmakers in Congress to fix a legal framework that has become the foundation for much of today’s tech industry. The lesson is clear: develop a high-powered industry in a low-accountability environment, and the political will to address its harms later will fail to materialize.

Preemption’s defenders insist that a patchwork of state laws is overwhelming frontier AI labs — some of the best-funded companies in the world — and that national leadership demands a single rulebook. Putting aside the question of whom preemption legislation is designed to benefit, such a strategy only makes sense if the proposed federal rulebook for regulating AI is real.

A substantive national framework would set enforceable duties of care, require risk assessments and incident reporting for high-risk systems, guarantee transparency to researchers and regulators, keep dangerous systems offline and preserve state authority in the domains where harms manifest, such as consumer protection and child welfare. Anything less is not harmonization; it’s abdication.

Preemption isn’t just unpopular with voters; it’s a lightning rod in Congress. Lawmakers already demonstrated in July that there is broad, bipartisan discomfort with blanket preemption in this space. Members of Congress recognized that bulldozing state safeguards isn’t “pro-innovation,” as proponents contend; it’s pro-immunity for Big Tech. Innovation and adoption flourish when the rules reward trust and quality, not just reckless speed.

We don’t need round two of Section 230. We need thoughtful policies that protect people while letting responsible innovators compete and win. If we learned anything from the last 25 years, it’s that immunity without responsibility doesn’t make technology better — it makes it much worse.

Authors

Brad Carson
Brad Carson is president of Americans for Responsible Innovation. Carson is a former congressman representing Oklahoma's 2nd District and served as acting Under Secretary of Defense.
