Perspective

Why Both Sides Are Right—and Wrong—About a Moratorium on State AI Laws

Gideon Lichfield / May 23, 2025

Gideon Lichfield has spent nearly three decades as a journalist. A former editor-in-chief of MIT Technology Review and WIRED, he now writes Futurepolis, a newsletter about reinventing democratic governance.

House Republicans’ proposed 10-year moratorium on enforcing any state-level or local AI regulations has caused the predictable uproar. Its proponents argue that the AI laws now passing in dozens of states will create a patchwork of conflicting and often poorly drafted regulations that will be a nightmare for companies to comply with and will hold back American AI innovation. The countervailing view, set out in an open letter signed by more than 140 organizations ranging from universities to labor unions, is that a moratorium would give AI companies license to build systems that cause untold social harm without facing any consequences.

Both are right—just not entirely; both are wrong—just not completely. There’s an argument for a moratorium—but a much narrower one than what Republicans propose.

The idea of a “learning period” to let the AI industry develop before imposing laws on it was first floated last year by Adam Thierer at the center-right think tank R Street. He wrote:

An AI learning period moratorium should block the establishment of any new general-purpose AI regulatory bureaucracy, disallow new licensing schemes, block open-ended algorithmic liability, and preempt confusing state and local regulatory enactments that interfere with the establishment of a competitive national marketplace in advanced algorithmic services.

Over at Reason, Kevin Frazier fleshes out the argument:

A hodgepodge of state regulations, however well-intentioned, will inevitably stymie AI innovation. Labs could be subjected to conflicting, sometimes contradictory, compliance schemes. While behemoths like Google or Microsoft might absorb the legal and operational costs of navigating 50 different sets of rules, smaller labs and university research teams would face a disproportionate burden.

Frazier goes on to cite three bills currently before state legislatures. In California, SB 813 would establish a regulator that “certifies AI models and applications based on their risk mitigation plans.” In Rhode Island, SB 358 would make AI developers liable in some cases for harms caused to non-users of their systems. In New York, the RAISE Act would require AI developers to prevent their models from causing “critical harm,” obliging them to maintain safety protocols and submit to audits.

Frazier is right that these kinds of laws would burden the AI industry and create a maze of conflicting rules. And he’s right, in particular, to warn that this could disproportionately benefit the tech giants. In the EU, lawmakers are now getting ready to pare back GDPR, after evidence that smaller firms are drowning in the bureaucracy it generates.

But here’s the issue: laws like the ones Frazier mentions are a tiny minority of the enacted and proposed state regulations on AI. The majority put limits not on how AI is developed but on how it’s used. This is like the difference between telling GM and Ford what kinds of cars they can build and telling people how fast they can drive. Speed limits don’t hobble the auto industry. Rather, they help it by making driving safer.

You can see this just by skimming the National Conference of State Legislatures’ database of proposed and enacted state laws on AI (covering 2024 and 2025). Take, for example, laws adopted in New Hampshire and Alabama to ban political campaigns from using AI-generated deepfakes. Or those in Indiana and North Carolina, which prohibit AI-generated revenge porn (much like a federal law Trump signed on May 19). Or Illinois’s update to its human rights act, which says that employers who use AI-based tools to make hiring recommendations may not set them up to infer someone’s race from their zip code.

Are the laws a patchwork? Sure—and so are building codes, environmental regulations, road safety laws, and any number of other rules that states pass because, well, they’re states. The Republican version of the moratorium would rule out nearly all AI laws in this category. In fact, it’s much broader than Thierer’s original proposal, which really only addresses rules that would constrain AI developers.

There’s one other argument for a broad moratorium, however. Basically, it’s that laws to prevent bad uses of AI are unnecessary, because bad uses are already illegal. Here’s Frazier again:

[T]he rush to regulate at the state level often neglects full consideration of the coverage afforded by existing laws. As detailed in extensive lists by the AGs of California and New Jersey, many state consumer protection statutes already address AI harms.

And here’s Neil Chilson, a former chief technologist of the Federal Trade Commission and now head of AI policy at the libertarian Abundance Institute:

[C]ivil rights, privacy laws, and many other safeguards are completely unaffected by the moratorium. SOME requirements to tell customers they are speaking to an AI may be affected, but even those could be easily tweaked to survive the moratorium. Just change the law to require all similar systems, AI or not, to disclose key characteristics.

Is this right? In some cases, at least, arguably yes. I’m no lawyer, but it seems pretty clear-cut that Illinois doesn’t need to explicitly ban AI-driven racial discrimination in hiring because the non-AI-driven kind is already verboten. But will it be true in every case? It’s impossible to say. AI is such a powerful and general-purpose technology that we can’t predict all the ways it will be used; it might make possible harms that no existing legislation contemplates.

In short, I think there’s a case for a narrow, Thierer-type moratorium on laws that impose constraints on AI developers. (As Chilson also notes, the idea that this would allow AI companies to ride roughshod over us all is hyperbolic: “[T]raditional tort liability as well as general consumer protections and other laws would continue to apply. Deliberately designing an algorithm to cause foreseeable harm likely triggers civil and potentially criminal liability under most states' laws.”) But for the rest, the kind that try to prevent harms in how AI is used, there’s a case for the opposite approach: let the states legislate all they want, and watch what happens.

It’s often said that the states are laboratories for American democracy. The US is now running a giant experiment in AI legislation across 50 separate laboratories. Yes, it’s messy; yes, many of those state laws will be poorly drafted and unnecessary; yes, they’ll conflict. That’s the whole point. These various efforts could yield a wealth of data for anyone who actually wants to get AI law right, especially at the federal level.

AI companies in particular should welcome this. Washington is friendly to them right now, but it may not always be. A few years of data from the states could give them some ammunition against future overreach. And if lawmakers want a “learning period” for AI regulation, it’s hard to think of a better way to learn than by running 50 experiments at once.

