Don’t Silence the States on AI
Tom Daschle, Darrell Steinberg, David Beier / Jun 24, 2025

Former Senator Tom Daschle (D-SD) is the founder of The Daschle Group. Darrell Steinberg is a former California State Senator and is now a principal of Steinberg Mediation and Consulting. David Beier is a venture capitalist and former chief domestic policy advisor to Vice President Al Gore.

Congressman Jodey Arrington (R-TX) is pictured with Speaker of the House Mike Johnson (R-LA) at a press conference discussing the House passage of the "One Big Beautiful Bill Act." (March 29, 2025, X)
Buried in the recent House budget bill is a sweeping provision that would prohibit any new state or local laws regulating artificial intelligence for the next ten years. Simply put, this is a moratorium on public oversight, without hearings, debate, or a clear regulatory alternative. That’s not just a bad process—it’s a shortcut that undermines both democratic norms and federalist principles.
With decades of experience in the public and private sectors, we understand the wisdom of restraint in regulating emerging technology. We believe deeply that there are moments when regulation stifles innovation and drives breakthroughs offshore. But this isn’t one of those moments. AI is too important, too fast-moving, and too central to our economy to leave to political shortcuts.
That’s why, even as the Senate reconciliation text appears to narrow the House’s sweeping moratorium without rejecting it outright, the full Senate should still reject the effort to bar state laws for a decade. Lawmakers should instead commit to what insiders call regular order: hearings, stakeholder input, and clear-eyed law-making. Call us naïve, but we still believe regulation should emerge from the democratic process, not from budget bill riders.
And let’s be clear: the American federalist system is designed to accommodate complexity. Over the past two centuries, Congress and the judiciary have navigated the blurry lines between state and federal authority over interstate commerce. Sometimes, federal preemption is appropriate. But those decisions should be made surgically, not by sweeping away every possible state statute related to AI, from civil rights protections to consumer safety enforcement.
A surgical approach would channel the wisdom of Justice Louis Brandeis, who viewed states as laboratories of democracy—places where new policy ideas could be tested, refined, and, when successful, scaled nationally. That principle is relevant now because we’re in the early stages of understanding how AI will shape work, markets, and daily life. We need experimentation, not a federal freeze. Rather than silencing state governments, Congress should focus on three urgent priorities.
First, subject-matter committees should begin updating the laws they oversee to reflect AI’s growing presence in the economy. Banking, health care, agriculture, education—each has its own dynamics, risks, and opportunities. Rather than pass AI laws in the abstract, Congress should examine use cases and consider whether tailored, sector-specific updates are needed. The committees closest to those markets know the stakeholders, understand the tradeoffs, and are best positioned to act quickly and precisely.
Second, Congress should mandate meaningful transparency and disclosure standards for AI systems that carry high potential for harm or confusion. These requirements shouldn’t be framed as pre-approval gatekeeping—AI is not a drug requiring FDA clearance before use—but they should create real, enforceable obligations around disclosure, recordkeeping, and model behavior. Transparency can’t just be a corporate value statement. It must be codified, enforced, and independently auditable.
Third, lawmakers must ensure that AI is not used to create catastrophic risks, such as autonomous weapons, biologically engineered threats, or tools to manipulate financial markets. Existing law may cover some of these edge cases, but a thorough review is needed. The public deserves assurance that Congress is not leaving the most dangerous uses of this technology in a legal gray zone.
And finally, as Congress considers federal legislation, it must tread carefully around general-purpose state laws—those that don’t target AI specifically but apply to all kinds of conduct. State statutes governing fraud, discrimination, negligence, product liability, and unfair business practices have long served as the front line of consumer protection.
Put another way, AI does not exist in a legal vacuum. If AI is used to help con someone out of their retirement savings, or if an AI pricing algorithm facilitates price-fixing in the housing market, the public should be able to seek justice in state court.
A broad preemption statute risks turning ordinary tort claims into federal cases. That would have massive implications for the judiciary. State courts handle roughly 100 million cases a year. Federal courts hear fewer than half a million. There are about 30,000 state court judges, and only around 1,700 federal judges. Swamping federal dockets with disputes over chatbots and automated decisions would grind the system to a halt.
This isn’t an abstract concern. We’ve seen this movie before. In past efforts to regulate securities fraud and digital advertising, Congress drew careful jurisdictional lines between federal and state oversight. There’s no reason it can’t do the same here, ensuring consistency without stripping states of the ability to protect their residents.
Yes, AI is a national challenge. But that doesn’t mean the solution must be exclusively federal. The right approach is a partnership: clear federal guardrails in high-risk areas, targeted updates to sectoral law, and state-level flexibility to enforce basic rights and responsibilities. Let Congress do its job—and let the states do theirs.
In the end, AI is not just a technological issue; it’s a governance test. We can either take the time to legislate wisely or allow procedural shortcuts to shape the future of a foundational technology. A decade-long gag order on the states would be a failure of both imagination and responsibility.