2025 May Be the Year of AI Legislation: Will We See Consensus Rules or a Patchwork?
Jules Polonetsky / Jan 10, 2025

In 2024, lawmakers across the United States introduced more than 700 AI-related bills, and 2025 is off to an even quicker start, with more than 40 proposals on dockets in the first days of the new year. In Washington, DC, a post-election reshuffle presents unique opportunities to address AI issues on a national level, with one party controlling the White House and both houses of Congress. But while Congress has shown strong interest in AI generally, the 119th Congress seems more likely to prioritize other tech issues, such as online speech and child safety, over regulating the consumer protection aspects of AI.
In the absence of a federal law, state lawmakers across the political spectrum are showing a clear appetite to act. In 2024, Connecticut and Colorado lawmakers led similar legislative efforts focused on addressing “automated decisionmaking” in specific high-stakes scenarios like employment, education, and lending, culminating in the enactment of the Colorado AI Act. Meanwhile, a high-profile California bill backed by pioneering AI scientists took a dramatically different direction, aimed at specific critical harms potentially posed by “frontier models,” such as AI-driven biological weapons development and AI-boosted cyberattacks. The proposal passed through the legislature only to be vetoed by the Governor, and it is just one of many bills likely to be modified and brought back for debate in 2025. Another approach, recently proposed in Texas, is more expansive and aims to regulate a broader range of AI technologies and consumer harms.
With so many approaches to AI governance under consideration, there is a risk that states will adopt divergent or even conflicting regulations, resulting in a challenging regulatory patchwork. Many state policymakers are aware of the risks of pursuing incongruent rules and frequently look across state lines to develop policies to support innovation and protect their constituents. They are also aware of the need to understand the relevant tech, business practices, and potential harms to individuals to legislate effectively.
In 2023, several state lawmakers asked the Future of Privacy Forum (FPF) to serve as an independent, nonpartisan facilitator for conversations between lawmakers and experts on AI from industry, civil society, and academia. The Multistate AI Policymaker Working Group has become a network for lawmakers of every ideology to learn more about AI, discuss their ideas, and share legislative strategies. Any state policymaker is welcome to participate, and more than 45 states have been represented at informal discussions over the last year.
In our role as the group’s facilitator, FPF does not press lawmakers to adopt any particular regulatory approach, nor do we draft any bills. While lawmakers may choose to pursue different paths to legislating, we hope these conversations can form the basis for more convergent approaches to artificial intelligence across the US, mitigating the risk of widely conflicting regulatory frameworks. At a minimum, bipartisan conversations and education can help ensure that often under-resourced state lawmakers have access to the country's leading experts from academia, civil society, and industry.
In hosting these conversations, we have observed a few notable shared perspectives among many state lawmakers.
- First, policymakers are optimistic about developments in artificial intelligence. There is a sense that carefully drafted legislation can promote innovation by providing guidance and certainty, especially in areas where existing law may apply to a technology but its impact is unclear.
- Second, there is a recognition that a federal standard would be the best way to address most AI concerns. But without a federal AI bill on the immediate horizon, state leaders feel an imperative to both protect their constituents and generate consumer trust in these emerging technologies.
- Third, there is broad alignment on the wisdom of focusing on the most serious, concrete risks. New rights and protections should be geared toward use cases and outcomes where there is a demonstrated possibility of harm to individuals and society.
In our view, the US needs a federal approach to AI that promotes leadership and provides strong protections against the most important risks. There is real opportunity for Congress to advance federal standards, and we are eager and available to support policymakers in doing so. But, in the absence of national law, we expect constituents to demand, and state lawmakers to pursue, bills that tackle the most pressing AI risks. As we enter the year of AI regulation, it’s promising that lawmakers are collaborating across political aisles to find common ground.