Perspective

California Governor’s Report Sidesteps AI Liability

Jonathan Mehta Stein, David Evan Harris / Jun 23, 2025

California Governor Gavin Newsom photographed in 2024.

AI is making rapid advancements and integrating into all aspects of our lives. Its dangers and flaws are by now well-known, but in the United States, no substantial government regulation exists — in fact, the federal government has welcomed it with open arms. With no meaningful action from Congress, it’s up to states like California to create a balanced, thoughtful regulatory approach that will enable us to enjoy the benefits of AI while protecting us from its harms.

Critical to that balanced regulatory approach is liability — enabling members of the public to take action to hold AI companies accountable if their products cause injury. Accountability of this kind exists for other industries, but not AI. Will it be sacrificed in the name of profit?

Potentially so.

Last September, California Governor Gavin Newsom made headlines when he vetoed Senate Bill 1047, the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act," from State Senator Scott Wiener (D-CA11). The bill sought to establish legal liability for high-risk AI systems, so harmed parties could seek accountability if AI went badly wrong, similar to consumer protection measures in other industries. That said, SB 1047 was notably more modest than liability regimes in other industries: it would have held companies liable only in the event of harms causing “mass casualties” or damages exceeding five hundred million dollars. The bill passed both chambers of the State Legislature with strong support, but it also generated significant public debate. Newsom commented that it was so contentious that it had “created its own weather system.”

In his veto letter, he wrote that “a California-only approach may well be warranted — especially absent federal action by Congress — but it must be based on empirical evidence and science…” He proposed the creation of a working group that would produce a report on the issue, a tactic similar to those used by the tobacco and fossil fuel industries when they sought to derail legislative efforts using what researchers call the “deny and delay playbook.”

This report is now here. While it thoroughly explores many aspects of AI policy, it fails to include any substantive consideration of AI liability. Unfortunately, because the report arrived more than halfway through the state’s 2025 legislative cycle, progress on meaningful AI liability legislation will likely stall until at least 2026. While the Governor positions himself as an opponent of Trump’s effort to impose a ten-year moratorium on state AI legislation, Newsom has, intentionally or not, managed to stifle the most meaningful legislative efforts to hold AI companies legally liable for at least two years.

In fact, because this report offers California legislators no clear guidance on how to proceed with AI liability legislation, and because it was released with fanfare and a clear endorsement from the Governor, the danger is worse than delay. Lawmakers may read the situation to mean that future AI liability bills will meet the same veto that SB 1047 did.

Absent a major change of heart from the Governor, Californians are now left without strong protections from known AI harms, such as scams, deepfakes, and discrimination. One option is a statewide ballot measure, which is what it took to put California’s nation-leading privacy laws in motion. The longer Sacramento fails to take clear and decisive action, the more likely it is that other stakeholders will take matters into their own hands.

Failing that, another alternative is to wait for courts to decide AI liability cases, a process that could take anywhere from a few years to a decade or more and may come too late for what the report describes as the “severe and, in some cases, potentially irreversible harms” wrought by AI.

That said, there is still room for important legislative action based on the principles endorsed in this report. Even though the report clearly states that it “does not argue for or against any particular piece of legislation or regulation,” it could also be read as supporting a number of bills currently making their way through the Legislature, most of which either fall into the broad class of “evidence-generating policy” that the report endorses or generally increase transparency around AI systems.

Assemblymember Buffy Wicks’s (D-CA14) AB 853 requires social media platforms to label AI-generated and authentic content, increasing transparency for users and helping researchers understand the impact of synthetic content on society. Assemblymember Rebecca Bauer-Kahan’s (D-CA16) AB 1018 requires companies to conduct third-party risk assessments of AI decision-making systems, again providing transparency and evidence of how AI is being used in critical contexts. Senator Scott Wiener’s new 2025 AI bill, SB 53, resuscitates two key elements of SB 1047: whistleblower protections for employees at frontier AI labs and steps toward creating CalCompute, a public AI computing cluster likely to be housed under the University of California. This report could be read as an implicit endorsement of bills like these, which we are supporting and would like to see reach the Governor’s desk for signature soon. At the same time, we wish that this year’s crop of AI bills went even further.

The Governor notes that the purpose of the report was to “...help California develop workable guardrails for deploying generative AI, focusing on developing an empirical, science-based trajectory analysis of frontier models and their capabilities and attendant risks…” Unfortunately, the final report’s recommendations stop one step short of guardrails. More evidence is always helpful, but we already have evidence in the form of AI scams targeting seniors, AI chatbots encouraging violence, nude deepfakes targeting women and girls, AI-generated disinformation polluting our civic discourse, and automated decision-making tools causing race- and gender-based discrimination.

What constitutes enough harm to finally take action?

Effective regulation gives clarity and shape to industries and direction to companies. We need regulation for AI so we can experience its benefits while being protected from the harms already emerging. But creating the “guardrails” the Governor mentions would mean introducing clear accountability mechanisms to hold AI companies liable for the harms that their products generate in the world. It’s time for California citizens and lawmakers to step up and demand more than evidence of AI’s harms — we need robust and enforceable laws to protect us from the harms that AI is already causing.

Authors

Jonathan Mehta Stein
Jonathan Mehta Stein is a long-time democracy advocate and civil rights attorney. He spent five years as the Executive Director of California Common Cause, during which time he founded the California Initiative for Technology and Democracy (CITED). Jonathan was previously the head of the Voting Righ...
David Evan Harris
David Evan Harris is Chancellor’s Public Scholar at the University of California, Berkeley, Senior Policy Advisor at the California Initiative for Technology and Democracy, Senior Fellow at the Centre for International Governance Innovation, and Senior Advisor for AI & Elections at the Brennan Cente...
