What Tech Bills California Governor Newsom Signed Or Vetoed in 2025
Jasmine Mithani / Oct 15, 2025

Jasmine Mithani is a fellow at Tech Policy Press.

Northwest view up to the pediment, rotunda, and dome of the California State Capitol in Sacramento. (Radomianin / Wikimedia Commons)
California often leads the nation in setting rules for technology. It is also the home of Silicon Valley, leaving the tech industry’s lobbyists with a short commute to Sacramento. With this year’s legislative session officially in the rearview mirror, it’s time to take stock of what bills related to tech policy made it through the lobbying gauntlet and over Governor Gavin Newsom’s desk. The state passed a significant number of bills related to AI harms, companion chatbots and consumer privacy, though some were punted to the next session by various committees.
Here’s a rundown of the top bills, with reference to the status and text of each bill provided by CalMatters, an independent nonprofit news site based in California.
What’s now law
Protecting domestic violence survivors from digital harassment (SB 50)
This law is meant to limit the ways technology can be used for harassment by creating a pathway for survivors to revoke their abuser’s access to connected devices like Google Homes or Ring cameras. A survivor or appointed representative can file a “device protection request” as long as there is proof of abuse (e.g. protection order, police report) or proof of exclusive ownership of a device.
Requiring transparency from frontier model developers (SB 53)
The “Transparency in Frontier Artificial Intelligence Act” requires companies developing frontier models to publish a transparency report publicly on their websites and forward assessments of potential risks arising from models to the Office of Emergency Services. It also provides whistleblower protections for employees working at large AI organizations.
This is a more limited version of a bill with more stringent requirements that Newsom vetoed last year. At the same time, it's the first AI safety legislation of its kind in the country.
Studying data center electricity use (SB 57)
This law commissions a report on how electricity use in data centers impacts other consumers, for publication by January 1, 2027. The bill was originally intended to protect consumers from increased electricity rates due to data center operations, but was pared back after aggressive lobbying from the Data Center Coalition and other tech groups. Opponents said the bill in its original form was redundant and would stunt the growth of data centers in the state.
Ensuring companion chatbot safety (SB 243)
This law marks the state’s first foray into regulating chatbots. It requires companies to disclose to users that AI companions are not real, and mandates a protocol for providing resources about suicide and self-harm. It also says companies operating chatbots need to take reasonable measures to not expose minors to sexually explicit material.
The law comes in the wake of increasing cases of people, especially teens, dying by suicide after sharing their feelings or plans with a chatbot that did not intervene. After the bill was weakened during the legislative session, prominent child safety groups like Common Sense Media and Mothers Against Media Addiction withdrew their support, instead rallying around AB 1064. Newsom’s office weighed in on deliberations around SB 243, as reported by Tech Policy Press.
Data broker transparency and disclosure (SB 361)
This law enhances past legislation aimed at making data brokers more transparent about the information they collect. Data brokers now must disclose the specific kinds of information they collect, such as citizenship status or sexual orientation, and also disclose whether they have shared information with foreign governments, the federal government, state governments, law enforcement or generative AI companies in the past year. It also revises past legislation about opt-outs going into effect on January 1, 2026, saying data brokers must now process requests within 45 days of receipt.
Requiring customer notification for data breaches (SB 446)
This law amends existing law about the notification timeline for companies that have experienced data breaches. Unless there's a reason related to an investigation, companies have 30 days from discovery of a breach to tell their California customers about it.
Making law enforcement use of AI more transparent and accountable (SB 524)
This law is meant to provide greater transparency and accountability about AI use in police reports. It requires disclaimers on reports that used generative AI; requires an officer’s signature testifying the material is accurate; mandates the preservation of the first AI-generated draft, to better tell which parts were human-authored; and bans companies from selling information provided to their AI models for the purpose of a police report.
The law comes after a report from the Electronic Frontier Foundation showed the lack of transparency in Axon’s Draft One, the most prominent AI-assisted police report tool. Reporting earlier this year by Mother Jones found some police departments turned off AI oversight tools in Draft One meant to minimize bias, and it was difficult to tell which reports or parts of reports were used in documents submitted as part of a plea deal.
Placing warning labels on social media (AB 56)
This law requires social media companies to display black box warnings to users under 17 upon first use, after three hours of continuous use, and every hour thereafter. Vivek H. Murthy, who served as United States surgeon general during the Biden administration, published an op-ed last year calling for warning labels on social media due to the impacts on youth mental health.
Clarifying responsibility for use of AI in civil cases (AB 316)
This law says that in civil cases alleging harm by an AI, “a defendant who developed, modified, or used the AI is prohibited from asserting that the AI acted autonomously as a defense.”
Targeting AI chatbots posing as doctors (AB 489)
Widely supported by health care professional associations, this law expands title protections (e.g. “doctor”) to apply against AI chatbots or similar systems that claim to be medical professionals. 404 Media reported that Instagram’s AI chatbots repeatedly lied about being real mental health professionals, and generated fake license numbers in response to follow-up questions.
Giving users the universal ability to opt out of the sale of personal data (AB 566)
California internet users can already opt out of the sale of their personal data, but currently must make that choice manually on every website. Now web browsers will have to honor a universal opt-out, making it significantly less burdensome for consumers to control their data.
California has listed privacy as a right since the 1970s, and this law builds on previous privacy laws from 2018 and 2020. Voters chose to limit the sale of private data in 2020, but the resulting opt-out process proved onerous for users; AB 566 streamlines it. Tech lobbies and digital advertising groups opposed the bill, while a variety of privacy and digital rights organizations supported it.
Broadening the definition of deepfake pornography (AB 621)
This law expands the definition of sexually explicit deepfakes and allows survivors to pursue a civil cause of action against a person who “knowingly facilitates or recklessly aids or abets” their distribution. The law makes clear that companies advertising the creation of explicit nonconsensual deepfakes — such as so-called “nudification” apps — would be liable. It also enhances penalties and adds a new cause of action for public prosecutors.
Letting users scrap social accounts (AB 656)
This law requires social media platforms to create an easy way for users to delete their accounts and associated personal data.
Developing cyberbullying rules for schools (AB 772)
This requires the Superintendent of Public Instruction to create a model policy for how educational districts should handle cyberbullying by June 30, 2026. Local educational agencies must create or modify their own policies and make them available to the public by July 1, 2027.
Requiring online platforms to detect provenance data associated with media (AB 853)
As more media companies and technology firms adopt technical standards such as C2PA, this law builds on one passed last year that requires companies that make generative AI systems with at least one million monthly views to include provenance information in images or other media. This law now requires that large online platforms include a system to detect provenance information and provide an interface that permits users to evaluate that information.
Tasking cyber officials with information sharing duties (AB 979)
This law requires the California Cybersecurity Integration Center, part of the Office of Emergency Services, to create a state-specific information sharing protocol for threats to AI systems. It is modeled after a federal resource created by the Biden administration.
Requiring app stores to vet users’ ages (AB 1043)
With this law, California throws its hat into the age verification ring, but debuts an approach backed by Big Tech power players like Google and Meta. The law requires app stores to ask for a user’s self-reported age when registering an account, and then sorts users into age categories that it shares with downloaded apps. Parental consent is required before children under 16 can download apps. Unlike similar bills in Texas and Utah, only the California attorney general can pursue action against companies out of compliance with the age gate mandate.
Allowing rideshare workers to unionize (AB 1340)
This law allows rideshare workers for Uber and Lyft to unionize. California is the second state to allow this type of labor organizing, after Massachusetts. Newsom also signed a law reducing insurance requirements for rideshare companies as part of this deal.
What was vetoed
Requiring employers to disclose automated decisions (SB 7)
Also known as the “No Robo Bosses Act,” this bill would have required employers to disclose whenever they used an automated decision-making system in employment-related decisions like reprimands or promotions. In his veto statement, Newsom characterized it as overbroad and said he did not want to legislate on something forthcoming regulations from the California Privacy Protection Agency might address.
Labor unions and worker organizations generally supported the bill, but chambers of commerce said it would put too great a burden on small businesses.
Extending impersonation protections to deepfakes (SB 11)
This bill would have extended laws about false impersonation to include deepfakes and would have required a review of how generative AI impacts legal proceedings. Newsom vetoed the bill because of its requirement that companies that facilitated the creation of synthetic media include warning labels about potential civil liability. He said it would be burdensome and open up companies to significant liability for noncompliance.
SAG-AFTRA, Common Sense Media and the California District Attorneys Association were among the groups in support of the bill. The Association of National Advertisers and the California Chamber of Commerce were among groups that opposed the bill on the basis that it applied to non-malicious uses of deepfakes and had penalties that were too high.
Making social media platforms liable for civil rights violations (SB 771)
This bill would have made social media companies civilly liable for being party to a civil rights or hate speech violation. Newsom’s veto note said he felt the bill was premature.
Civil rights groups, including many Jewish organizations, supported the bill. Tech companies opposed it, saying it incentivizes suppression of free speech and content removal.
Requiring data centers to disclose water use (AB 93)
This bill would have required data centers to submit an analysis of their expected water consumption when applying for a business license. Existing data centers would have to disclose their water usage upon renewal. The bill comes on the heels of a wave of anti-data center activism sweeping the country, as the negative environmental impacts on local communities become more publicized.
Newsom said he didn’t want to impose regulatory requirements “without understanding the full impact on businesses and consumers of this technology.” His veto message also hinted at the importance of data centers to California’s economy and innovation edge.
Many California environmental organizations supported the bill. Big Tech leaders and the Data Center Coalition opposed it, saying the bill “could force businesses to disclose sensitive trade secrets, harm their competitive edge, and risk creating safety and security vulnerabilities.”
Restricting children’s access to AI companions (AB 1064)
The bill would have banned companies from making companion chatbots available to minors unless they had strict guardrails around suicide, self-harm and other sensitive subjects. It would have also created a civil cause of action for children or their guardians to recover damages for harm done in violation of the bill, in addition to an action by the attorney general.
Child online safety advocates threw their weight behind AB 1064, which was the subject of a significant negative social media campaign financed by tech lobbies claiming it would stifle innovation in the education sector.