How US States Are Shaping AI Policy Amid Federal Debate and Industry Pushback
Cristiano Lima-Strong / Jul 13, 2025

Audio of this conversation is available via your favorite podcast service.
In the United States, state legislatures are key players in shaping artificial intelligence policy, as lawmakers attempt to navigate a thicket of politics surrounding complex issues ranging from AI safety, deepfakes, and algorithmic discrimination to workplace automation and government use of AI. The decision by the US Senate to exclude a moratorium on the enforcement of state AI laws from the budget reconciliation package passed by Congress and signed by President Donald Trump over the July 4 weekend leaves the door open for more significant state-level AI policymaking.
To take stock of where things stand on state AI policymaking, I spoke to two experts:
- Scott Babwah Brennen, director of NYU’s Center on Technology Policy, and
- Hayley Tsukayama, associate director of legislative activism at the Electronic Frontier Foundation (EFF).
What follows is a lightly edited transcript.

The Assembly Chamber at the California State Capitol. Ben Franske / Wikimedia / CC BY-SA 3.0
Cristiano Lima-Strong:
Scott, Hayley, thank you both so much for joining us. We're speaking just over a week after the Senate ultimately opted to keep a moratorium on state AI rules out of the reconciliation package that was signed over the 4th of July weekend. Now, that measure, or certainly a version of it, could come back, and there was a lot of debate about which specific laws already on the books it would have potentially blocked.
But I wanted to put that aside for a bit, and I thought this would be a good moment to check in on what states have been up to when it comes to putting some of these rules into place, especially as more and more of them take effect and are being implemented, and then also look ahead a little bit to what trends we could see on the horizon. Scott, I wanted to start with you. You've been publishing these reports annually, looking at the state of tech policymaking, including particularly around AI. What have been some of the top-line trends that we've seen in terms of what's actually on the books when it comes to states setting rules around AI?
Scott Babwah Brennen:
Most of the attention on state-level AI regulation has focused on some of the biggest, kind of blockbuster bills. But while that happens, states have been passing a whole raft of smaller, more narrow sectoral bills that cover a single industry or just one sort of thing. Last year we saw many bills that did things like establish AI commissions or make appropriations to get money to universities for AI programs.
We saw a whole lot of bills doing things like requiring labels on political ads that contain deceptive or misleading generative AI. We saw some efforts to ban things like AI-generated NCII, non-consensual intimate imagery, or CSAM, child sexual abuse material, and these are the sorts of things that states have actually been focusing on most. That being said, we've also seen some of these larger, more comprehensive efforts, most notably the Colorado bill, what's been called a comprehensive bill, that was enacted last year and really focused on algorithmic discrimination across a bunch of different sectors. And then just last month we saw the New York Legislature pass the RAISE Act, which would regulate frontier models. As of today, it has not been signed or vetoed by the governor of New York, so we're waiting to see what she's going to do.
Cristiano Lima-Strong:
Yeah. So I want to circle back on that one because it's a biggie and it gets into some of the more high-profile battles that we've seen, especially last year in California. But Hayley, just to start off, what are some of your top-line trends, the biggest things that you've been seeing as far as what states are actually able to get passed and signed into law?
Hayley Tsukayama:
Yeah, I mean, definitely underscoring a lot of what Scott just said, right? We're seeing a lot on deepfakes, on deceptive media, on that kind of stuff. I think we've also seen an uptick in bills around workers and AI, on AI use and automated decision-making in the workplace, which I think is really of high concern to a lot of people, and it's been really interesting to see the different groups that have been activating around those bills.
I think it's not a community that you always see on tech bills, but it has been very interesting. Adjacent to AI, but I kind of consider them together, we've also seen a lot of pricing algorithm bills show up, really focusing on the ways that AI can affect people's pocketbooks, what they call kitchen table issues. So I think there have been a lot of interesting trends pulling people into this conversation around AI, which obviously affects all of us, and we're really seeing bills connecting threads that I don't know that I've seen as much before this year.
Cristiano Lima-Strong:
Hayley, I know you've been tracking a lot of legislation around AI in government and government use of AI. I remember a couple of years ago, when discussion was just ramping up at the federal level around what Congress could potentially do on this, some of the response that we heard. I remember Senator Gary Peters talking about, "Well, if we're going to try to set rules for AI, we should set some guardrails for our own use first." Do you see a lot of states taking a similar approach, and what do you make of that?
Hayley Tsukayama:
Yeah, I mean, we've certainly seen a lot of what I think of as public-sector AI and automated decision-making bills come up, and there are a couple of good examples in states. Overall, I think it's a really important area to focus on. When we're specifically thinking about AI and automated decision-making systems at the government level, often they kind of frame it as procurement, which is one of the most boring words people hear in a discussion.
But when you're talking about something that makes a decision, it's not like buying a printer. As we kind of say here, procurement, when you're thinking about AI, is essentially rulemaking, right? You're really thinking about the process by which people are getting flagged for extra review or, in some cases, actually getting decisions made about them by AI that may or may not have to be reviewed by a person. So I think we're seeing a lot of those conversations pop up, and I think that's a really good thing. And honestly, I think government should be held to a high standard when it comes to reporting when it is using these systems, thinking about how it is collecting the information that goes into those systems, and making sure they're being equitable.
Cristiano Lima-Strong:
Yeah. Scott, you talked about some of the working group bills. What's jumped out to you in terms of AI in government and some of the activity we've seen at the state level on that?
Scott Babwah Brennen:
To me, the headlines of the past six months have been less about what has passed and more about some of the actions behind the scenes and some of the sorting that we've seen. Most importantly, I think this year we're starting to see a partisan sorting in the kinds of bills that are being introduced and championed. In previous years we saw a lot more bipartisan collaboration, a lot more bills that had supporters from both parties behind them.
I think this year we're seeing less of that. We're seeing battle lines being drawn, and I actually think that AI in government has been drawn into that a little bit. I think the best example of this is what happened in Texas, where Rep. Giovanni Capriglione (R-TX98) introduced basically a version of the multi-state working group bill on algorithmic discrimination that was more or less aligned with what Colorado did. After some pushback, largely from members of his own party and many civil society groups, he rewrote that bill, kept the same name, and made it basically just focus on AI in government, taking out the provisions about algorithmic discrimination across all sorts of critical decision-making. And so we've seen this sort of retreat from some of the more far-reaching efforts back to some of these more government-oriented ones as well.
Cristiano Lima-Strong:
What are some other examples you've seen of the AI debate becoming a little more partisan in terms of the bills states are pursuing?
Scott Babwah Brennen:
I think algorithmic discrimination is probably the key there. I'm not sure if it's because the phrase "discrimination" pings the concern about woke ideology on the right, but where algorithmic discrimination bills had received some bipartisan support, that has sort of gone away. And so we've seen pretty much the collapse of these multi-state working group bills, at least in non-blue states.
And even in blue states, only some of them have actually passed so far. I think some of these consumer protection-oriented laws have fallen victim to this as well: we continue to see interest in consumer protection provisions in blue states, while red states, though there is some continued interest in things like deepfake disclosure, seem far more focused on government use of AI, on AI commissions, that sort of thing.
Hayley Tsukayama:
And just to respond, to be clear, I think it should be a both/and, right? I mean, I care a lot about government use of AI, and I obviously also care a lot about private use of AI. I think Scott's absolutely right, though; I think we are seeing more polarization around the issue, and certainly for a while it looked like kind of everybody was going to be interested in these bills and concerned about the issues that were coming up. And I agree, I do wonder also if it's that discrimination language that's sort of raised a flag.
Cristiano Lima-Strong:
Certainly you could see that pinging some of the DEI concerns that we've heard in Washington, so it makes sense that we would see some of that at the state level as well. We've talked a little bit about the Colorado legislation, which most view as the first comprehensive AI legislation. There's been some debate about whether we are likely to see more efforts like that, or whether states will continue to pursue more of this sectoral or piecemeal approach. Is there much sign of other states trying to jump on the comprehensive bandwagon? Hayley, any thoughts on that?
Hayley Tsukayama:
I don't own an accurate crystal ball, so it's a little hard to say. I mean, I think there's certainly interest. I have heard a little hesitation from folks because Colorado's law is under some threat of rollback, I guess is what I would say. When it was signed, the governor was saying, "Well, I want you to go back and look at this." There were attempts this year to make some amendments to that bill that I think would've made it less protective for consumers, but the clock kind of ran out on that.
There's a question of whether, if Colorado calls a special session, it could be revisited soon. So I get a little bit of a sense that folks are kind of waiting to see what happens overall with that bill. But I do think that the interest in the issue is still there, so it's a little hard for me to say, yes, it will be in these states and they will address it this way. As I said, the interest is there, but there's a little bit of wariness, I think, about what's going to happen to some of these state bills and also, frankly, about what the federal government is going to do.
Scott Babwah Brennen:
That sounds exactly right. I'll just add that in Virginia, actually, the legislature passed one of these bills and then it was vetoed by the governor early in the year. Then when you add to that the sort of weird politics with the Multistate Working Group and its host organization, FPF, the Future of Privacy Forum, where there was a well-publicized breakup after, I think it was Senator Ted Cruz (R-TX), called out the Multistate Working Group and FPF, it's created this odd, uncertain situation. And when you add to that the fact that the bill that had gotten the farthest this year was vetoed, yeah, it doesn't seem super promising. But I know that there is still, as Hayley said, a lot of interest in these bills in some states.
Cristiano Lima-Strong:
We've mentioned a couple of different instances of vetoes, and of course the specter of the moratorium. Last year there was SB 1047 in California, which would've required AI companies to conduct basically safety tests before rolling out some of their most advanced models. That passed but was then vetoed by the governor, and there was a lot of discussion at the time about whether this would have a chilling impact on states pursuing more aggressive legislation. Now there's a question of how the moratorium debate will unfold. At the same time, as you've alluded to, Scott, we have seen New York take up a similar proposal in the RAISE Act. So I guess this is my long-winded way of asking: does it seem like states are letting up, or are they forging ahead in the face of some of the political dynamics here? What do you think, Scott?
Scott Babwah Brennen:
Yeah, they absolutely do seem to be forging ahead. It's funny, I just hosted a panel on the RAISE Act yesterday, so it's very top of mind, but I mean, you're right. After 1047 was vetoed last year, there was a lot of uncertainty about what would happen. My understanding is that Assemblymember Bores (D-NY73) and Senator Gounardes (D-NY26) really tried to calibrate the RAISE Act to address some of the biggest concerns about 1047, and so it did not include some of the more unpopular provisions. The RAISE Act still faces a lot of pushback from industry and industry groups, but I think what's really interesting is that since the RAISE Act was passed, we've now seen Scott Wiener, who was the sponsor of 1047, revise his current bill, SB 53, to basically include some of these provisions from the RAISE Act. He doesn't go all the way back to 1047, but it's like a step closer.
In the press release, he of course didn't say this was because of the RAISE Act; he actually said it was because of the findings of the working group, which released a report a couple of weeks or a month ago. But we've also seen another similar bill introduced in, I think, Michigan that would impose a similar set of requirements. So after the success of the RAISE Act, yeah, we are seeing a sort of renewed interest in these frontier model bills, though I'll just say none of them have actually been fully enacted. So this could all look very different if Hochul vetoes the bill and Scott Wiener's bill doesn't go anywhere.
Cristiano Lima-Strong:
And these, of course, are some of the most aggressive or sweeping or protective measures in the country, and I think that's why a lot of people look at them as sort of litmus tests for where legislators are on this. Hayley, what are your thoughts on some of the opposition that those bills have drawn, and also on the specter of this moratorium, whether Congress will pass it down the line? Do you see that having an impact at the state level? Do you think the trend of legislators forging ahead is going to keep up?
Hayley Tsukayama:
Yeah, I certainly think we're going to continue to see legislators forging ahead. To me, what's important about making sure this is a multi-state conversation, where most of the action is happening in the states, is that you see people take these big swings, and then you get to kind of feel out where the traps are, where the opposition is going to come from, or what those talking points are going to be, and then you see in another state, oh, they've adjusted it to be this way or whatever.
So I think that's really important, and I think we are going to continue to see some of that fine-tuning across states, especially as legislators pay attention to each other. The moratorium conversation is certainly going to be influential. I mean, I think people are always going to be thinking about preemption and whether federal laws are going to override state laws, but I certainly think that in the formulation we just saw, which is, we're going to override all your laws and replace them with nothing, there was a huge pushback from state legislators. They know that these issues matter to their constituents, and so I don't think that kind of formulation is going to be particularly popular again.
So I do think that Congress is going to have to come to the table with something, with a proposal. I've done a lot of privacy work, so it's hard for me; I think about states a lot. I used to be a reporter for the Washington Post, and in 2010 I remember people being like, oh, this is definitely the year we're going to get a federal privacy bill. So I don't know that Congress is going to come up with something in a timely way that's actually going to stop some of these state laws from emerging. But I do think there will be a conversation between states and the federal government about what this proposal could look like, and maybe states will try and carve off smaller pieces. Maybe they won't be quite as ambitious with broad bills, but it's hard to say. It also varies state by state quite a bit; it kind of depends on which party is in power and in which state, and how far they're willing to push certain pieces of legislation.
Cristiano Lima-Strong:
This is a good point to disclose that we are both former Washington Post reporters, although we did not overlap. But on your point, as someone who has also covered privacy for a long time, it's notable to me that this specter of federal preemption on privacy has been around for a long time, and at the same time, we've seen states pass dozens of privacy laws now. So maybe that's something to think about as we look at how this moratorium debate plays out.
I wanted to just hit on a couple more buckets of AI bills that we've seen at the state level. There's of course been a lot of activity around child online safety, and to some extent states also just got a green light around age verification with the recent Supreme Court ruling in Free Speech Coalition v. Paxton, but we've also seen a trickle of more bills around chatbot safety and the algorithmic amplification of content to kids. How are we seeing lawmakers address concerns about AI and the potential implications for kids? Scott, do you want to take that?
Scott Babwah Brennen:
Yeah, I mean, kids' online safety has, over the past few years, been the second most prominent issue in the minds of a lot of lawmakers, along with AI, so it's no wonder that we're seeing them intersecting here. I think it is really important to distinguish between the bills that have been introduced and the laws that have been passed. Especially this year, folks have made a lot out of the fact that we've now seen more than a thousand bills introduced, but in reality we've only seen a couple dozen pass, and I think a lot of the child safety stuff actually falls into the introduced-but-not-passed bucket, especially this year.
Absolutely, there's a huge amount of interest across states in different ways to protect kids, but I'm not sure that much has actually passed squarely on the kids-and-AI front. We've seen the passage of things like the Age Appropriate Design Act in one or two states this year, I think, but beyond that, I'm not sure. Maybe Hayley has a better sense of what's actually been enacted so far this year.
Hayley Tsukayama:
I think that passed, is my recollection, and I do feel like there's another one, but it's just not coming to mind right now.
Cristiano Lima-Strong:
A lot of states, a lot of bills.
Hayley Tsukayama:
I know, I'm practically looking through the spreadsheet in front of me, but I'm not going to get there in time. I should say, and it's a little tangential to the conversation, that we have a lot of concerns about the speech and censorship implications of a lot of these bills. I do think, though, that they are a huge part of the AI conversation, and I do expect, again, don't bet on my predictions, but I do expect that with that case being resolved, if people were waiting to see what the result was going to be, they might have been sitting on legislation, and so we might see a spike in it again next year.
Scott Babwah Brennen:
Yeah, I'll just say, I think it's sort of an open question. The Supreme Court gave a green light to age verification for pornography online, but it's unclear as of now what that means for other types of age verification online. How that translates to questions about age verification for social media is the next big question. We've seen some of these bills pass in the last year that try to do that, impose age verification for social media, and I believe they've all been enjoined by the courts right now. So yeah, there are a lot of open questions.
Cristiano Lima-Strong:
We've touched on AI-generated non-consensual intimate imagery. Of course, at the federal level, there was the passage and signing of the Take It Down Act, which criminalizes the distribution of this material, but we've seen states take up a whole host of different bills around digital replicas and different types of deepfakes. Scott, talk us through a little bit of what you've seen as far as states grappling with these different types of digital replicas, forgeries, and ways to use AI to generate that type of material.
Scott Babwah Brennen:
Yeah, you're right. This big bucket encompasses a lot of different approaches. The NCII approach has maybe been the most common that we've seen across states, although now it probably won't continue because we have a federal law on it. We've also seen a number of bills, and this is more last year, I think, about publicity rights. This is actually a really important bill: the ELVIS Act that was passed last year in Tennessee, which was probably one of the main reasons the moratorium didn't go through, since Senator Blackburn didn't want to see it preempted. That law set restrictions on the use of artists' likenesses without their explicit permission, and I think Illinois and California have passed similar laws. So we've seen things from that side, the publicity side, and then, as I alluded to before, deepfakes in elections have been one of the other most common threads here, which is requiring labels on ads that contain deepfakes that are deceptive or, in a couple of cases, outright prohibiting that content.
Cristiano Lima-Strong:
Another bucket of bills that we've seen, in many cases passed, deals with various forms of transparency in AI. Hayley, I think you were talking a little bit earlier about transparency in AI in hiring. What are some of the different types of transparency-focused bills that we've seen actually signed, or that lawmakers have tried to get signed?
Hayley Tsukayama:
Yeah, I mean, we've certainly seen some bills trying to get at issues of provenance, so labeling whether something is AI-generated or not; watermarking is another term that comes up. It's about making sure that folks are labeling stuff that's AI generated. Again, I should say we have some speech concerns there as well, in terms of how a label might chill expression and that kind of stuff. But it's certainly a popular topic that we've seen pop up, and at least in California these bills are also under litigation right now. Still, provenance and watermarking is definitely another area.
Cristiano Lima-Strong:
There's a whole other host of directions we could go in here with AI. We haven't touched on copyright or other issues, but as we wind down the conversation, I wanted to hear from both of you a little bit about what you're looking out for as we look ahead. We've talked about some of the trends we've seen and what's been passed. Now that states, at least for now, are not facing this threat of a moratorium, which trends do you expect to continue or extend? What types of bills are you most interested in following at the state level moving ahead? Scott, do you want to start?
Scott Babwah Brennen:
Sure. This is influenced by the pieces that I'm working on. I'm working on a piece on consumer protection laws and AI, and one of the types of bills that we've seen a lot of this year has to do with insurance, medical insurance in particular: basically prohibiting AI in utilization review for medical insurance, or prohibiting AI as the sole determinant in utilization review. I have a lot to say about the fact that that may or may not already be covered in a lot of state law, but that's sort of beside the point. As far as I understand, none of these bills have actually passed, or maybe only a couple of them have, so I'm really curious to see how that plays out.
The other big area that I'm looking at is data centers, and again, this is because I'm doing some work on data centers. Last year and this year we've seen a ton of bills that try to, I'll just say, offer a different regulatory approach to data centers than we've seen in the past. Going back 10 or 15 years, the main way that states have dealt with data centers is to basically throw tax incentives at them, especially sales and use tax breaks. In the past couple of years, we've seen legislators introducing bills that do things like require audits of energy use, the impact on grids, water use, or the value that states are actually getting in return for these tax breaks. Again, my understanding is that none of these bills have actually passed, but they keep being introduced as data centers become more and more of a central topic in the AI policy debate. I'm really interested to see what's going to happen with some of these bills.
Cristiano Lima-Strong:
How about you Hayley? What are you looking out for?
Hayley Tsukayama:
I guess this is just because I take a multi-state perspective: certainly the companies that are most likely to be regulated by these bills have run a very effective lobbying campaign in different states and at the federal level. It seems fairly clear to me from what they're saying that they want as little regulation as possible, so I'm going to be on the lookout for bills being introduced that sound good on paper but have definitions or provisions or exemptions in them that really cut a lot of people out of being covered.
It's not types of bills necessarily, but I am going to be looking pretty closely at what language gets introduced, and whether those are bills that have industry backing, where they're saying, "Oh, this is regulation that we can live with." Because often, certainly as we've seen in the privacy fights, when they are advancing those kinds of bills, the bills probably aren't going to change their business practices very much. We've had a lot of big ideas thrown at the wall as states figure this out, so I'm really going to be looking into those definitions and making sure that we're not seeing language that is just artfully crafted by the companies that are going to be regulated by these bills.
Cristiano Lima-Strong:
Well, I think the fact that we've even been talking about a moratorium for the past several months is also an indication of some of that lobbying against regulation being very successful.
Well, we've covered a lot of ground and there's a lot more that we could be talking about, but we'll all be tracking this in the months to come, so I'm sure we'll chat again soon. Thank you both so much for joining me and talking about all this today.
Hayley Tsukayama:
Thanks for having us.
Scott Babwah Brennen:
Yeah, thank you so much.