To Craft Effective State Laws on Deepfakes and Elections, Mind the Details
Hayden Goldberg / Apr 22, 2025
The Hawaii State Capitol building in Honolulu. Shutterstock
2024 was the year of elections, and a significant fear throughout the year was that deepfakes would be used to undermine the information environment, harming voters and candidates. In states across the US, legislators felt some urgency to address this issue: sixteen states passed laws mandating labels on political ads containing deepfakes, and several more bills remain under consideration or await gubernatorial signature.
What risks, precisely, were state legislators trying to address with these laws? First, this article details the language used to describe risks in laws passed in 2024. Second, it summarizes key developments regarding that legislation, including related litigation, to provide recommendations for legislators in 2025. Given the free expression concerns synthetic media raises, it is imperative that legislation be written narrowly to fit its intended purpose while still giving governments the power to enforce the law. Finally, this article evaluates how effectively these laws address the risks they were intended to mitigate.
Risks addressed by 2024 state legislation
Across the world, many legislatures have emphasized a “risk-based” approach to AI governance. Deepfakes pose numerous risks, including fraud, identity theft, impersonation, and potential defamation or wrongful use of one’s likeness. While many scholars and advocates have invested time in identifying, understanding, and mitigating the risks posed by technologies like deepfakes, far less attention has been paid to the language that subnational legislators use to describe the risks they are trying to address when regulating deepfakes.
In 2024 legislative sessions, sixteen states passed laws prohibiting deepfakes without labels in political campaign ads and communications. Four states passed similar legislation in 2023 or earlier. Understanding the language used to describe risks in laws passed last year can help us identify places where these laws fell short and what can be done to improve them in 2025.
My research into ten state laws passed and enacted in 2024 or earlier found that legislators had a relatively clear idea of the risks they sought to mitigate. First, they worried about the security of elections and wanted to ensure elections were free from foreign and domestic threats. Second, as candidates themselves, they worried that speech falsely attributed to them could harm their reputations and reelection prospects. The third and final risk – the threat that false or deceptive information poses to the information environment – received the most attention.
Legislators argued that the public has the right to know if speech is false and if it has been manipulated, and that the public has a separate right to access true information. When considering the information environment, legislators generally gave serious consideration to the First Amendment during committee deliberations. They were cognizant that these laws regulate political speech, which is highly protected. Indeed, fears of infringing speech rights are why the Governor of Louisiana vetoed two bills that passed the state's legislature.
To address these risks, states took similar but not identical approaches, as illustrated in the table below. The laws fall on a spectrum in how broadly or narrowly they define important details that answer questions like “Who can enforce the law?”, “What is the legal standard for showing a violation of the law?”, and “What type of content does the law apply to?”
Taking the third question as an example, Hawaii limits the application of its law to political advertisements alone. Florida’s law is slightly broader, applying to political advertisements, electioneering communications, or “other miscellaneous advertisements of a political nature,” provided said advertisement or communication contains “images, video, audio, graphics, or other digital content.” In contrast, Alabama’s law applies to “media falsely depict[ing] an individual engaging in speech or conduct in which the depicted individual did not in fact engage.” Finally, Oregon’s laws apply to any campaign communication that contains synthetic media. Together, these examples illustrate the spectrum of content the laws cover.
| Spectrum | Who can enforce the law? | Standard of proof | What the law applies to |
|---|---|---|---|
| Narrow | Indiana: The impacted candidate can bring a complaint. | Florida: For a criminal penalty, proof beyond a reasonable doubt showing 1) the party paying for the ad failed to include a required disclaimer, which requires, in part, 2) that the ad was made with the “intent to injure a candidate or to deceive.” | Hawaii: Political advertisements only. |
| Middle ground | Hawaii: The depicted individual (including a candidate) or an organization representing voters can bring a complaint. | Arizona: For a civil remedy, clear and convincing evidence of 1) knowledge of falsity, 2) “intent to injure the reputation of the candidate,” and 3) intent to mislead a reasonable person. | Idaho: Electioneering communications. |
| Broad | New Mexico: The Attorney General, a candidate, a local prosecutor, the depicted individual, or an organization representing voters can bring a complaint. | Utah: For a civil remedy, clear and convincing evidence of “intent to influence voting.” | Oregon: Any campaign communication. |
Lesson from California: be narrowly tailored
Based on the risks legislators tried to address and some of the variations in the laws, what should the public focus on in 2025? What aspects of the laws should legislators emphasize?
For one, legislators must be conscious of how “narrowly tailored” the laws are. This is a legal standard for laws that restrict speech, and it boils down to this: because free speech should usually be prioritized, any restriction should be written to achieve a specific, well-defined purpose. In other words, it must be “narrowly tailored.” Unfortunately, in a lawsuit over the constitutionality of California’s 2024 law on deceptive media in political advertisements (AB 2839), a court found the law was not narrowly tailored enough to withstand constitutional scrutiny and paused its enforcement.
When considering whether to temporarily pause the law’s enforcement, the US District Court for the Eastern District of California was persuaded by the plaintiff’s argument that the law was an unconstitutional infringement upon political speech. Specifically, the court took an expansive view of the scope of content covered under the law. While certain restrictions on political speech are allowed because there is a “compelling interest in protecting free and fair elections,” the court held that the scope of this interest was narrower than the scope of the law. In the court’s view, because the law swept more broadly than the exception, it was unconstitutional.
Both sides are currently making arguments as to why the law should or should not be permanently enjoined, and they are emphasizing this gap between narrow tailoring and the government’s interest in conducting free elections. Kohls, the plaintiff, argues the law is unconstitutionally broad because it prohibits speech based on its content and because it compels speech in the form of a disclaimer. Meanwhile, California argues that the law fits squarely within its interest in protecting elections, given its purpose (preventing voter confusion), its high standard of proof, and its limits on when it applies.
However, California’s third point is where I believe its argument starts to show cracks, providing a lesson for other states. Most state laws limit the prohibition to a specific period around an election, usually 60 or 90 days, to help narrow the scope. AB 2839, by contrast, applied from 120 days before an election to 60 days after it. Here is what that looks like in practice:
| Type of election | Date of election | 120 days before | 60 days after |
|---|---|---|---|
| General | November 5, 2024 | July 8, 2024 | January 4, 2025 |
| Primary | February 5, 2025 | October 8, 2024 | April 6, 2025 |
| Special | April 29, 2025 | December 30, 2024 | June 28, 2025 |
That is, just one general election, plus the ensuing primary and special elections, covers nearly the entire calendar year. Never mind that the February special election wasn’t called until December 10, 2024, well into the period in which the law would already have applied. Or that several counties have elections scheduled for March, April, June, August, and November 2025.
Even beyond the legal reasoning, it is difficult to argue that a law that functionally applies for nearly the entire election year is terribly precise.
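The window arithmetic above is straightforward to verify. Here is a minimal Python sketch; the function name and structure are my own, for illustration only, not drawn from the statute:

```python
from datetime import date, timedelta

def ab2839_window(election_day: date) -> tuple[date, date]:
    """Return AB 2839's applicability window: 120 days before
    an election through 60 days after it."""
    return (election_day - timedelta(days=120),
            election_day + timedelta(days=60))

# The three elections from the table above.
for label, day in [
    ("General", date(2024, 11, 5)),
    ("Primary", date(2025, 2, 5)),
    ("Special", date(2025, 4, 29)),
]:
    start, end = ab2839_window(day)
    print(f"{label}: {start} through {end}")
# General: 2024-07-08 through 2025-01-04
# Primary: 2024-10-08 through 2025-04-06
# Special: 2024-12-30 through 2025-06-28
```

Because the three windows overlap, the law would have applied continuously from July 8, 2024 through June 28, 2025.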
A more precise enforcement period for such laws would be the 60 days before an election. That is long enough to preserve information integrity ahead of the election, while still leaving sufficient time to initiate civil action if necessary. Additionally, it leaves a window, outside the law’s reach, in which the media can correct misrepresented content in ads.
Locals and the importance of standing
The media’s role in correcting false speech is central to current jurisprudence on election speech. The role of journalism is to inform the electorate and serve as a check on candidates. This is all the more important in local races, which receive less attention from larger news outlets and the general public. Moreover, with an estimated half a million elected positions in the US, national media cannot possibly begin to cover them all.
Thus, the decline in local media is concerning because it creates a gap in the information environment. When voters have less information overall, a single piece of information can carry far more weight than it would if they had more. Hence, there is particular reason for concern about the potential impact of deepfakes on local races, where there may be little or no local news media to correct the record in the lead-up to an election. Moreover, local campaigns have significantly smaller budgets, so securing legal representation to obtain injunctive relief in a deepfake case can be a substantial expense.
Local government is also more closely tied to local campaigns and should play a greater role in mitigating the harms of deepfakes. Specifically, local prosecutors at the municipal and county levels need to be able to bring cases on behalf of voters who are harmed by deepfakes. Compared to state entities like the Attorney General, they are more knowledgeable about the local information environment and local campaigns.
As a best practice, laws introduced in 2025 should explicitly grant local actors, such as District Attorneys, City Attorneys, and County Prosecutors, standing to bring cases. This would follow the path of Michigan and Hawaii, which grant standing broadly and thereby address this issue.
Ensuring legislators adequately mitigate the risks they intend to address
Narrow tailoring and standing broad enough for wide enforcement help make labeling laws both effective and constitutional. However, these steps still address only some of the risks. Which actions taken by legislators will actually address the risks they identified?
California, once again, is illustrative. AB 2839 contained a (now enjoined) provision prohibiting deepfakes of election officials. In theory, this should weaken a “Liar’s Dividend” claim. The Liar’s Dividend allows someone to reject a video’s contents by claiming it is a deepfake: “This is not real.” But if deepfakes of local election officials are illegal, then claiming a video is a deepfake would amount to alleging a violation of the law, an allegation that could be investigated. Videos of election officials could therefore be presumed real, increasing trust in the election system because people would know a statement from their election official was authentic. In that respect, the provision addresses the protection of election integrity, one of the risks legislators envisioned.
However, restrictions on speech that target its content (i.e., its subject) are generally unconstitutional by default. By prohibiting deepfakes of a specific category of public employee, California’s attempt to protect the election information environment likely infringes on free speech. The goal of protecting the information environment is worth striving for, but courts’ current hostility toward regulating speech based on whom it depicts suggests that labeling is not the best method of increasing trust in local election officials; legislators focused exclusively on this risk should use other methods. For instance, to increase trust in local election officials, state governments should consider mandating the use of .gov domains for election offices, in alignment with cybersecurity best practices.
On the other hand, the laws do not address the right to access true information that legislators articulated. The labels do not require a disclosure saying whether the content is false; they require only a statement that the content was manipulated. For the most part, they do not require indications of content provenance, such as community-driven disclosures.
Nor do they require a disclosure identifying which timestamps in an ad were generated with AI, something I previously proposed so that viewers can break an ad down into chunks and place each chunk on a spectrum. That spectrum runs from objective truth (“the sky is blue”) to objective falsity (“the sky is purple”), but between those poles lie claims slightly modified by perspective (“the sky is darker” because I’m wearing sunglasses) and claims that require context (“the sky is orange” because of a nearby wildfire). Providing voters with information sufficient to place an ad somewhere along this spectrum is a better way of helping them make informed decisions. Ultimately, though, it remains a voter’s prerogative to make that placement.
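For concreteness, here is a purely hypothetical sketch of what such a per-timestamp disclosure might look like for a 30-second ad; the schema and field names are my own invention, not drawn from any statute or standard:

```python
# A hypothetical machine-readable disclosure accompanying a 30-second ad.
# Each entry marks a time range and whether that segment was AI-generated,
# letting a viewer break the ad into chunks and judge each one separately.
ad_disclosure = [
    {"start": "00:00", "end": "00:12", "ai_generated": False},
    {"start": "00:12", "end": "00:19", "ai_generated": True,
     "note": "candidate's voice synthesized"},
    {"start": "00:19", "end": "00:30", "ai_generated": False},
]
```

A viewer, or a platform rendering the label, could then scrutinize only the 00:12–00:19 segment rather than discounting the ad as a whole.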
Civil and criminal penalties, and the harms being addressed
Finally, one critical difference for the 2025 legislative session is the type of punishment and relief the laws authorize. Some states treat violations as criminal matters, while others treat them as civil matters. When states criminalize violations, they prohibit the conduct from the outset, suggesting legislators believe the creation (not just the sharing) of deepfakes is a harm worthy of prevention.
In contrast, civil remedies, most commonly the authorization of a preliminary injunction, emphasize stopping the spread of the harm among voters. These injunctions can force an ad or political communication to be taken down. This is intended to address the informational risks legislators articulated: an ad or communication without a label does not adequately inform voters whether the speech is false or has been manipulated. Therefore, to stop additional voters from being harmed, an injunction can keep the ad from being aired, protecting the information environment.
However, this creates a Catch-22 for plaintiffs, which should prompt legislators to think more critically about their approach going forward. Because campaigns run on a strict timeline, plaintiffs will likely request injunctive relief (nearly) contemporaneously with filing suit, which occurs before the parties exchange evidence in discovery. Plaintiffs are therefore forced to choose between waiting to gather the evidence necessary to obtain an injunction and, in the meantime, continuing to be harmed by the very content they seek to stop. While one could argue this is consistent with the extraordinary nature of injunctive relief, it is inconsistent with legislative testimony, which emphasized the urgency and timeliness of harms and the need to grant relief quickly.