The Ongoing Fight to Keep Evidence Intact in the Face of AI Deception
Riana Pfefferkorn / Aug 14, 2025
Behavior Power by Bart Fish & Power Tools of AI / Better Images of AI / CC BY 4.0
Last week, television news host Chris Cuomo fell for a deepfake video of Representative Alexandria Ocasio-Cortez (D-NY) despite its being prominently labeled as AI-generated. Two weeks before Cuomo’s error, Senator Mike Lee (R-UT) posted to X a clearly fake resignation letter from embattled Federal Reserve chair Jerome Powell. Lee (who routinely amplifies misinformation) quickly deleted the tweet, claiming he didn’t know whether the letter was legitimate or not. Cuomo likewise deleted his post, though he bizarrely demanded that AOC disavow all the words the fake video had put in her mouth.
In an opinion column in the New York Times, Princeton professor Zeynep Tufekci references the Cuomo incident to highlight both the difficulty and necessity of verifying what’s real in the age of high-quality AI-generated images, audio, and video. As Tufekci notes, deepfakes are a demand-side issue, not just a supply-side issue: Many people (including, evidently, Cuomo and Lee) who consume and share fake content don’t care if it’s fake so long as it confirms their existing beliefs.
Nevertheless, as Tufekci points out, there are many contexts, from private interactions between individuals to the financial markets, where “truthiness” won’t do and we still need some way to establish content authenticity. Fortunately, that’s what many people in both policy and technical domains have been working on for years now.
On the policy side, lawyers, judges, lawmakers, and legal scholars have long been attuned to the deepfake threat. As Tufekci notes, in the deepfake era, a wrongdoer caught on camera could claim the video is a deepfake, or manufacture their own fake evidence to frame someone else — “Hey, it’s your word against theirs” — meaning ordinary people will need ways to “disprove false claims and protect our reputations,” though she warns about the incentives for increased surveillance.
These concerns were covered back in 2019 in a landmark law review article by law professors Danielle Keats Citron and Bobby Chesney, which coined the term “Liar’s Dividend” to describe claims of deepfakery leveled at real evidence. It’s a phenomenon that’s part and parcel of the tendency Tufekci identifies among authoritarian governments to be the sole arbiters of what’s true and what’s “fake news.” From Myanmar to Gabon to India to Spain, governments and politicians have contested the veracity of politically damaging audio, video, and images.
Tufekci warns that deepfakes risk unleashing “chaos” in the courts, but it’s worth noting that attempts to invoke the Liar’s Dividend there have so far proved unsuccessful. In the United States, many in the justice system, which has long dealt with fakes and forgeries, are working to keep that from happening. As a former litigator and judicial clerk, I was one of the very first people to talk about how deepfakes would affect courts’ evidence rules, principally in a 2020 law journal article. Since then, I’ve spoken about this topic to law students, journalism students, lawyers, congressional staffers, and state-court judges, and to the general public via podcasts and NPR. Other scholars, particularly law professor Rebecca Delfino, have now published far more than I ever did — and the pieces just keep on coming.
A common thread throughout my and others’ writings is a focus on Rule 901 of the Federal Rules of Evidence, which governs authentication. (State courts have their own versions, typically with very similar wording to the federal rule.) In light of the deepfakes phenomenon, the federal judiciary’s advisory committee on the evidence rules has been debating since October 2023 whether and how to amend Rule 901. At its most recent meeting in May, the rules committee decided a rule change wasn’t necessary at this time, citing “the courts’ existing methods for evaluating authenticity” (which I’ve long argued are sufficient) “and the limited instances of deepfakes in the courtroom to date,” though it proposed a draft new Rule 901(c) to have on hand should circumstances change.
That is: a group of judicial experts has been keeping an eye out, but the feared “tsunami” of deepfakes in court cases has yet to materialize; still, they’ve drawn up a plan in case the existing rules cease being up to the task. So far (potentially because of legal ethics rules), efforts to sneak deepfakes into evidence seem to be rare and readily unmasked, whereas there have been several attempts to take advantage of the Liar’s Dividend to impugn inculpatory videos — and from D.C. to Pennsylvania to California, those have failed. All of this should be reassuring to anyone worried about chaos in the courts.
Notably, the White House deemed the issue of synthetic media in the legal system important enough to include in last month’s AI Action Plan. The plan calls on the Department of Justice to file comments on proposed evidence rule changes and contemplates federal agency adoption of something akin to the proposed Rule 901(c). States, too, have been working to guard society and democracy from the threat of deepfakes. Though legislation governing deepfakes in elections has a hit-or-miss track record in the courts, starting next year California will require platforms of a certain size to give users a free AI detection tool and an option for labeling the AI content they generate. (TBD how reliable the former and how popular the latter will be.)
Technical experts, for their part, have been diligently working on digital content provenance and authentication, as Tufekci briefly mentions. The AI Action Plan also calls on the National Institute of Standards and Technology (NIST) to “consider developing NIST’s Guardians of Forensic Evidence deepfake evaluation program into a formal guideline and a companion voluntary forensic benchmark.” This nominally signals support from the Trump administration for NIST’s work to help preserve trust in the legal system, media, and business by promoting advances in forensic technologies. That said, it’s hard to know how seriously to take this passage, given that the administration has also canceled hundreds of grants for fundamental research on topics including misinformation and deepfake detection, caused dozens of layoffs at NIST, and tried to massively slash NIST’s budget.
The private sector is stepping in as well. Founded in 2019, the Content Authenticity Initiative, whose members include Adobe, Canon, Leica, Microsoft, Nikon, NVIDIA, and Panasonic, promotes an open-source industry standard for content provenance developed by the Coalition for Content Provenance and Authenticity (C2PA). There are now several C2PA-compliant cameras on the market. That is not to say the C2PA is immune from critique: besides the usual industry capture concerns, the cameras that currently utilize the standard are expensive enough that they are likely used primarily by professionals, and there are human rights implications to the C2PA system overall. Meanwhile, organizations like WITNESS and OpenArchive have been working on tools for helping people (especially in at-risk settings such as conflict zones) use their phones to document and preserve evidence — not just to safeguard the truth, but also to seek accountability.
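To make the provenance idea concrete, here is a minimal, hypothetical sketch (in Python, using the widely available cryptography library) of the logic such systems rely on: a capture device signs a claim that includes a hash of the file, and anyone holding the signer’s public key can later check both that the claim is authentic and that the file hasn’t been altered since. This is a simplified stand-in for illustration only, not the actual C2PA specification; the file name and “claim generator” string below are invented.

```python
# Illustrative sketch of signed content provenance: bind a claim to a file's
# hash, sign it, and verify both signature and hash later. NOT the real C2PA
# format, which embeds standardized, certificate-backed manifests in the asset.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sha256_of(path: str) -> str:
    """Hex digest of the file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


def sign_claim(path: str, key: Ed25519PrivateKey, generator: str) -> dict:
    """Produce a signed provenance claim binding `generator` to the file's hash."""
    claim = {"asset_sha256": sha256_of(path), "claim_generator": generator}
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": key.sign(payload).hex()}


def verify_claim(path: str, record: dict, pub: Ed25519PublicKey) -> bool:
    """True only if the signature checks out AND the file is unmodified."""
    payload = json.dumps(record["claim"], sort_keys=True).encode()
    try:
        pub.verify(bytes.fromhex(record["signature"]), payload)
    except InvalidSignature:
        return False  # claim was forged or tampered with
    return record["claim"]["asset_sha256"] == sha256_of(path)


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    # "photo.jpg" and the generator string are placeholders for illustration.
    record = sign_claim("photo.jpg", key, "hypothetical-camera-firmware/1.0")
    print(verify_claim("photo.jpg", record, key.public_key()))  # True if untouched
```

The real standard layers much more on top of this (certificate chains, trust lists, tamper-evident embedding inside the file itself), and decisions about who counts as a trusted signer are part of what drives the critiques noted above.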
From human rights activists to camera manufacturers, from academics to public servants, a lot of people are working very hard to preserve society’s ability to tell what’s real from what’s fake. We’re making progress, though not without setbacks, and the work can never be done so long as the state of the art in AI continues to advance.
The problem, as Tufekci identifies, is that many people simply do not care about a shared reality and ground truth anymore — fertile ground for authoritarians to flourish, as she warns. Today, a politician or TV personality can dictate what’s real to a significant portion of the population, reaping the Liar’s Dividend even after they themselves fall for deepfakes. That’s not something court rules and technical standards can cure — but it makes them all the more crucial as tools for pushing back against authoritarianism, and the work behind them all the more important.