AI Experts, Officials, and Survivors Talk Policy Solutions in First-Ever Global Summit on Deepfake Abuse

Kaylee Williams / Mar 22, 2024

The last five years have seen an explosion in the creation, distribution, and sale of non-consensual, sexually explicit deepfakes, said Sophie Compton, documentary filmmaker and co-founder of the My Image My Choice Foundation, in her opening remarks Tuesday at the first-ever global summit dedicated to mitigating deepfake sexual abuse.

Compton kicked off the two-day virtual conference by defining deepfake abuse, often referred to in the mainstream press as “deepfake pornography,” as a form of image-based sexual violence in which a person’s likeness is digitally inserted into sexually explicit content without their consent. Studies have shown that this form of image-based sexual violence is disproportionately used to sexualize, humiliate, and otherwise abuse women online.

The emerging phenomenon has received widespread coverage in the American press recently, especially after a series of sexually explicit, AI-generated images featuring singer-songwriter Taylor Swift went viral on X in January.

The event, which was organized by My Image My Choice in collaboration with the Reclaim Coalition, Bumble, and other partners, brought together a cross-sector speaker roster of AI and machine learning experts, lawyers, tech investors, government officials, journalists, activists, and survivors from all over the world to discuss the consequences of image-based sexual abuse, as well as possible methods to combat the growing threat of sexually explicit deepfakes.

“It deprives us of our right and our capacity to self-determine in this world,” said Noelle Martin, a pre-doctoral researcher in the University of Washington’s Tech & Policy Lab, in the summit’s opening panel. “And it has an impact for life—in perpetuity—for your employability, for your future earning capacity, your interpersonal relationships, your romantic relationships. Every single part of your life is impacted.”

Across various presentations and panel discussions, several experts noted that in 2024, deepfake abuse has been “mainstreamed” thanks to the rise of publicly available AI-powered image generator tools and “Nudify” apps. In years past, generating a realistic-looking deepfake required large datasets of high-quality images of a particular victim, along with a working knowledge of machine learning techniques. But now, these widely available and low-cost tools enable anyone—including those with little to no technical expertise—to create hyper-realistic, sexually explicit material, in some cases using only a single authentic image.

As a result, the total known number of sexually explicit deepfakes available on the open internet has skyrocketed from just over 14,000 in 2019 to more than 270,000 today, an increase of roughly 1,780 percent, according to a forthcoming study commissioned by the My Image My Choice Foundation. The researchers found that these videos have collectively been viewed more than 4 billion times.

The majority of that traffic is driven by search engines such as Google and Bing, which, as NBC and other news outlets have reported, readily surface abusive deepfakes—as well as instructions for how to create them—at the click of a button.

Multiple event speakers, including UC Berkeley Professor Hany Farid and disinformation researcher Nina Jankowicz, called for Google and other major technology companies to prevent websites and apps dedicated to abusive deepfakes and other non-consensual pornography from showing up in search results.

“They just have to enforce their terms of service more aggressively,” Farid explained. “Google controls everything, and we should be putting pressure on them to just simply enforce their rules.”

Other panelists stressed the importance of government intervention in the effort to mitigate the spread of deepfake abuse. Lawyer Carrie Goldberg, whose firm specializes in victims’ rights, explained that only about a dozen US states currently have laws on the books that prohibit the creation or distribution of sexually explicit deepfakes. This means that throughout much of the country, victims have few legal options if their images are misused in this way.

“Most laws that criminalize image-based sexual abuse are not expansive enough to also include deepfakes,” said Goldberg. “So one of the things that we’re seeing is a lot of states are slowly, one by one, modifying their non-consensual pornography laws to include deepfakes as well.”

Dr. Ann Olivarius, an anti-discrimination lawyer, argued that nationwide restrictions on online speech—a notoriously hard sell in the United States—might provide an alternative regulatory approach.

“We’ve given out warnings, but I would argue that because it was about women—and 99 percent of deepfakes are about women—nobody’s done a damn thing,” said Olivarius. “We’ve got to come to terms with actually looking at the First Amendment and putting some restrictions on the First Amendment…If we don’t, then I would suggest our society is going to be destroyed for everything of the values that we hold dear.”

Others turned their attention to Section 230 of the Communications Decency Act, which shields platform companies like Meta, X, and others from legal liability for potentially harmful content published by users.

Farid, for example, argued that removing this legal immunity for platform companies would “change the financial incentives” which he suggests currently discourage those companies from taking meaningful action against deepfake abuse.

“We have to reform Section 230 of the Communications Decency Act, so that it says you don’t get off the hook just because you’re a tech company, and watch how smart and how fast these companies will pivot,” said Farid.

“In any other industry, if you are harmed by a company, you can have your day in court to hopefully remedy the situation,” said Norma Buster, the chief of staff at Goldberg’s Brooklyn-based law firm. “The tech industry is the only one where we don’t have the opportunity to have our day in court.”

One fact on which nearly every speaker seemed to agree was that “AI watermarking,” a technical intervention recommended by the Biden-Harris Administration’s “Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence,” is insufficient for mitigating the harms experienced by victims of image-based sexual abuse.

Fatima Anwar, a researcher, filmmaker, and lawyer at the human rights nonprofit WITNESS, explained that this is because most abusive deepfakes are advertised as such or posted on websites dedicated to AI-generated media, which suggests that their creators likely never intended to pass them off as real footage. “Everyone could know that it’s a fake video, and it’s still a violation, and it still destroys someone’s life,” Anwar said.

In a special address, Senior Advisor to the White House Gender Policy Council Cailin Crockett said that technology-assisted gender-based violence has been a “top priority” for the Biden-Harris Administration. Crockett pointed to several efforts the Administration has made to mitigate these harms, including the 2022 establishment of the White House Task Force to Address Online Harassment and Abuse, and several federal grants recently awarded to the Cyber Civil Rights Initiative and other organizations advocating on behalf of victims.

“From doxxing, to targeted disinformation campaigns, to deepfakes and non-consensual intimate images, all of these threaten women’s participation and leadership, which, in turn, undermines democracy,” Crockett said. “And at a time when democracy is under attack around the world, we can’t let online harms that impede women and girls’ participation in all facets of society go unchecked.”
