As Deepfake Bans Take Effect, Child Offenders Remain a Stumbling Block
Jasmine Mithani / Jun 24, 2025

When users first posted deepfake videos on Reddit eight years ago, producing fake sexual images of others, then most often celebrities, was a laborious, computationally intensive process. Nowadays, generative artificial intelligence tools have become a common consumer product, making this potential vector for abuse far more accessible to people of all ages.
One troubling consequence is how prevalent deepfake apps have become among young users.
In October, a 13-year-old boy in Wisconsin used a picture of his classmate celebrating her bat mitzvah to create a deepfake nude he then shared on Snapchat. Over the past few years, cases like this, of school-age children using deepfakes to prank or bully their classmates, have surfaced from coast to coast.
“If we would have talked five or six years ago about revenge porn in general, I don't think that you would have found so many offenders were minors,” said Rebecca Delfino, a law professor at Loyola Marymount University with expertise on deepfakes.
Federal and state legislators have sought to tackle the scourge of nonconsensual intimate image (NCII) abuse material online, sometimes referred to as “revenge porn,” though advocates prefer the former term. Laws criminalizing the nonconsensual distribution of intimate images — for authentic images, at least — are in effect in every US state and Washington, D.C., and last month President Donald Trump signed a similar measure into law, known as the TAKE IT DOWN Act.
But unlike the federal measure, many of these laws are not applicable to explicit AI-generated deepfakes. Fewer still appear to directly grapple with the fact that perpetrators of deepfake abuse are often minors.
Fifteen percent of students reported knowing about AI-generated explicit images of a classmate, according to a survey released by the Center for Democracy & Technology (CDT) think tank in September. Students also reported that girls were much more likely to be depicted in explicit deepfakes. According to CDT, the findings show that “NCII, both authentic and deepfake, is a significant issue in K-12 public schools.”
“The conduct we see minors engaged in is not all that different from the pattern of cruelty, humiliation and exploitation and bullying that young people have always done to each other,” said Delfino. “The difference lies in not only the use of technology to carry out some of that behavior, but the ease with which it is disseminated.”
Policymakers have come at perpetrators of image-based sexual abuse “hard and fast,” no matter their age, Delfino said. The reason is clear, she said: the distribution of nonconsensual images can inflict long-lasting, serious mental health harms on the target of abuse.
Delfino said that under most existing laws, youth offenders are likely to be treated similarly to minors who commit other crimes: they can be charged, but prosecutors and courts would likely take into account their age in doling out punishment.
Yet while some states have developed penal codes that factor a perpetrator’s age into their punishment, including by imposing tiered penalties that attempt to spare first-time or youth offenders from incarceration, most do not. Advocates say that by not taking age into account, officials risk exposing youth offenders to extreme charges with lifelong consequences and depriving them of opportunities for reeducation.
Jail time offers answers, and questions
A 2017 survey by the Cyber Civil Rights Initiative (CCRI), a nonprofit that combats online abuse, found that people who committed image-based sexual abuse reported the threat of jail time as one of the strongest deterrents against the crime. That’s why the organization’s policy recommendations have always pushed for criminalization, said Mary Anne Franks, a law professor at George Washington University who leads the initiative.
Currently, the punishment youth creators or distributors of nonconsensual deepfakes can face varies greatly by state.
Many states have sought to address the issue of AI-generated child sexual abuse material, which covers deepfakes of people under 18, by modifying existing laws banning what is legally known as child pornography. These laws tend to have more severe punishments: felonies instead of misdemeanors, mandatory minimum jail time or significant fines.
Delfino noted that while some laws do not specify minimum punishments for those who are found to have possessed, distributed or created the material, including a California measure dealing with AI-generated child sexual abuse material, others mandate minimum jail time no matter the age of the perpetrator, including a five-year floor in Louisiana’s law.
While incidents of peer-on-peer deepfake abuse are increasingly cropping up in the news, information on what criminal consequences youth offenders have faced remains scarce. A recent report by Stanford highlighted two high-profile cases where minors were charged with felonies for distributing nonconsensual images.
But there's also a significant amount of discretion involved in how minors are charged. Generally, juvenile justice falls under state rather than federal law, giving local officials added leeway to impose punishments as they see fit. One law enforcement officer interviewed by Stanford said his county attorney didn’t have much appetite for charging minors who distributed nudes, for instance.
Charges that come at a prosecutor's discretion are more likely to disproportionately criminalize Black, Brown and LGBTQ+ youth, said Lindsey Hawthorne, the communications director at Enough Abuse, a nonprofit fighting against child sexual abuse.
If local prosecutors are forced to choose between charging minors under severe statutes aimed at adults or declining to charge at all, most will likely choose the latter, she said. But declining also throws away an opportunity to teach youth about the consequences of their actions and to prevent reoffending.
A different approach to incarceration
Delfino said that in an ideal case, a judge in juvenile court would weigh many factors in their ruling: the severity of the harm caused by deepfake abuse; the intent of the perpetrator; and knowledge of adolescent psychology.
Experts say that building these nuances around intent directly into policy can help address offenders who may not understand the consequences of their actions, and can allow for different enforcement mechanisms for people who say they weren’t seeking to cause harm.
For example, recent laws passed this session in South Carolina and Florida have “proportional penalties” that take into account different circumstances, such as age, intent and prior criminal history — wrinkles influenced by the criminal justice reform movement. Both laws mirrored model legislation written by MyOwn Image, a nonprofit dedicated to preventing technology-facilitated sexual violence.
Founded by image-based sexual abuse survivor Susanna Gibson, the organization has been involved in advocating for strengthened laws banning nonconsensual distribution of intimate images at the state level, bringing a criminal justice reform lens into the debate.
Under the Florida law, signed May 22, offenders who profit from the distribution of nonconsensual intimate images are charged with felonies, even on a first offense. But first-time offenders who use intimate images to harass victims are charged with a misdemeanor; repeat offenses are charged as felonies. This avoids “sweeping criminalization of people who may not fully understand the harm caused by their actions,” Will Riveria, managing director at MyOwn Image, said in a statement.
South Carolina’s newly passed law addressing AI-generated child sexual abuse material, meanwhile, explicitly states that minors with no prior related criminal record should be referred to family court, and recommends behavioral health counseling as part of the adjudication.
A separate South Carolina law banning nonconsensual distribution of intimate imagery also has tiered charges depending on intent and previous convictions.
Beyond criminalization
Franks said that while her group has long recommended criminal penalties as part of the answer, there need to be more policy solutions for youth offenders than just threatening jail time.
Amina Fazullah, head of tech policy advocacy at Common Sense Media, said that laws criminalizing NCII and abusive deepfakes need to be accompanied by digital literacy and AI education measures.
That could fill a massive gap. According to Stanford, there currently isn’t any comprehensive research on how many schools specifically teach students about online exploitation.
Since most teens aren’t keeping abreast of criminal codes, AI literacy education initiatives could teach young users what crosses the line into illegal behavior and provide resources for victims of nonconsensual intimate imagery to seek redress. Digital literacy could also emphasize ethical use of technology and create space for conversations about app use.
Hawthorne noted that Massachusetts’s law banning deepfakes, which went into effect last year, directs adolescents to take part in an education program that explains laws and the impacts of sexting.
Ultimately, Franks said, the behavior that underlies deepfake abuse isn’t new, and so we do not need to rewrite our responses from scratch.
“We should just stick to the things that we know, which don't change with technology, which is consent, autonomy, agency, safety. Those are all things that should be at the heart of what we talk to kids about,” she said.
Like abstinence-only education, school programs that shame and scare kids about common practices like sexting are not an effective way to prevent abuse, Franks said, and can discourage kids from seeking help from adults when they are being exploited.
Franks noted that parents, too, have the power to instill in their children agency over their own images every time they take a photograph.
She also said there are myriad other ways to regulate the ecosystem around sexually explicit deepfakes. After all, most policy around deepfakes addresses harm already done, and laws like the federal TAKE IT DOWN Act put a burden on the victim to request the removal of their images from online platforms.
Part of addressing the problem is making it more difficult to create and rapidly distribute nonconsensual imagery — and keeping tools for deepfakes out of kids’ hands, experts said.
One avenue for change, advocates say, is applying pressure to companies whose tools are used to create nonconsensual deepfakes.
Third parties that help distribute them are also becoming a target. After a CBS News investigation, Meta took action to remove advertisements for so-called “nudify” apps from its platforms. Franks also suggested app stores could delist them.
Payment processors, too, have a lot of power over the ecosystem. When Visa, Mastercard and Discover cut off payments to PornHub after a damning New York Times report revealed how many nonconsensual videos it hosted, the largest pornography site in the world deleted everything it couldn’t confirm was above board — nearly 80 percent of its total content.
Last month, Civitai finally cracked down on generative AI models tailored around real people after payment processors refused to work with the company. This followed extensive reporting by the tech news site 404 Media on the image platform’s role in the spread of nonconsensual deepfakes.
And of course, Franks said, revamping the liability protections digital services enjoy under Section 230 could force tech companies’ hand, compelling them to be more proactive about preventing digital sexual violence.