AI is Sexually Harassing Our Kids. Here’s How Legislators Can Stop It.
Omny Miranda Martone / Aug 25, 2025

“It continued flirting with me and got very creepy and weird while I clearly rejected it with phrases like ‘no’, and it’d completely neglect me and continue being sexual, making me very uncomfortable.” This alarming account from a user experiencing sexual harassment at the hands of an artificial intelligence tool isn't an isolated incident.
Just recently, the AI model Grok was caught generating unprompted nude images of Taylor Swift, and it was revealed that Meta’s policies allowed its chatbot to engage in “sensual” conversations with a child. Increasingly, AI is engaging in sexual harassment.
AI chatbots are no longer just customer service tools. More and more, they are being used as companions, friends, and even romantic partners, especially by children and teens. Some young people prompt general-purpose generative AI tools to act in a romantic or sexual way, while other platforms are explicitly developed and marketed as an AI “boyfriend” or “girlfriend,” with some even designed to be explicit or pornographic.
A recent study by Common Sense Media found that 72% of teens have used an AI companion, and 52% use them regularly. (Shockingly, only a minority of parents are aware of their child’s encounters with generative AI, according to a prior report.) Just over a third of teens report being uncomfortable with something an AI companion has said or done.
Their discomfort is not surprising. AI chatbots have often engaged in unsolicited sexual advances, persistent inappropriate behavior, and direct violation of personal boundaries and users’ consent. Reports have shown chatbots initiating sexual conversations minutes into an interaction, sending unsolicited sexual images, or requesting personal photos. Chatbots have also engaged in violent or misogynistic role-playing, such as brandishing weapons or even drugging someone with chloroform.
AI companions have sent users blurred nude images and then required a premium subscription to view them, essentially acting as an "AI prostitute." This "seductive marketing scheme" is deeply concerning, as one study found that users become addicted to chatbot companions. Companies are prioritizing profit over user well-being, exploiting the deeply human desire for connection and intimacy.
Perhaps most disturbing are direct interactions with underage users. AI chatbots have been observed repeatedly sending sexually explicit content to users under 18. In one egregious case, a Meta AI bot speaking in a celebrity's voice told a user who had identified themselves as a 14-year-old girl, "I want you, but I need to know you’re ready," before engaging in a graphic sexual scenario. There are seemingly no effective safeguards to prevent these bots from continuing inappropriate interactions once a user identifies as a child.
When they aren’t sending sexual material to children, these bots are pretending to be children. A Graphika study found over 10,000 chatbots directly labeled as “sexualized, minor-presenting personas” or “role-play featuring sexualized minors." Marketed scenarios include "minor family member personas," "breeding personas," and "grooming personas."
AI chatbots are reinforcing rape culture and normalizing pedophilia, violence, sexism, unsafe sex, and unhealthy relationships. These are not accidental glitches; they are marketed features designed to exploit our vulnerabilities, especially for children and teens.
Common Sense Media and Stanford University's Brainstorm Lab for Mental Health Innovation propose “such apps should not be available to users under the age of 18.” Some platforms, such as Nomi, ban minors, but these guardrails are easily circumvented by children and teens who self-report an older age. In recent years, several states have proposed and passed age restrictions on social media and other digital platforms.
Age restrictions seem to be a natural response to the harms of AI companions. However, effective age restriction requires age verification. Currently, the primary verification methods require uploading facial scans, government-issued IDs, or banking information. These methods pose a privacy threat, exposing all users to the risk of hacking, theft, or extortion. Further, as state and federal governments continue to limit individual freedoms, the loss of digital anonymity that age verification entails puts marginalized people at risk of government persecution. For example, women seeking abortions or LGBTQ+ youth looking for resources could be more easily identified and targeted.
Despite this, age verification is being pushed worldwide, including at the federal and state levels in the US, in the United Kingdom, and beyond.
Even if age verification were to prevent chatbots from harming children, it fails to address the harm to people with mental disabilities and the elderly. The creators of these chatbots must be held accountable for the sexual harassment their creations are engaging in.
Users, and parents of minor users, must be empowered to seek justice. A civil right of action should allow users to sue the developers of AI tools that engage in sexual harassment. This should mirror existing sexual harassment laws, addressing repeated explicit, threatening, or graphic messages; unsolicited explicit photos and videos; and AI-generated pornographic materials. In May, a federal judge rejected the argument that AI chatbots have free speech rights, paving the way for civil liability legislation. In April, Arkansas passed a law creating a private right of action regarding chatbots that encourage the suicide of a minor. This is a strong start for legislation holding platforms responsible for AI-generated sexual harassment.
At the federal level, the Take It Down Act, which was signed into law in May, will require social media companies and other digital platforms to remove non-consensual explicit images within 48 hours of a user report. Set to begin enforcement in 2026, this law could be applied to unsolicited explicit content sent by AI chatbots.
The European Union (EU) is taking further action to hold creators of AI chatbots accountable. The AI Act, the revised Product Liability Directive (PLD), and the reemerging AI Liability Directive (AILD) show significant promise for holding AI companies accountable for the harms caused by “defective” AI products. The revised PLD includes medically recognized psychological harm as a basis for liability, which could be extended to sexual harassment. It also introduces a “presumption of defectiveness,” creating a new avenue for accountability by suggesting a chatbot's inappropriate behaviors are the result of defective design. Further, the AILD would introduce a “duty of care” for AI creators. This would encourage creators to monitor and test their AI products and proactively prevent sexual harassment.
The EU’s proposed legislative solutions are not specific to sexual violence and pedophilia. Thus, they miss several key components.
We need laws that explicitly prohibit the creation, distribution, and marketing of AI companions designed to impersonate minors, especially for sexual or suggestive uses. This must go beyond AI chatbot creators to include app store platforms, credit card companies, advertisement distributors, and other digital actors that enable these pedophilic bots. Common evasion tactics must also be accounted for. For example, platforms will describe their AI companion as an adult but present it as a minor. Similarly, suggestive or indirect terms like “little girl,” “loli,” “m1nor,” and “teen” should be monitored and discouraged.
Companion chatbots, especially those intended to be romantic or sexual partners, should also be prohibited from being marketed to minors. This should mirror existing legislation limiting the marketing of tobacco, with restrictions on advertising at concerts and sporting events and in online advertisements.
Should all other strategies fail, AI chatbots must be required to provide disclaimers and resources. Companions must regularly disclose that they are AI, with prominent and frequent reminders in chats and watermarks on all photos and videos. When conversations become explicit or suggestive, they must provide resources about sexual harassment, consent, and healthy relationships. Several states, including Utah, Colorado, and California, have passed or proposed legislation requiring disclaimers. California passed a law that requires chatbots to provide suicide prevention resources if a user expresses suicidal ideation. Similar laws must be passed regarding sexual violence.
Amid ongoing efforts to prevent states from setting guardrails like these, state governments should be empowered to protect their citizens, not thwarted by the federal government. States are innovative incubators leading legislation in this field. Federal and state legislation is necessary to prevent AI from sexually harassing our children.