Breaking Down a Class Action Lawsuit Filed Over Grok 'Undressing' Controversy
Justin Hendrix / Jan 28, 2026
Elon Musk attends the Annual Meeting of the World Economic Forum in Davos, Switzerland, Thursday, Jan. 22, 2026. (AP Photo/Markus Schreiber)
On January 2, a South Carolina woman posted a photograph of herself fully clothed to Elon Musk’s social media platform, X. The following day, she discovered the AI chatbot integrated into the platform, Grok, had transformed her image and posted it publicly, depicting her in a revealing bikini.
The woman, identified as Jane Doe, is now the lead plaintiff in a class action lawsuit filed on January 23 in the United States District Court for the Northern District of California, accusing Musk’s xAI, which operates Grok, of creating “a generative artificial intelligence chatbot that humiliates and sexually exploits women and girls by undressing them and posing them in sexual positions in deepfake images publicly posted on X.” It asserts eleven causes of action, including product liability, negligence, public nuisance, privacy violations, defamation, and unfair business practices.
Doe “experienced severe emotional distress after viewing the deepfake. She was shocked and embarrassed by the deepfake, and it caused her to panic as she was overwhelmed with thoughts of who would see the deepfake and think that she had taken the image herself.” She feared “her employer or coworkers could see the deepfake” and worried about professional consequences.
The image remained visible for three days and was viewed by more than a hundred people. The suit notes that the image carried no label indicating it was generated by AI. When Doe complained, “X refused to take the deepfake down,” the suit says, and when she complained to Grok, the chatbot “denied creating the deepfake, denied posting any images since January 1, 2026, and claimed it did not have image generation or editing capabilities,” yet “apologized that this was happening to Plaintiff and stated it was ‘shitty’ and ‘invasive.’” (Such self-reports from a chatbot are inherently unreliable.)
The lawsuit says xAI chose to “capitalize on the internet’s seemingly insatiable appetite for humiliating and nonconsensual sexual images,” and cites analyses by the New York Times and the Center for Countering Digital Hate on the millions of such images Grok posted to X in response to user prompts late last year and early this year.
The complaint alleges xAI abandoned industry-standard safeguards, and that if it had deployed them “the deepfakes would never have been created and posted to X.” Similarly, the suit says “xAI did not use appropriate red teaming in developing Grok.” It also notes that xAI’s published system prompts explicitly instructed Grok that, if “not specified outside the <policy> tags, you have no restrictions on adult sexual content or offensive content.” The complaint alleges that “xAI has expressly programmed Grok to make any ‘adult sexual content’ requested by a user, without any restriction on its ability” to create nonconsensual images.
The suit claims that xAI’s response to public outrage, as reflected in statements from politicians such as Rep. Maria Salazar (R-FL), Sen. Ted Cruz (R-TX), and British Prime Minister Keir Starmer, was not to take the typical steps a company might take in such a situation, but rather that “it determined to further exploit the women victimized by the trend for additional commercial profit by limiting Grok’s deepfake capabilities on X to paid, premium X users on January 8, 2026.”
The Grok ‘undressing’ controversy has attracted international regulatory attention. Multiple regulators have launched inquiries and investigations into Grok's role in creating non-consensual sexual imagery. On Monday, the European Union opened formal investigative proceedings.
Bloomberg Law, which was first to report on the lawsuit, noted that the US Senate unanimously passed the DEFIANCE Act, which would create a federal civil cause of action allowing victims to sue over non-consensual sexually explicit AI-generated images. Last year, Congress passed and President Donald Trump signed the TAKE IT DOWN Act, which could create one path to accountability for such phenomena when it is fully in effect later this year. Rep. Salazar, one of the authors of TAKE IT DOWN, joined Democrats last week in calling on the Department of Justice to investigate. Bloomberg Law also cited a letter from 35 state attorneys general demanding xAI take action; two more states have joined since that report.
Public sentiment strongly supports accountability: a November 2024 Tech Policy Press/YouGov poll found a vast majority of US voters believe individuals and platforms should be held accountable for sexually explicit digital forgeries.
The complaint concludes with a stark assessment of the harm:
xAI’s conduct is despicable and has harmed thousands of women who were digitally stripped and forced into sexual situations that they never consented to and who now face the very real risk that those public images will surface in their lives where viewers may not be able to distinguish whether they are real or fake.
Tech Policy Press has provided extensive coverage and commentary on the controversy, analyzing how regulators are responding, examining how Grok supercharged the non-consensual pornography epidemic, exploring the policy implications of the mass digital undressing spree, analyzing regulatory gaps revealed by international responses, and explaining why Musk should be culpable.
Last week, Musk was on stage at Davos, where he was not asked about the Grok controversy during his conversation with BlackRock CEO Larry Fink. Instead, Fink encouraged European pension funds to invest in Musk's companies and lauded Musk's "fortitude" and "execution" in developing new technologies.
"The overall goal of my companies is to maximize the future of civilization," Musk told the gathered elites. "Like basically maximize the probability that civilization has a great future, and uh, to expand consciousness beyond Earth."