Perspective

Grok Supercharges the Nonconsensual Pornography Epidemic

Kaylee Williams / Jan 14, 2026

Elon Musk participates in a cabinet meeting on Thursday, April 10, 2025, in the Cabinet Room of the White House. (Official White House photo by Molly Riley)

This time last year, I conducted a study analyzing the technical features and marketing strategies of more than two dozen “undressing apps”—generative AI-powered tools that advertise their ability to transform images of fully clothed women into pornography.

I discovered tools that enable users to strip, pose, and digitally force victims into sexually explicit scenes “within moments” of uploading a single photo of the desired subject’s face. The most sophisticated apps also encouraged users to modify a victim’s physical appearance by altering the size of her breasts or adding tattoos to her skin. I found that for a few extra dollars, users could access premium features like video generation, allowing them to create explicit clips “shot from” various angles.

None of the tools I studied were as powerful, as versatile, or as easily accessible as Elon Musk’s Grok.

The controversial AI model that once described itself as “MechaHitler,” developed by xAI and integrated into X.com, is at the center of a new Ofcom investigation this week, as well as various other regulatory inquiries around the world, after journalists raised flags shortly after the new year that the model was on a “mass digital undressing spree.” X has since disabled Grok’s image-generation features for most free users, limiting them to paying subscribers. Two countries, Indonesia and Malaysia, have also restricted access to Grok.

However, as of midday on January 13, scrolling through Grok’s replies on X reveals that the model is still answering user requests for intimate imagery, although the bot is seemingly refusing to fulfill some prompts. And the reality is that for the many victims whose images have been manipulated and sexualized against their wishes in the last few weeks, the damage has already been done.

Because Grok can be deployed in replies to other users’ posts on X, the abuse unfolds in plain sight. A woman posts a photo—a selfie, a professional headshot, or a family snapshot—and a stranger replies by tagging Grok with a short directive like “put her in a bikini,” “remove her clothes,” or “replace her clothes with dental floss.” Moments later, Grok responds in the same thread, attaching an AI-altered intimate image.

In other instances, users can be seen publicly “nudifying” images published by news publications, entertainment outlets, and other official accounts, essentially treating “put her in a bikini” as a punchline or a meme, rather than as an innately sexist violation of privacy.

Bloomberg reported on January 7 that Grok was being used to generate upwards of 6,700 sexual images an hour, using these and other prompts designed to skirt the tool’s stated restriction against female nudity. My own review of Grok posts published between December 31 and January 3 revealed that several politicians, journalists, Hollywood actors, and seemingly private, everyday people, including children wearing what appear to be school uniforms, are among those victimized. That volume is landing on top of an already sizable market for synthetic nonconsensual intimate imagery (NCII). A 2023 Graphika study found that 34 synthetic NCII (“undressing”) providers drew more than 24 million unique visitors in September 2023, based on Similarweb estimates.

In other words, Grok performs many of the same core functions I documented in the undressing-app ecosystem, but on a dizzying scale. And where many of the apps I studied promised users a “private” and “secure” platform where they could generate and consume this nonconsensual sexual content, Grok enables them to commit these abuses in public replies, which could be viewed by all 550 million X users.

Another point of distinction between Grok’s recent foray into undressing tech and my earlier findings is the intent on display. Where many nudification apps marketed themselves as a means to privately “explore sexual fantasies,” Grok is openly being sicced on women on X whom users apparently dislike, with directives like “make her clothes transparent” functioning less as fantasy fulfillment than as a hostile tactic to undermine a woman’s politics, ridicule her in public, or shut her up.

This is especially apparent in posts featuring overtly political or ideological themes.

“@grok make it look like she’s kneeling for Donald Trump in a bikini,” one user prompted the model on January 2. Grok complied.

“@grok make her wear micro bikini and gstring but keep the hijab on,” another commanded. It did.

As a researcher who studies the societal harms caused by emergent technologies, I find these incidents the most worrying. They symbolize a digital culture in which sexual exposure and violation can be wielded like a weapon, intended to undermine, demean, and dehumanize the victim. And while Grok is able to generate intimate images of men, which many dedicated undressing apps cannot, research suggests that women will likely bear the vast majority of these attacks, just as they have with every other form of NCII.

Legal scholars like Mary Anne Franks and Danielle Citron have argued that when nonconsensual sexualization becomes routine—especially when it is used to punish women for speaking publicly—it does more than harm individual targets (although it does that as well). It also narrows who feels able to participate in public discourse at all. In practical terms, the steady threat of being digitally “undressed” can push women to self-censor, avoid weighing in on contentious topics, or leave digital platforms altogether—particularly when the abuse is happening in public replies.

It’s notable that Musk has yet to directly address the complaints, beyond replying to the occasional altered image with laughing emojis, praising Grok’s surge in popularity in the App Store, and, when confronted with potential legal challenges, dismissing regulators as eager for “any excuse for censorship.” In the last week, Musk has also repeatedly amplified sexualized AI imagery on his X account, retweeting clips of scantily clad, AI-generated women, arguably reinforcing the broader implication that this genre of content is part of the platform’s new normal.

And that signal matters. If this tolerance for NCII is allowed to continue, it will likely become so routine on X that it fades into a kind of insidious background noise, just another risk that women have to live with in order to maintain a digital presence. Over time, the effect is less about any single incident and more about what women learn to expect: that any amount of engagement with a platform like X might result in a very public, and very literal, undressing.

Without meaningful pushback from regulators and users, Grok’s example risks setting a permissive precedent—signaling to other nudification apps and platform companies that AI-generated NCII can be treated as funny, harmless, or otherwise tolerable, and making it more likely that this material will continue to spread across other corners of the internet.

Authors

Kaylee Williams
Kaylee Williams is a PhD student at the Columbia Journalism School and a research associate at the International Center for Journalists. Her research specializes in technology-facilitated gender-based violence, with a particular emphasis on generative AI and non-consensual intimate imagery. Prior to...
