OpenAI Closing Its One-Stop AI Slop Shop, Sora, Is a Cautionary Tale
Sarah Barrington, Hany Farid / Mar 31, 2026
The Sora 2 introduction is displayed on a mobile phone with the OpenAI company's branding seen in the background, in this photo illustration in Brussels, Belgium, on October 19, 2025. (Photo by Jonathan Raa/NurPhoto via AP)
When it first launched in November of 2025, OpenAI's Sora was in high demand, thanks in part to its clever invitation-only rollout and the groundbreaking technology that created highly realistic videos. A unique feature allowed anyone to share their face and voice so that others could deepfake them, making them say and do just about anything.
At first, Sora looked to some like a legitimate threat to traditional social media. But with the announcement this week that OpenAI was shuttering the app just six months after its launch, the party is over.
As digital-forensic researchers who deal every day with the fallout of AI slop, we were relieved by the news that OpenAI is shuttering the Sora app. The app was a one-stop AI slop shop that combined the addictiveness of social media with the power of generative AI to make highly compelling and viral fake videos.
Given its hyper-realism and ease of use, it was not surprising that Sora quickly became responsible for swarms of war-zone disinformation and a tidal wave of AI slop that overwhelmed our inboxes with requests from reporters, fact-checkers, law enforcement, and everyday citizens.
Although Sora's downfall may be a welcome relief for the two of us, it is a cautionary tale for the technology sector and its multitrillion-dollar AI behemoths. What happened, and what does it mean for the broader sector?
Why Sora failed
Like a bad Hollywood pitch, OpenAI made a big bet that "AI meets social media" would be a big (profitable) hit. It wasn't. Why?
- Unlike traditional social media, which puts the burden and cost of content generation on its users, the cost of generating Sora videos fell exclusively on OpenAI, making the business model shaky from the start. It is estimated that Sora video generation alone was costing OpenAI $15 million per day, with no clear path to profitability.
- Although OpenAI had to throttle user sign-ups in the opening months, the novelty and interest rapidly wore off, with user engagement dropping by millions month after month. This is consistent with our own experiences: after teasing each other with a handful of gimmicky videos, our interest quickly waned.
- Shortly after its release, hardly a day went by when we didn't read about harmful content produced on Sora, from a fabricated Gaza attack to an Epstein Island-themed children's toy. This bad press and potential liability was surely an unwelcome headache for OpenAI. Video generation is an inherently risky business: even with user-generated-content liability protections, generating millions of videos each day exposed OpenAI to enormous liability, along with a range of devastating downsides for the public, from child abuse material to financial fraud.
At the same time, traditional social media is being flooded with AI slop, ranging from the silly (Shrimp Jesus) to the harmful: child sexual abuse material and non-consensual intimate imagery (NCII), dangerous wartime disinformation, and fraud.
The downfall of Sora—and the lackluster reception of other AI-slop feeds like Meta's Vibes—should be a wake-up call to social media firms. We have now learned that users have their limits and will eventually vote with their feet. Sora's demise proves that a pure AI-slop feed is not only bad for the public, but bad for the bottom line.
A learning moment
It is time—in fact, it is long overdue—for Big Tech, from the new AI giants to the traditional social media giants, to get their houses in order. Based on our combined decades of academic research and practice in the field, there are effective and practical steps that can be taken today to help them do so.
Every AI company should:
- Add content credentials (C2PA) to every piece of AI-generated content. These cryptographically-signed labels provide information to the consumer or hosting service as to the origin of a piece of content.
- Add invisible watermarks to all of their content as an added layer of protection to make downstream identification easier (e.g., Google's SynthID).
- Strengthen guardrails on user-specified prompts and on the resulting content to prevent the generation of harmful and illegal content.
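The mechanics of content credentials can be hard to picture in the abstract. The Python sketch below is our own simplified illustration of the core idea, not the C2PA specification itself: the generator binds a provenance claim to a hash of the content and signs it, so any later tampering invalidates the label. (Real C2PA manifests use X.509 certificate chains and public-key signatures; this toy uses a shared HMAC key, and all names in it are hypothetical.)

```python
import hashlib
import hmac
import json

# Hypothetical signing key for this sketch only; C2PA uses
# public-key certificates, not a shared secret.
SECRET_KEY = b"generator-signing-key"

def attach_credential(content: bytes, generator: str) -> dict:
    """Bind a provenance claim to the content's hash and sign it."""
    claim = {
        "generator": generator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_credential(content: bytes, credential: dict) -> bool:
    """A hosting platform re-derives the hash and checks the signature."""
    claim = credential["claim"]
    if claim["content_sha256"] != hashlib.sha256(content).hexdigest():
        return False  # content was altered after signing
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

video = b"...AI-generated video bytes..."
cred = attach_credential(video, "example-video-model")
assert verify_credential(video, cred)             # intact: label can be displayed
assert not verify_credential(video + b"x", cred)  # tampered: credential fails
```

The design point is that the credential travels with the content and is checkable by anyone downstream, which is what lets a social platform reliably label AI-generated media without trusting the uploader.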
And, every social media company should:
- Read content credentials and watermarks and properly label all AI-generated content on their platform, in the same way food at the grocery store carries a nutrition label. These credentials don't put a value judgment on a piece of content; they simply inform users what they are consuming.
- Give users the ability to opt out of AI slop, in the same way users can control other aspects of their social media feeds, such as muting certain accounts or content themes.
- Design their services while remembering that the goal of social media was to facilitate and foster human connections, not simply monetize every morsel of human attention.
Finally, US policymakers can help matters by:
- Mandating the generation and distribution guardrails enumerated above;
- Clarifying that the Section 230 shield does not protect AI companies from the harms created by their products (as opposed to user-generated content, which is protected by Section 230); and
- Passing federal regulation—as Denmark recently did—that gives consumers rights to their likeness, as a way to combat fraud, disinformation, and NCII.
If traditional social media firms—and the larger technology sector—ignore Sora's demise and the public's growing exhaustion with AI slop, they may soon repeat OpenAI's costly miscalculation.