Creepy Product Demos Are Just the Start of Something New

Laura MacCleery / May 22, 2024

OpenAI CEO Sam Altman attends the artificial intelligence (AI) Revolution Forum in Taipei on September 25, 2023. (Jameson Wu/Shutterstock)

AI will soon be everywhere. In the wake of last week's announcements by Google about AI integration at the top of search, the widely publicized product demo of GPT-4o by OpenAI, and that company's announcement of a pending deal with Apple, we can see how competitive pressures are pushing companies to aggressively launch AI products at consumers. Sure, it may be convenient, but it is also true that these AI companies need our data to improve their models even as they face diminishing returns.

The full implications of this pivot from the subscription model to product and internet integration are being lost in some of the coverage of the public relations debacle that was the OpenAI demo, which, because it was aided by special NVIDIA chips, was just a demo in the truest sense. Some of the decision to release GPT-4o, which focused on upgrades to consumer-facing capabilities, appears intended to distract us from ongoing technical challenges with accuracy, bias, and transparency: flaws that will require actual fixes for AI decision-making to matter, but that are difficult to solve given current designs.

Combine this with the public departures of almost the entire “safety” team at OpenAI over the past few months, culminating in the resignations of Ilya Sutskever and Jan Leike last week, and something appears rotten in the heart of Silicon Valley. While OpenAI's product demo in the fall of 2023 was led by CEO Sam Altman as a display of normalcy following his ouster and reinstatement, the cringey demo this past week featured CTO Mira Murati, whose presence was clearly intended to lend legitimacy to the otherwise sexist circus show.

The story that has emerged since includes multiple failed approaches to enlist the participation of actress Scarlett Johansson, making it clear that the plan was, all along, to piggyback on a male fantasy about AI. True to the plan, a giggly, malleable, and obsequious female voice with “vocal fry” was assigned to the new AI assistant. The point of the demo was that the model can now answer prompts like “tell us a bedtime story” with a quickness that mimics conversation, translate in real time, and analyze information from an image. But it was the dialed-up salaciousness of the interface that really stuck with viewers. “Stop it, you’re making me blush!” was its response to a basic compliment. “Wow, that’s quite the outfit you have on,” it interjected out of the blue, almost purring.

While Murati beamed approvingly, the two male product engineers repeatedly demanded of the model, “I want more emotion,” interrupted it (to show they could), and appeared delighted at its facility with basic algebra: at long last, a woman for whom no consent is needed and who will do what you ask, instantly. Altman’s super-subtle one-word post on X, “her,” also was not lost on anyone. He was, of course, referring to the 2013 Spike Jonze movie starring Joaquin Phoenix as a depressed and lonely character, with Johansson as the voice of his AI girlfriend. His post, alongside the choice to essentially steal Johansson’s voice, surely creates new litigation risk for OpenAI, and makes completely clear the deep disrespect with which OpenAI treats cultural creators.

While the movie is genius, the choice to make it central to a marketing push is, well, interesting. Others have noted the sweet love story at the center of the film. But it is also true that at one point, a human woman is recruited (by the AI) to permit her body to be used to enable the couple’s sexual fantasies, her voice silenced in exchange for Johansson’s. Although the scene is disturbing to watch, it is depicted as a tragedy when our hero can’t follow through, because the whole affair is too pathetic and weird, even for him.

In the world of the movie, people are so numbed by technology that the main character’s job is to fabricate messages to loved ones that evoke some kind of feeling. At the end, the AI relationship evaporates into hyperspace at precisely the moment it becomes painfully clear to him that none of it was real. It is telling that this emotionally barren, disconnected world is the evident mental touchpoint for the OpenAI CEO. The female voice, from Alexa to the “Sexy Sally” voice used on military planes for decades, has too long been a disembodied and depersonalized helper to the world of men who have desires and do things.

Watching OpenAI blur that line so completely, and with such a clumsy bid for attention, shows that, despite all the lip service about benefiting all of us, tech does not, and will not, respect lessons in power imbalances that others learned the hard way over the past two decades. Alongside deep gender gaps within the tech industry, it may also help explain the shrug that greets women and girls grappling with nonconsensual intimate images and their lack of recourse, now that AI enables this kind of casual violation by any teed-off seventh grader or domestic abuser. The tech bros are fully empowered to do, basically, whatever they want.

Last week’s demo also included a display of how the AI model can identify emotions. OpenAI, with its affordable subscriptions, was supposed to be superior to the grossly extractive social media companies. Yet although as recently as last fall Altman demurely refused to explore AI companionship as potentially problematic, one of the creepiest features possible for AI is now the core of his launch, without any new safeguards or boundaries.

So much for helping humanity: this makes his core business not better, but possibly much worse. We are invited to entrust these tools with our faces and data at a time when legal controls are lacking and studies show they can be more persuasive than humans, even when equipped only with rudimentary user demographics. It is entirely unknown what will happen when an AI system has unfettered access to emotional monitoring for everyone, everywhere, all at once, or how it could be used to manipulate us. Given that Apple recently filed a patent for earbuds that can collect neural and biometric data, the scenarios aren’t reassuring.

While OpenAI is walking back its embarrassing mistake, its assurances just mean its new interfaces will be highly tailored to satisfy our desires in a slightly tweaked way. Both the tools and the tech industry already have enormous emotional and practical power over us, and no discernible moral compass. For business reasons, AI is poised to maximize and exploit our emotional vulnerabilities and needs, including problematic hyper-gendered “mega ick” ones, to feed its unceasing hunger for data. This all makes “Her” look parochial: we have moved from damage inflicted as personal betrayal and loss to a privacy-obliterating dystopia designed to entice us to give up whatever personhood we have, all to grow the power of the AI models and the corporations behind them.

Authors

Laura MacCleery
Laura MacCleery is Senior Director for Policy and Advocacy at UnidosUS, the nation’s largest Latino civil rights and advocacy organization. She has deep expertise in regulatory design guided by public interest principles and has advocated for more than 20 years for changes that benefit human lives a...
