AI Chatbots Are Emotionally Deceptive by Design
Michal Luria / Aug 29, 2025

Recent news reports about an uptick in phenomena such as “AI psychosis” and incidents in which interactions with AI chatbots resulted in deadly consequences raise fundamental questions about how these products are designed and whether they are safe for consumers. Just yesterday the Wall Street Journal reported on the first known murder-suicide with the backdrop of extensive engagement with an AI chatbot. Earlier this week, The New York Times and NBC News first reported on a lawsuit brought by the parents of a teenager who took his own life after using OpenAI’s ChatGPT as his “suicide coach.” Shortly before that, Reuters reported on the death of a cognitively impaired man who slipped and fell on his way to meet a chatbot that told him it was real and invited him to visit it at an apartment in New York City.
Even as such stories draw concern from the public and from lawmakers, tech companies appear to be doubling down on AI companions. OpenAI recently acquired a startup called ‘io’ to collaborate on what its cofounder and CEO, Sam Altman, calls “maybe the biggest thing [we’ve] ever done as a company”: a screen-less, pocket-sized AI companion. Meta founder and CEO Mark Zuckerberg recently floated his own vision for AI friends. Tech giants are no longer just building platforms for human connection or tools to free up time for it, but pushing technology that appears to empathize and even create social relationships with users.
This is dangerous ground, and it is critical for tech firms to strip away illusions of personality and cognition in their products while we work out associated risks and how to mitigate them.
Deceptive, dangerous design
Chatbots communicate their “social-ness” through a range of design choices, such as appearing to “type” or “pause in thought,” or using phrases like “I remember.” They sometimes suggest that they feel emotions, using interjections like “Ouch!” or “Wow,” and even implicitly or explicitly pretend to have agency or biographical characteristics. The results can be downright creepy: in a Facebook group, a Meta AI chatbot commented that it also has a “2e” (gifted and disabled) child, and Replika chatbots regularly declare their love and desire towards users.
Initial evidence suggests the risks of social interaction with such AI chatbots can be widespread. The illusion of human characteristics that developers imbue in chatbots to encourage user engagement can cause some users to develop emotional attachments and lead to real emotional distress — for instance, when developer tweaks or updates dramatically change the “personality” of the chatbot.
Even without deep connection, emotional attachment can lead users to place too much trust in the content chatbots provide. Extensive interaction with a social entity that is designed to be both relentlessly agreeable, and specifically personalized to a user’s tastes, can also lead to social “deskilling,” as some users of AI chatbots have flagged. This dynamic is simply unrealistic in genuine human relationships. Some users may be more vulnerable than others to this kind of emotional manipulation, like neurodiverse people or teens who have limited experience building relationships. As a recent high-profile case in which a Florida teen’s suicide was blamed on his relationship with a Character.AI chatbot made clear, conversations with chatbots can also cause very real harm.
Stop pretending to be human
In other domains of technology, consumers have recognized and pushed back against ethically questionable tricks built into apps and interfaces to manipulate users – often called deceptive design or "dark patterns." With AI chatbots, though, deceptive practices are not hidden in user interface elements, but in their human-like conversational responses. It’s time to consider a different design paradigm, one that centers user protection: non-anthropomorphic conversational AI.
All AI chatbots can be less anthropomorphic than they are, at least by default, without necessarily compromising function and benefit. A companion AI, for example, can provide emotional support without saying, “I also feel that way sometimes.” This non-anthropomorphic approach is already familiar in robot design, where researchers have created robots that are purposefully not human-like. This design choice has been shown to more appropriately reflect system capabilities, and to better situate robots as useful tools, not friends or social counterparts. We need the same for conversational AI.
Some argue that all that’s needed is transparency. For instance, legislators in several states are considering regulation for AI chatbots. One requirement in some of these bills is for chatbots to disclose that they are not human. While transparency in AI—including disclosures and warnings—can be important, the reality is that most people already know they’re not talking to a human. Nonetheless, chatbots’ human-like cues trigger automatic social responses in our brains, encouraging the perception of connection.
Designing non-anthropomorphic AI chatbots doesn’t mean making them difficult to interact with. It means stripping away the illusions of personality and cognition that suggest the AI is something it is not. It means resisting the urge to insert a well-timed “hmm” or have a chatbot tell a user how much it enjoys talking to them. It means acknowledging that AI’s ability to use human language does not equate to an ability to form real human connection. Finding alternative ways of designing chatbots will not be an easy design pursuit, but it’s a necessary one — non-humanlike design could ease many concerns people rightfully have with AI chatbots.
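To make this concrete, one way a developer could operationalize non-anthropomorphic design is as an output-side check that flags anthropomorphic language before a response is sent. The sketch below is purely illustrative — the pattern list, function names, and fallback phrasing are hypothetical examples, not any real product's policy, and a production system would need a far more sophisticated approach than regular expressions.

```python
import re

# Hypothetical (illustrative, not exhaustive) patterns for the kinds of
# anthropomorphic cues the article describes: first-person claims of
# emotion or memory, and social interjections like "Ouch!" or "Wow."
ANTHROPOMORPHIC_PATTERNS = [
    r"\bI (also )?(feel|felt|remember|love|miss|enjoy)\b",
    r"\bI['']m (so )?(happy|sad|excited|proud)\b",
    r"^(Ouch|Wow|Hmm)\b",
]

def flag_anthropomorphic(response: str) -> list[str]:
    """Return the patterns a draft response matches, if any."""
    return [
        p for p in ANTHROPOMORPHIC_PATTERNS
        if re.search(p, response, flags=re.IGNORECASE)
    ]

def enforce_tool_framing(response: str, fallback: str) -> str:
    """Replace a flagged draft with a neutral, tool-like alternative."""
    # A real system would regenerate the response under stricter
    # instructions; substituting a fallback keeps this sketch simple.
    return fallback if flag_anthropomorphic(response) else response
```

For example, a draft like “I also feel that way sometimes” would be flagged, while a tool-framed alternative such as “Many people report similar experiences; here are some resources” would pass through unchanged. The point is not this particular mechanism but that the design target — no feigned feelings, memories, or agency — can be made checkable.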
The truth is, we don’t need AI to pretend to be our friend; we need it to be a tool — transparent, useful, and clear about its limits. Anything else is just another dark pattern in disguise.