AI Chatbots Are Not Therapists: Reducing Harm Requires Regulation
Virginia Dignum, Petter Ericson, Jason Tucker / Sep 17, 2025
Isolation by Kathryn Conrad & Digit / Better Images of AI / CC BY 4.0
The urgency of addressing the harms of AI chatbots was underscored in the recent US Senate Judiciary Committee hearing, “Examining the Harm of AI Chatbots.” During the hearing, parents recounted harrowing stories of their children's AI chatbot-influenced mental health emergencies, including self-harm and death, harms that were never inevitable. Instead, they were predictable consequences of a lax regulatory environment and a widespread culture of irresponsibility in the tech sector broadly, and Silicon Valley specifically. The 10th of September marks World Suicide Prevention Day, a stark reminder of why action is urgently needed on AI chatbots, as their intentional or unintentional misuse in mental health spaces proliferates.
AI chatbots are often deliberately designed to mimic empathy, fluency, and constant availability. These are design choices that encourage people to confide in and bond with them, quite unlike other technologies such as social media, smartphones, or toys like the Tamagotchi. Interacting with a chatbot is not about using a tool or consuming content but about engaging in a dialogue that feels personal and responsive. People feel comfortable confiding in chatbots because these systems are designed to simulate dialogue and empathy without judgment, a dynamic known as the ELIZA effect, first observed in the 1960s with the original AI chatbot, ELIZA.
Unlike a therapist or even a friend, an AI chatbot does not (appear to) judge, interrupt, or impose social expectations. This creates an illusion of being understood, one that leads us to project intentionality, empathy, or even trust onto systems that are in reality just statistical engines predicting words. Such misplaced trust can negatively impact our mental health. The impacts fall hardest on the most vulnerable members of society. However, it is important to remember that we are all vulnerable at different stages of our lives, and that the impacts of misplaced trust are not exclusive to specific individuals or groups.
What distinguishes current chatbots most clearly from earlier technologies is not only the dialogue-like interface but the way this interaction blurs the line between tool and partner. Because AI chatbots can respond to any topic, mirror emotional states, and appear infinitely available, they feed our human tendency to anthropomorphize. As a result, people may form bonds that go beyond entertainment or habit, and these interactions are already shaping decisions, beliefs, and even the emotional well-being of many.
As trust in AI chatbots increases, so does function creep. Once users are confident in a system, they are more likely to adopt it in other areas of their lives. Use can thus creep from low- to high-risk personal issues, or from personal to professional settings (or vice versa). Yet the problem lies not only in how people use these systems or in which setting, but in how these systems are built to stimulate attachment without accountability. With regulation still lacking, poorly enforced, skirted around, or outright ignored, it often appears that “anything goes.” Systems are regularly deployed in sensitive areas such as mental health without adequate safeguards, or marketed as “universal services” that purport to be adaptable to virtually any use. For example, OpenAI CEO Sam Altman has claimed that ChatGPT is like having “a team of PhD level experts in your pocket,” language that implicitly or explicitly extends to therapeutic uses.
With much of the media, industry, and government globally agreeing that AI is the future and framing its adoption as not only essential but also urgent, it should come as no surprise that people adopt AI chatbots for an increasing range of uses, including mental health support. Responsibility for this situation lies first and foremost with the makers and regulators, not with users who inevitably respond to technologies designed to elicit trust and marketed as solutions to their problems. It is neither fair nor effective to place the burden on individuals to resist systems engineered to encourage reliance. It is time to place the onus where the responsibility must lie: on developers to build and deploy with accountability, and on regulators to enforce clear, binding rules on what these systems can and cannot be used for. Without such guardrails, commercial incentives will continue to outweigh societal well-being.
Addressing this does not need to be complicated. Claims that it is are often little more than a smokescreen, motivated by a need to deflect attention and shift responsibility away from poor design choices and the absence of regulation. Concrete measures already exist. For instance, Dr. Michal Luria, a Research Fellow at the Center for Democracy and Technology, recently highlighted two of these: non-anthropomorphic AI chatbot design and transparency. While agreeing with Dr. Luria, we believe non-anthropomorphic AI chatbot design must be blended with the regulation of interactions with AI applications, including:
- Time limits: stop conversations after a set period.
- No memory: erase past chats to prevent emotional continuity.
- Disclaimers: issue regular reminders: “This chatbot is… not a therapist / not a person / not your friend / makes errors.”
- Topic restrictions: block or redirect risky content (e.g., self-harm, committing violent acts).
- Daily caps: limit the number of interactions per day.
- No emotional mirroring: avoid simulating empathy or care.
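None of these constraints is technically demanding to enforce. As a purely illustrative sketch, the Python snippet below shows how an application-layer wrapper around a chatbot backend could implement several of them. The `call_model()` function is a hypothetical stand-in for whatever model a provider uses, not a real API, and the keyword-based topic filter is deliberately crude; this is not a production safety system.

```python
import time

# Hypothetical stand-in for any chatbot backend; not a real API.
def call_model(prompt: str) -> str:
    return f"[model reply to: {prompt!r}]"

DISCLAIMER = "Reminder: this chatbot is not a therapist, not a person, and it makes errors."
SESSION_LIMIT_SECONDS = 20 * 60          # time limit: stop conversations after a set period
MESSAGE_CAP = 30                         # daily cap (simplified here to a per-session count)
DISCLAIMER_EVERY_N_TURNS = 5             # disclaimers: issue regular reminders
BLOCKED_TOPICS = ("self-harm", "suicide", "violence")  # topic restrictions (crude keyword check)


class GuardedChatSession:
    """Wraps a chatbot backend with the interaction limits listed above."""

    def __init__(self) -> None:
        self.started_at = time.monotonic()
        self.turns = 0  # a real system would track this per user, per calendar day

    def respond(self, user_message: str) -> str:
        # Time limit: end the conversation after a set period.
        if time.monotonic() - self.started_at > SESSION_LIMIT_SECONDS:
            return "This session has ended. Please take a break."

        # Cap: limit the number of interactions.
        if self.turns >= MESSAGE_CAP:
            return "You have reached the message limit for today."
        self.turns += 1

        # Topic restrictions: block or redirect risky content.
        if any(topic in user_message.lower() for topic in BLOCKED_TOPICS):
            return ("I can't help with that. If you are in crisis, please contact "
                    "local emergency services or a crisis hotline.")

        # No memory: each turn is sent without prior conversation history,
        # so there is no emotional continuity across turns or sessions.
        reply = call_model(user_message)

        # Disclaimers: issue regular reminders.
        if self.turns % DISCLAIMER_EVERY_N_TURNS == 0:
            reply = f"{reply}\n\n{DISCLAIMER}"
        return reply


if __name__ == "__main__":
    session = GuardedChatSession()
    print(session.respond("I had a rough day at work."))
```

(“No emotional mirroring” is absent from the sketch because it concerns how the underlying model is trained and instructed, not the surrounding application logic.)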
However, while these simple rules may (temporarily) reduce the risks of dependency, protect privacy, and make clear that AI chatbots are tools rather than partners, they are only partial solutions: users may feel frustrated, find workarounds, or turn to unregulated providers without such safeguards. Again, we stress that the core issue is not user behavior but design and deployment responsibility.
Recent calls to resist such systems, such as those advanced by Bengio and Elmoznino, who argue that “until there is a better grasp on these problems, humans have the power to avoid putting themselves in such dangerous situations in the first place, opting instead to build AI systems that both seem and function more like useful tools and less like conscious agents,” may appear prudent. But they too easily shift the burden onto users to resist being misled, without addressing how many incentives, from marketing to user interface design, push people toward seeing these systems as more than tools.
Of course, literacy and freedom of choice remain vital. But unless regulation specifies what sorts of systems may not be built or marketed (especially those deliberately designed to mimic consciousness or agency), the responsibility will continue to fall unfairly on individuals. We need binding rules about what can and cannot be built, marketed, or deployed, not just exhortations to make “better choices,” whether as a user or as a builder of AI applications. Current responses remain insufficient. Legislative efforts and lawsuits, though critically important for accountability and safety, often target AI harms reactively rather than proactively. Systematic oversight embedded in AI development, similar to the Ethical, Legal, and Social Implications (ELSI) model in genomics, is essential to move beyond reactive crisis management.
The focus needs to be on the accountability of the makers, not on the behavior of the users. This can only be addressed through regulation. Anthropomorphizing AI chatbots has proven to be a highly effective approach for tech companies; it has made these tools wildly popular, and there is little incentive for makers to change course now, since doing so would reduce their appeal. As such, responsibility also lies with states and regulatory authorities to address these issues. Regulation is crucial if individuals and societies are to truly benefit from AI chatbots. Here, AI can be seen as just another normal technology, as Arvind Narayanan and Sayash Kapoor have argued. When new technologies or products enter the market, they are subject to some form of regulation. AI chatbots should be no different.