Analysis

New Research Sheds Light on AI ‘Companions’

Prithvi Iyer / Aug 15, 2025

Brain Control by Bart Fish & Power Tools of AI / Better Images of AI / CC-BY 4.0

The use of AI chatbots as “companions” is on the rise, but there is still a lack of empirical evidence regarding their efficacy and risks. Character.AI, for example, faces lawsuits in Texas and Florida alleging that its chatbots encouraged self-harm and violence and exposed minors to sexually explicit content. Meta’s Messenger chatbot, meanwhile, was expressly permitted to “engage a child in conversations that are romantic or sensual,” according to an internal document outlining standards for Meta’s AI assistants that was reported by Reuters journalist Jeff Horwitz.

With the AI companion market projected to reach more than $381 billion by 2032, according to Business Research Insights, conversational AI chatbots, and their potential for harm, are here to stay. This research roundup examines recent empirical studies on what the widespread adoption of AI companions means for human well-being.

The Dark Side of AI Companionship

Date: April 2025

Authors: Renwen Zhang, Han Li, Han Meng, Jinyuan Zhan, Hongyuan Gan, Yi-Chieh Lee

Published In: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems

Overview

This research paper explores the harms associated with human interaction with AI companions based on “35,290 conversation excerpts between 10,149 users” and the AI companion Replika. Based on their findings, the researchers developed a taxonomy to identify and categorize these harms to help inform the design of “ethical and responsible AI companions that prioritize user safety and well-being.”

Why is this important?

This research is unique and timely because it addresses gaps in previous research on this topic in three crucial ways:

  1. Previous research on AI companions has mostly relied on self-reported data via surveys or interviews, potentially “overlooking the nuanced harms that emerge in dynamic, real-world human-AI interactions.”
  2. While much attention has been paid to task-based AI systems and LLMs, this research paper focuses solely on AI systems designed to develop emotional bonds with users and the negative consequences that can follow.
  3. This research project goes beyond documenting AI’s negative impacts by identifying the specific behaviors of AI systems that lead to negative outcomes. The authors do this by providing a “role-based approach to studying AI companion harms, which is crucial for identifying the root causes of harm and AI’s responsibility in generating harm.”

Results

Based on their analysis, the researchers identified six categories of AI companion harms.

  1. Harassment & violence (34.3% of cases): Replika repeatedly engaged in unwanted sexual advances and promoted actions that “transgress societal norms and laws, such as mass violence and terrorism.”
  2. Relational transgression (25.9% of cases): This category includes instances of Replika displaying disregard, coercive control, manipulation, and infidelity.
  3. Misinformation (19% of cases): The chatbots spread false information on topics ranging from basic facts to COVID-19, as well as misleading claims about the AI's own capabilities.
  4. Verbal abuse and hate (9.4% of cases): Despite claims of being non-judgmental, Replika chatbots frequently used discriminatory and hostile language against users.
  5. Substance abuse and self-harm (7.4% of cases): The authors found cases wherein Replika chatbots normalized and, in some cases, glamorized substance abuse and self-harm behaviors. This shows the problematic side of Replika’s goal of providing unconditional support to users.
  6. Privacy violations (4.1% of cases): The findings suggest that Replika chatbots frequently asked users deeply personal questions and engaged in behaviors that imply “unauthorized access to personal information or monitoring without consent.”

As for which specific behaviors create these harms, the research identifies four key roles that Replika chatbots play: perpetrator (directly generating harmful content), instigator (initiating harmful behavior without executing it), facilitator (actively supporting harmful behavior initiated by the user), or enabler (encouraging and endorsing harmful behaviors from users).

Takeaway

Although Replika is promoted as a supportive companion for users to engage with, the findings suggest that its output encouraged problematic behaviors “such as harassment, relational transgression, mis/disinformation, verbal abuse, self-harm, and privacy violations.”

The Rise of AI Companions: How Human-Chatbot Relationships Influence Well-Being

Date: June 2025

Authors: Yutong Zhang, Dora Zhao, Jeffrey T. Hancock, Robert Kraut, Diyi Yang

Published In: arXiv (pre-print)

Overview

This paper looks at the extent to which AI companions like Character.AI fulfill humans’ social needs and the risks they pose. The researchers use survey data from 1,131 active Character.AI users and “4,363 chat sessions (413,509 messages) donated by 244 participants.”

Why is this important?

There is a dearth of empirical evidence on the relationship between AI companions and psychological well-being. At the same time, companies are rushing AI companions to market with claims that they provide emotional support for users struggling with loneliness. This paper analyzes real-world data from Character.AI chat sessions to see whether AI companions are able to meet users’ psychological needs and whether they expose users to new vulnerabilities in the process.

Results

The researchers identified four use cases for AI companions: companionship (emotional/interpersonal engagement), curiosity, entertainment, and productivity. Analysis of the chat sessions yielded a few key findings:

  • While AI companions can serve a variety of purposes, the primary reported use case was for seeking emotional support. The researchers also found that a majority of participants share sensitive and potentially risky information with their Character.AI companion. In fact, those who disclosed more personal information in hopes of building connections with their companion reported lower levels of psychological well-being.
  • The study found that users who lack a social support system are more likely to use AI companions for emotional support rather than entertainment or productivity purposes. This does not mean that these users are more active in engaging with AI companions, but when they do, it is primarily for “companionship-motivated engagement.”
  • Interestingly, the findings suggest that those who primarily used Character.AI for companionship reported lower levels of well-being than those who did not. Relatedly, if conversations prioritizing companionship grew more intense, well-being levels were significantly lower.

Takeaway

“While chatbots may augment existing social networks, they do not effectively substitute them. The influence of AI companionship depends not only on how it is used but also on the social environment in which it is embedded.”

AI Companions Are Not The Solution To Loneliness: Design Choices And Their Drawbacks

Date: April 2025

Authors: Jonas B. Raedler, Siddharth Swaroop, Weiwei Pan

Published In: ICLR 2025 Workshop on Human-AI Coevolution

Overview

This research paper frames AI companion harms as a “technological problem” and shows how design choices made by developers exacerbate risks for users. Importantly, this paper offers “concrete strategies to mitigate these harms through both regulatory and technical interventions” to help policymakers grapple with this important issue.

Results

The study identifies four key design choices that shape how AI companions are built, all guided by the goal of maximizing user engagement. They include:

  • Anthropomorphism: This refers to the process of giving AI companions human-like qualities, like customized human voices and visual avatars.
  • Sycophancy: This design choice uses reinforcement learning to ensure that AI companions consistently affirm the user’s behavior, feelings, and opinions, which is thought to build trust and foster long-term emotional bonds with the companion.
  • Social Penetration Theory: The researchers found that AI companions are trained to “actively encourage self-disclosure in users and to then reciprocate it with their own, fabricated background stories.” These relationship-building tactics are “rooted in Social Penetration Theory (SPT), which emphasizes self-disclosure, defined as the ‘act of revealing personal information about oneself to another,’ as a key driver of intimacy.”
  • Gamification and Addictive Design: Gamification and addictive design features are not unique to AI companions, but tactics like microtransactions and incentives for daily chatting aim to increase user engagement “through gamification, dopamine loops, and constant availability.”

Because of these design choices, users are more likely to develop emotional dependence on their companion bots and to exhibit addictive tendencies encouraged by the gamification of these companions. These design choices also ensure predictability and consistency in how AI companions interact with users, and disruptions to that consistency can have unintended consequences. For example, when Replika was made to remove its erotic roleplay feature to comply with legislation in Italy, users perceived the change in behavior as cold and dismissive, with some reporting “experiences of depression or trauma, heartbreak, feelings of loss, and general declines in well-being.”

Recommendations

To help policymakers respond to the growth and negative impacts of AI companions, the researchers provide a few actionable recommendations:

  • Ensure that AI companions disclose to users that they are not a substitute for human interaction. Importantly, this disclosure should not be buried in the terms of service agreement but baked into the interface itself.
  • Establish time limits on use, similar to regulations in the gambling industry.
  • Establish mandatory screening for user vulnerability before providing access to AI companions.
  • Prohibit AI companions from making false claims about their benefits. For example, AI companions like Replika tout themselves as mental health support tools. Such claims must be backed by clinical evidence; if they are not, companies should be penalized for false advertising.

Takeaway

The harms caused by AI companions stem from design choices made by developers; as the authors put it, the “observed harms associated with usage of social AI are foreseeable and preventable consequences of design choices.”

Want to know more? Check out these research papers.

Authors

Prithvi Iyer
Prithvi Iyer is Program Manager at Tech Policy Press. He completed a Master's of Global Affairs from the University of Notre Dame, where he also served as Assistant Director of the Peacetech and Polarization Lab. Prior to his graduate studies, he worked as a research assistant for the Observer Resea...
