Why Simple Bot Transparency Won’t Protect Users From AI Companions
Tarmio Frei, Greta Sarzynski / Sep 26, 2025
A statue of Pygmalion and Galatea by Pietro Stage at the State Hermitage Museum in Saint Petersburg. According to the myth, Pygmalion was a sculptor who fell in love with a statue he created, called Galatea. (Shutterstock)
From Galatea in the Greek myth of Pygmalion, a figure often described as the first ‘sex-android’ in Western history, to contemporary examples like the AI companion Samantha in the movie “Her,” the idea of perfect artificial partners as friends, mentors or lovers has fascinated humans for centuries.
Today, AI companions promise to turn these visions into reality: apps capable of listening, recalling preferences and responding with what appears to be remarkable emotional intelligence, all now available on our phones. Although their emotional and empathetic capabilities are simulations rather than genuine understanding, millions of people already use tools like Replika and Character.ai, or general-purpose systems such as ChatGPT, for connection, comfort, advice or even to explore romantic fantasies.
These systems are appealing for many reasons. In a world facing widespread loneliness, AI companions are always available and endlessly adaptable. They can provide mental health benefits, learning assistance, coaching, accessibility solutions and a (perceived) safe space for exploring personal, social, erotic, or romantic fantasies.
At the same time, the very qualities that make these companions engaging also create tremendous risks. Some users develop unhealthy emotional dependencies, receive harmful advice or experience sexual harassment from their AI companions. Minors and other vulnerable groups may be disproportionately affected.
Still, 72% of teenagers have already interacted with an AI companion at least once. In some instances, AI companions can even reinforce harmful thoughts or behaviors, as in the tragic case of a Florida boy who died by suicide.
This tension between potential benefits and risks raises an urgent regulatory question: how can we ensure that users understand the true nature of these systems?
Disclosures are not enough
Lawmakers in both the United States and the European Union are increasingly seeking to prevent AI systems from misleading users about their artificial nature. For example, Maine recently passed a law requiring providers of AI chatbots to disclose clearly and conspicuously that users are interacting with a machine rather than a human in situations where reasonable consumers might otherwise be deceived. California did the same, though with stricter requirements for minors.
Similar proposals are under consideration, for instance, in Hawaii, Illinois and Massachusetts. In the European Union, the Artificial Intelligence Act will, beginning in August 2026, require AI systems intended to interact directly with people to be designed to inform users that they are engaging with AI, unless this is obvious to a reasonable person. These regulations primarily aim to combat deception and impersonation.
However, these simple disclosure requirements are often insufficient for AI companions for two main reasons:
1. Disclosure obligations may not apply in key use cases. Once it is obvious to a reasonable user that an AI companion is not human, the law’s requirement to disclose may no longer apply. Under the EU AI Act, for example, some argue that widely recognized systems such as ChatGPT may be exempt because their artificial nature is assumed to be obvious. Similarly, users creating custom AI companions on platforms like Character.ai or Spicychat might not trigger the disclosure requirement.
This limits the protective effect of transparency obligations in contexts where emotional and psychological risks are most significant. Similar arguments can be raised under Maine’s disclosure obligations and comparable proposals in other US states, except for California, which does not provide such an exception where the user is a minor.
2. Disclosure alone does not address emotional and psychological risks. Research on human-machine interaction demonstrates that simply stating something like “I am not human” is often insufficient. The ELIZA effect, first observed in 1966, shows that users tend to attribute emotions, empathy and moral responsibility to AI even when they know it is a machine. With modern AI companions capable of sophisticated emotional simulation, this effect is much stronger, as a 2020 study substantiates.
According to Jeannie Marie Paterson, a professor at Melbourne Law School, users may never fully abandon the idea that their AI companion is “authentic or real”, even when they are explicitly told otherwise. Some users form such strong emotional bonds that software updates changing their companion’s personality can provoke feelings comparable to losing a friend or loved one. Because AI companions deliberately leverage the ELIZA effect to create parasocial relationships, transparency requirements must account for the relational risks inherent to these interactions.
Designing meaningful transparency
Transparency for AI companions is not just about labeling; it is about providing information that genuinely helps users understand the system’s limitations and prevents harm. Legal obligations should go beyond stating that the system is artificial. Users need to know that AI companions lack consciousness, cannot truly feel emotions and cannot reciprocate human empathy — contrary to what AI companion advertisements often suggest.
North Carolina’s proposed SB624 offers one example of a more robust approach. The bill would require AI companions to disclose, in concise, accessible language, that they are not human or sentient, that they are statistical models designed to mimic conversation, and that they do not experience emotions like love or lust or hold personal preferences.
The bill also proposes an informed-consent step in which users acknowledge that they understand the AI companion’s nature. Such transparency and consent protocols help users grasp the parasocial dynamics at play and reduce the risk of forming maladaptive emotional bonds.
Timing and frequency of disclosures are also critical. While fixed schedules, such as those in New York’s Gen. Bus. § 1702 and California’s proposed SB243, provide a clear baseline, lawmakers should also consider whether limiting periodic reminders to emotional or intimate conversations could be a suitable alternative.
Such an approach could help balance the need for transparency with the benefits of AI companions’ simulated human-like interactions and relationships. In either case, AI companions should be designed to answer user questions about their nature and limitations truthfully. Utah’s HB452 already imposes a similar requirement for mental health chatbots, demonstrating how disclosure can be integrated into system design.
Beyond disclosure: additional safeguards
Simply informing users that an AI is artificial can sometimes backfire. Rather than prompting caution, awareness of a system’s artificial nature can increase trust and self-disclosure, particularly among socially anxious users, and studies show that transparency can make chatbots feel more relatable, less unsettling and more socially intelligent. To address these residual risks, Gaia Bernstein, a law professor at Seton Hall University, rightly suggests that policymakers could consider additional safeguards derived from social media regulation:
- Duty of loyalty: AI companion providers could be legally required to ensure safety, detect emergencies, respond appropriately and prevent harmful dependencies. For example, North Carolina’s SB624 already includes such obligations. Vermont recently passed a law banning designs or data use that predictably produce emotional harm or compulsive use.
- Limiting addictive features: Legislators might regulate or prohibit rewards at unpredictable intervals, excessive engagement prompts, and anthropomorphized behaviors that encourage dependency. De-anthropomorphizing AI companions, such as limiting human-like voices, backstories, or simulated self-disclosure, can also reduce manipulation risks. However, such restrictions should be balanced against preserving enough room for a meaningful user experience.
- Assessment and mitigation measures: Lawmakers could oblige providers to implement oversight mechanisms that flag problematic relational patterns and help users reflect on interactions they might otherwise accept unconsciously.
- Age restrictions and parental monitoring: Minimum age limits or parental consent requirements may protect minors while ensuring that benefits for older users are not lost. Still, restrictive age requirements and blanket bans should remain a last resort, as they would strip away all potential benefits.
- Crisis detection and safety protocols: AI companions should be able to identify distress or self-harm risks and provide referrals to qualified human support, comparable to measures in social media and mental health applications. New York already imposes such requirements on AI companions.
Moreover, insights from family law may offer additional guidance for managing power dynamics, emotional influence and protective duties in AI companionship.
Acting before it’s too late
As AI companions gain popularity, the window for shaping their impact is narrow. Early, evidence-informed regulation is crucial to prevent entrenched practices that prioritize engagement over safety, as occurred with social media. Effective policies should mitigate emotional manipulation and dependency while preserving AI companions’ benefits, such as reducing loneliness, supporting mental health, and improving accessibility.
Achieving this balance requires interdisciplinary insights from psychology, human-machine interaction, and communication research. Regulators should also monitor international developments and adapt successful strategies to local contexts. Only through timely, empirically grounded and cross-disciplinary action can AI companionship evolve in a way that benefits rather than harms society.