AI Systems that Perform Intimacy Need New Governance Frameworks
Helen A. Hayes / Jan 21, 2026
Helen A. Hayes is the Associate Director (Policy) of the Center for Media, Technology, and Democracy at McGill University.
Technologies are often governed as vessels that hold and transmit information, content, and data. This way of thinking has shaped decades of digital regulation. But, in the last three years, something important has shifted: for the first time, we are governing systems that don’t simply deliver information, but actively perform relationships.
This shift has particular consequences for how we think about the safety of AI products for minors. For many young people, AI chatbots are no longer peripheral tools in the digital ecosystem; they are marketed as tutors, assistants, and companions that sound calm, certain, and reassuring, and are endlessly available when few others are.
This shift — from chatbots as information systems to chatbots as relational systems — is not a marginal technical change. It creates a governance rupture that existing frameworks are struggling to name, let alone address. Until AI chatbots are governed as systems that perform relationships, not just process information, we will continue to regulate peripherally while the most consequential impacts remain structurally unaddressed.
Over the past six months, I have been co-leading a national citizens' assembly called Gen(Z)AI — a partnership between Simon Fraser University’s Dialogue on Technology Project and the Center for Media, Technology, and Democracy — that brings together young people aged 17 to 23 to deliberate on AI governance. Initial discussions in this assembly have shown that young people view chatbots as systems that shape attention, trust, dependence, and cognition over time, often in ways that are subtle, cumulative, and difficult to see while they are happening.
For the young people involved in Gen(Z)AI, chatbots surfaced three specific risks. The first was that emotionally responsive systems are displacing human connection, gradually relocating comfort, reassurance, and intimacy away from people and toward machines designed to respond with affective fluency. The second concerned cognitive offloading — the slow erosion of effort, reflection, and critical thinking as AI assistance becomes ambient and increasingly invisible across learning environments. And the third involved exposure to harmful material, including content about suicide and self-harm.
Canada’s current regulatory architecture is not built for systems and harms like these. To date, our online harms frameworks have focused on content, our privacy laws are organized around consent and individual transactions, and our liability regimes still treat platforms as hosts that distribute information rather than as systems that actively generate, shape, and personalize interaction.
Chatbots cannot be effectively regulated through any of those mechanisms. These systems infer emotional states, personalize tone and persuasion, and optimize for engagement rather than wellbeing. And yet, at present, there is no enforceable requirement that developers pause and answer a deceptively simple question before deployment: are chatbots actually safe for young people to use?
Other jurisdictions are beginning to take that question seriously. In the EU, regulators are moving toward systemic risk assessments and constraints on manipulative design. In Australia, AI companions are being classified as high-risk technologies, triggering safety-by-design expectations. And, at the US state level, age-appropriate design and duty-of-care models are gaining legislative traction, though they encounter stiff resistance from industry. These examples indicate a shift in AI governance strategy towards systems design.
Canada has not yet made that move. But it certainly could, and that possibility is where the opportunity lies. Canada needs a recalibration of its existing institutions and legal frameworks to match the realities of adaptive, relational AI systems. That should include three moves: 1) implementing safety-by-design obligations that take emotional manipulation and engagement optimization seriously; 2) building institutional oversight capable of evaluating AI chatbots before they cause harm, rather than reacting to their downstream consequences, and offering recourse mechanisms to affected users; and 3) embedding youth participation into governance as standing infrastructure.
Every generation of digital regulation has asked some version of the same question: how to reduce harm without stifling innovation. Conversational AI forces us to ask something deeper. What does responsibility look like when technology simulates care, performs intimacy, and is actively reshaping how young people think, feel, and relate? That question captures the chatbot governance challenge in plain terms. Availability, persistence, and emotional fluency carry real social weight. So, designing and governing conversational AI requires taking those qualities seriously.
To do this, Canada must set a standard for AI governance that is grounded in lived experience and equal to the systems shaping everyday life. That’s the work in front of us now.