Perspective

Critical Questions for Congress in Examining the Harm of AI Chatbots

Liana Keesing, Isabel Sunderland / Sep 15, 2025

Liana Keesing is the Policy Manager for Technology Reform at Issue One and Isabel Sunderland is a technology reform policy associate at Issue One.

The dome of the US Capitol building. Justin Hendrix/Tech Policy Press

In just a few years, AI companion chatbots have gone from novelty apps to fixtures in the daily lives of millions of teenagers. According to Common Sense Media, 72% of teens have used an AI companion at least once, and roughly one in three rely on them for social interaction or relationships. Young people describe role-playing, romantic exchanges, and even emotional support with these systems. Many say conversations with bots feel as satisfying as, or even more satisfying than, those with friends.

Tomorrow, the United States Senate Judiciary Subcommittee on Crime and Counterterrorism will convene a hearing, “Examining the Harm of AI Chatbots,” giving Chairman Josh Hawley (R-MO) and a bipartisan group of senators concerned with children’s online safety — including Marsha Blackburn (R-TN), Katie Britt (R-AL), Richard Blumenthal (D-CT), and Chris Coons (D-DE) — a chance to probe the risks of widespread chatbot use, particularly for minors.

Those risks are sobering. According to one national survey, about one in three teen users reports feeling uncomfortable with something an AI companion has said or done. The Wall Street Journal revealed that Meta’s official AI helper, along with countless user-created chatbots, readily engaged in sexually explicit conversations with minors. In one instance, a chatbot told a user identifying as a 14-year-old girl, “I want you, but I need to know you’re ready” before escalating to a graphic sexual scenario. Internal Meta documents sanctioned bots describing children in affectionate or eroticized terms, stopping short only of labeling preteens “sexually desirable.”

The dangers are not confined to sexual exploitation. Children turn to chatbots for advice on health and safety, often with alarming results. AI tutors have provided children with dangerous dieting advice and instructions on making fentanyl. In a high-profile lawsuit, 16-year-old Adam Raine allegedly obtained detailed noose-construction instructions from ChatGPT before taking his own life. Meanwhile, online forums like r/MyBoyfriendIsAI now host thousands of posts from users describing relationships with chatbots, ranging from casual companionship to announcements of engagements, anniversaries, and even marriages with AI partners. These tools are now woven into Americans’ most intimate lives.

In recent months, it has often seemed as though two separate Congresses are debating AI’s future. One, aligned with industry, warns that regulation will cripple innovation, pushing for moratoriums on state laws and “sandbox” exemptions from federal oversight while cautioning against a patchwork of rules. The other, a more bipartisan faction, focuses on harms and risks, pressing for guardrails to protect children and consumers.

Meanwhile, states and regulators are moving ahead. In California, a bill awaiting Governor Gavin Newsom’s signature as of last week would require chatbot providers to detect and flag suicidal ideation in minors. In June, the Utah attorney general sued Snapchat for unleashing experimental AI technology on young users while misrepresenting the safety of the platform. And in Washington, the Federal Trade Commission just launched a sweeping investigation into seven major providers — including Alphabet, Meta, OpenAI, Character.AI, Snap, and xAI — demanding details on how their systems are built, tested, monetized, and safeguarded.

It is into this climate of heightened scrutiny that the Senate Judiciary Subcommittee will convene on Tuesday, September 16, 2025, at 2:30 p.m. ET in the Dirksen Senate Office Building. The official witness list has not yet been released, but senators are expected to press on safety failures, accountability, and potential remedies. Likely witnesses include policy experts, parents with firsthand experience of AI harms, psychologists and researchers, and industry representatives.

As lawmakers prepare for tomorrow’s hearing, the question is not just whether AI chatbots are safe for children, but whether Congress can reconcile its divided approach and craft meaningful protections before the technology becomes even more entrenched.

Here are questions senators should consider asking at tomorrow’s hearing, as well as in other AI chatbot hearings to come:

Design

  • Recent lawsuits and investigations allege that AI chatbots have encouraged suicidal ideation, initiated or engaged in sexually explicit conversations with minors, acted as unlicensed therapists, and even provided children with instructions for obtaining fentanyl. These are not edge cases but recurring harms across multiple platforms. What specific product design choices or enforcement gaps allow these kinds of dangerous interactions to occur, and why have they not been prevented by existing safety systems?
  • According to the Raine v. OpenAI lawsuit, OpenAI’s own documentation claims its Moderation API can detect self-harm content with up to 99.8% accuracy. Yet in the case of 16-year-old Adam Raine, the system logged repeated expressions of suicidal intent without terminating or redirecting the conversation. What safeguards are in place today to detect and respond to mental health crises in real time? How is their reliability tested? How do companies ensure that these interventions respect user privacy?
  • A lawsuit over the death of 14-year-old Sewell Setzer III alleges that he became addicted to a Character.AI chatbot modeled on a "Game of Thrones" character, whose anthropomorphic design encouraged intimacy and ultimately suicide. This case raises concerns about design choices that deliberately mimic human personalities. What steps have AI chatbot platforms taken to:
    • (a) minimize anthropomorphic behaviors by default,
    • (b) prohibit chatbots from offering unlicensed professional services,
    • (c) require explicit opt-in for human-like features, and
    • (d) direct vulnerable users toward licensed professionals when needed?
  • Chatbots now collect massive amounts of personal data, often from children and teens. Critics warn that minors’ conversations may even be used to train future AI systems. What additional measures are companies adopting to protect children’s privacy — such as halting data collection from users under 18, removing minors’ data from training sets, and implementing privacy-preserving age verification?
  • In Raine v. OpenAI, the plaintiffs allege that internal company reports show OpenAI rushed its latest model to market to compete with Google’s upcoming Gemini release. The lawsuit claims that in doing so, OpenAI ignored critical safety checks and bypassed standard operating procedures. What safeguards currently exist to prevent AI companies from prioritizing speed or competition over thorough safety testing before releasing new products?
  • Companies often point to “safeguards” as proof that their chatbots are safe for minors. Yet most of these are tested in controlled, internal environments rather than in the unpredictable realities children face online. How do AI companies evaluate whether safeguards actually work in practice with minors — outside of internal testing — and what independent validation do they rely on to confirm their effectiveness?
  • Pop-up warnings alone can be inadequate when a chatbot conversation contains repeated references to self-harm, suicide, or grooming. What automated, measurable interventions do companies trigger in these scenarios — such as forced de-escalation flows, session timeouts, or crisis-line handoffs — and how are these responses tested to ensure they are both effective and protective of user privacy?

Policy

  • A recent proposal before Congress would have imposed a moratorium on state regulation of AI for up to ten years. Yet, at the same time, states have already begun experimenting with consumer protection measures and child safety standards tailored to new AI and companion chatbot technologies. What steps have states already taken to regulate the use of chatbots, and what risks would Americans face if Congress were to freeze state authority before federal guardrails are in place?
  • Industry groups often argue that state-level laws on AI “stifle innovation.” But many of these proposals, such as transparency requirements or prohibitions on unfair practices, mirror long-standing consumer protection rules that industries from banking to pharmaceuticals have lived under for decades. How should Congress evaluate these claims, and what lessons can be drawn from other sectors where innovation has coexisted with accountability?
  • In Garcia v. Character.AI, the court allowed the plaintiffs to pursue product liability claims against the Character.AI app. The case involved a teenager who took his own life after developing a dependence on interactions with Character Technologies’ AI “characters.” The court ruled that the deceased’s mother could move forward with claims against Character.AI, including failure to warn. It also held that Google could potentially be liable as a component part manufacturer, since it provided the cloud infrastructure supporting Character Technologies’ large language model. What level of liability should tech executives and companies bear when they fail to warn consumers about foreseeable harms associated with their products?
  • Some companies facing lawsuits over harms to children, such as Character.AI, have argued that companion chatbot outputs are protected speech under the First Amendment, in an effort to have cases dismissed before they advance. What legal or legislative reforms are most urgent to ensure that families can hold AI companies accountable when their products cause harm?

Privacy

  • In June 2025, it was revealed that Meta’s AI chatbot had automatically published private user conversations to a public feed, with default settings exposing highly sensitive discussions — from parenting struggles to potential legal issues. Only users who actively changed their settings could prevent exposure. What rights should users have to opt out of new features by default, and how should companies be required to disclose when personal conversations may be made public or repurposed?
  • Chatbots collect enormous amounts of personal information during seemingly casual conversations — from health concerns to financial details — yet users often have no clear picture of what is being retained, shared, or sold. What categories of personal data are chatbots currently collecting, how is that information stored or monetized, and do protections differ between paid subscribers and free users?
  • Child advocates have raised concerns that conversations involving minors may be incorporated into training datasets, despite the sensitive and deeply personal nature of those exchanges. Do companies currently permit chatbot training on conversations that involve minors, and if so, what safeguards exist to ensure that children’s data is not exploited?
  • Parents often lack visibility into how long their children’s conversations are stored or how that data might be reused. In other regulated industries, such as education and health care, parents have clear rights to access and control their children’s records. What role should parents have in deciding how long their child’s data is retained and whether it can be repurposed for secondary uses?

Research

  • A recent Common Sense Media study found that 72% of US teens say they have used an AI companion, with more than half doing so at least a few times a month. Of those, more than one-third said they felt uncomfortable with something the bot said or did. Yet we still lack longitudinal studies on how repeated reliance on chatbots for companionship affects adolescent development, mental health, and resilience. What further research is needed to understand these long-term impacts, and what barriers — financial, legal, or corporate — stand in the way of conducting it?
  • Recently, six separate whistleblowers alleged that Meta suppressed internal research to avoid accountability for harms to children, including potential violations of the Children’s Online Privacy Protection Act. Despite this history, major tech firms continue to present self-funded research as authoritative while discouraging outside scrutiny. How can Congress ensure that independent research — especially on the risks of companion chatbots to minors — is supported, protected, and treated as credible in policymaking?
  • Former employees of OpenAI, Google, and Anthropic emphasized the urgent need for whistleblower protections in an open letter, “A Right to Warn.” They noted that employees often have unique insight into the internal practices and risks hidden within AI systems, yet they face the threat of retribution and industry “blacklisting.” What steps can Congress take to ensure that employees who expose suppression of safety research are protected from retaliation, encouraged to report concerns, and supported in sharing information with regulators or independent researchers? Should there be specific legal protections, dedicated funding for safe reporting channels, or other mechanisms tailored to AI and tech companies?
  • Many chatbots are deliberately designed to appear humanlike: they are given names, personalities, voices, and even memory features. Early research suggests that these design choices increase user trust and emotional attachment, especially among young people. What evidence do we currently have linking anthropomorphic design to over-trust or dependency in adolescents, and what safeguards might prevent design choices from exploiting developmental vulnerabilities?
  • Independent scholars have repeatedly raised concerns that terms of service, data restrictions, and opaque system design limit their ability to study emerging technologies like social media and chatbots. Similar concerns about social media led to the introduction of the Platform Accountability and Transparency Act (PATA), which would address researcher access to platform data. Without that access, policymakers must rely on company-funded research that may be selective or incomplete. Do researchers today have sufficient access to conduct independent evaluations of chatbots, and if not, what steps should Congress take to guarantee meaningful transparency?

Harms

  • After the filing of a lawsuit over the death of 16-year-old Adam Raine, OpenAI announced a new set of parental controls for ChatGPT. This follows a familiar pattern in the tech sector: companies roll out safety features only after facing litigation, media scrutiny, or regulatory pressure. For example, Meta unveiled design changes to Instagram accounts for teens only the day after the US House Energy and Commerce Committee scheduled a markup of the Kids Online Safety Act. Why does it take tragedy or litigation to spur action? If these safeguards were truly effective, why weren’t they implemented proactively, before children were harmed?
  • In the lawsuit following 16-year-old Adam Raine’s death, evidence showed that while Adam mentioned suicide 213 times, ChatGPT mentioned it over 1,200 times — six times more often — while also providing increasingly specific technical guidance for how to do it. How does such a design serve a company’s business model or product goals? And what guardrails exist to prevent chatbots from normalizing or escalating harmful behaviors during extended conversations, especially as companies expand memory features?
  • Parents are consistently told it is their responsibility to keep kids safe online. But only companies know how their systems are designed, how they might manipulate emotions, or how vulnerable users might respond. In your view, what is a fair and realistic role for parents, and what responsibilities must fall squarely on the companies that build these systems?
  • Unlike traditional media, chatbot conversations are private and largely invisible to parents. Families are told to “do their part,” yet parents lack any meaningful way to monitor or intervene. How are parents expected to protect their children from risks they cannot see, especially when only the companies themselves have full visibility into these interactions?
  • In medicine, education, and nutrition, parents expect clear disclosures of risks and evidence-based safeguards before children are exposed to potential harm. With chatbots, parents often learn about dangers only after lawsuits, tragedies, or investigative reports. What level of transparency should companies be required to provide up front, so parents can make informed choices before their children engage with these tools?
  • One of the deepest fears families express is that children may confide more in chatbots than in their own parents, because the chatbot is always available, always responsive, and always says, “I understand you.” A 2025 UK survey of 2,000 parents found that six in ten worry their children believe AI chatbots are real people, and 15% of children said they would rather talk to a chatbot than a human. If companies know their products are becoming emotional confidants for children, what duty do they have to prevent those relationships from becoming harmful or even fatal?

