FTC Opens Inquiry Into AI Chatbots and Their Impact on Children
Ben Lennett / Sep 11, 2025
The US Federal Trade Commission (FTC) has launched an inquiry into consumer-facing AI chatbots, with a particular focus on how these technologies affect children and teenagers.
On Thursday, the agency issued orders to seven companies – Alphabet, Character Technologies, Instagram, Meta Platforms, OpenAI, Snap, and X.AI. The orders enable the FTC to conduct broad studies without pursuing a specific law enforcement action.
According to the FTC announcement, the inquiry aims to understand how companies 1) “evaluate the safety of their chatbots when acting as companions,” 2) “limit the products’ use by and potential negative effects on children and teens,” and 3) “apprise users and parents of the risks associated with the products.” Specifically, the FTC has requested detailed information on how companies:
- monetize user engagement;
- process user inputs and generate outputs in response to user inquiries;
- develop and approve characters;
- measure, test, and monitor for negative impacts before and after deployment;
- mitigate negative impacts, particularly to children;
- employ disclosures, advertising, and other representations to inform users and parents about features, capabilities, the intended audience, potential negative impacts, and data collection and handling practices;
- monitor and enforce compliance with company rules and terms of service (e.g., community guidelines and age restrictions); and
- use or share personal information obtained through users’ conversations with the chatbots.
In the agency’s announcement, FTC Chairman Andrew Ferguson stated that, “Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy…The study we’re launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children.”
The orders were approved unanimously in a 3-0 vote by Chairman Ferguson and Commissioners Melissa Holyoak and Mark R. Meador. The agency remains without its two Democratic-appointed members after President Trump removed Commissioners Rebecca Kelly Slaughter and Alvaro M. Bedoya earlier this year. Slaughter appealed her removal and briefly returned to the agency after a federal court ruled in her favor, but the Supreme Court stayed that decision this week.
The inquiry comes amid growing concern about the design of AI chatbots and how they can harm users. In one recent case, the parents of 16-year-old Adam Raine filed a lawsuit against OpenAI, claiming their son’s interactions with ChatGPT-4o led to a harmful psychological dependence, with the product providing explicit instructions and encouragement for his suicide. Experts argue that systems like ChatGPT are “emotionally deceptive” by design, built to appear personable or empathetic and create an illusion of a genuine relationship.
Beyond the FTC’s inquiry, states are also moving to regulate AI chatbots out of concern over their potential harms to children and teenagers. The California State Assembly passed SB 243 on Tuesday, which would “require chatbot operators to implement critical, reasonable, and attainable safeguards around interactions with… chatbots and provide families with a private right to pursue legal actions…” according to a press release from its author, Senator Steve Padilla (D-San Diego).
The FTC has not announced a timeline for when its inquiry will be completed.