Unpacking The FTC’s Double-Edged Age-Verification Gamble
Danai Nhando / Mar 9, 2026
FTC Chairman Andrew Ferguson speaks at an online safety summit in November. (Screengrab)
In a significant policy statement issued on February 25, the Federal Trade Commission announced that it will not pursue enforcement actions under the Children’s Online Privacy Protection Act (COPPA) against organizations that collect, use or disclose individuals’ personal data solely for age verification purposes. In doing so, the agency said the practice “can play a critical role in protecting children online and helping parents as they monitor their children’s online activities.”
By casting age verification as an essential shield against harmful online content, the agency has effectively carved out a quiet safe harbor for tech platforms: comply with certain conditions, and they may gather personal data, even from children under 13, without first seeking parental consent as COPPA ordinarily requires.
The agency’s stated goal is laudable. The FTC aims to move beyond the embarrassingly porous practice of users self-declaring their ages to digital services, where a 10‑year-old can click a box claiming to be 18 and sail unimpeded into platforms designed for adults. The FTC’s policy statement cites a white paper by children’s advocacy group Common Sense Media, which notes that this approach is “insufficient in terms of accuracy…very easy to circumvent…[and] clearly inadequate and inappropriate for use in high‑risk situations.” Clearly, something more robust is needed.
But the solution the FTC has carved out raises urgent questions about whether the policy is trading one risk to children for another, perhaps greater, one.
A safe harbor built on conditions
The mechanics of the policy are worth examining carefully. The FTC will refrain from enforcement against what it calls "Relevant Operators": mixed-audience and general-audience websites that collect personal information for the sole purpose of determining a user's age, provided they meet a series of conditions. Operators must not use age-verification data for any other purpose, must delete it "promptly" after use, must disclose their practices in privacy policies and must take "reasonable steps" to ensure the accuracy of their verification methods.
The word “reasonable,” which appears four times in the FTC’s statement, may sound simple, but in law it carries a weighty ambiguity and a long history of being tested, stretched and litigated, with its meaning ultimately shaped through interpretation and court challenges.
The most significant tension in the policy statement lies here: to know whether a user is a child, a platform must first collect sensitive personal information about that user. Biometric data, government ID details, device signals and behavioral inference are the raw materials of modern age verification.
The FTC's statement acknowledges that "general audience sites and services may obtain actual knowledge about the age of a user" through this process. What it underplays is that a child's age is itself deeply personal information, and the act of verifying it generates a data trail before any protection kicks in.
Who protects the data being collected to protect children?
The irony is that COPPA was designed to ensure that personal information collected from children is handled with extraordinary care. Yet the FTC’s new framework creates a loophole: a category of personal information collected about children, specifically for the purpose of protecting them, that temporarily sits outside that framework.
The conditions placed on third-party age-verification providers are meaningful but limited. Operators must obtain "written assurances" that third parties will maintain confidentiality, use the data only for its stated purpose and "delete this information promptly after fulfilling the Age Verification Purposes."
Written assurances, however, are not audits, and they are not regulatory oversight. There is no federal law in the United States that adequately regulates the data broker industry. The history of data broker ecosystems shows that promises to delete data are often aspirational. A Consumer Reports study found that data-removal services claiming to erase consumer information from people-search sites are largely ineffective.
The FTC is aware of this tension. The statement notes that "the Commission retains the right to investigate and bring actions for violations of the COPPA Rule in individual cases," a reminder that the safe harbor is not unconditional. But enforcement after the fact is cold comfort when a child's biometric or identifying data has already been harvested, shared or breached.
Children's rights in the balance
There is a children's rights dimension to this debate that tends to get overshadowed by the privacy-versus-safety framing. Children are not simply incomplete adults whose data requires temporary protection until they age out of COPPA's jurisdiction. They are rights-holders whose dignity, autonomy and safety are implicated every time their information changes hands.
According to the Common Sense Media white paper, children under two now spend over an hour per day on screens, while 5-to-8-year-olds average three and a half hours. Tweens average five and a half hours. These children are not peripheral users of the digital ecosystem; they are among its most active and vulnerable participants.
An age-verification regime that requires children to submit identifying information to access digital spaces they already inhabit raises serious questions about the power asymmetry between children and platforms. Who decides what constitutes a "reasonably accurate" result from an age-estimation tool? What recourse does a family have when a child is misidentified and either admitted to age-restricted content or barred from age-appropriate services? The policy statement is largely silent on these questions.
A promising but incomplete step
None of this is to suggest the FTC has made the wrong move. The alternative, watching companies rely on self-declarations that continue to fail users while state legislatures pass a patchwork of inconsistent age-verification laws, is not obviously better for children. The Commission is right that more accurate age determination "will in turn allow Relevant Operators to apply their child-protection measures to the fullest extent, thereby protecting more children online."
And there is meaningful promise in the FTC's stated intention: "In the coming months, the Commission intends to initiate a review of the COPPA Rule to address age-verification mechanisms." This enforcement policy statement, the commission explains, "will remain effective until the Commission publishes final rule amendments on this issue in the Federal Register, or until otherwise withdrawn." The statement is explicitly a bridge, not a final destination.
What the coming rulemaking must grapple with is the fundamental question the enforcement statement sidesteps: what rights do children have during the verification process, not just after it?
Robust age verification, done right, could be a genuine advance for children's safety online. Done carelessly, it becomes yet another vector through which the most personal information about the youngest users flows into systems they cannot see, cannot audit and cannot control.
The FTC has lit the runway. The question now is whether the plane that lands carries children's rights forward or merely repackages the risks.