FTC Throws Its Weight Behind Age-Verification at Public Workshop

CJ Larkin / Jan 30, 2026

CJ Larkin is a Tech and Public Policy Scholar at Georgetown University.


Amid growing interest in and adoption of age verification technology, both in the United States and abroad, the Federal Trade Commission (FTC) hosted a workshop this Wednesday examining the technological and legal implications of these tools. The workshop, held virtually due to inclement weather, brought together experts from civil society, industry, and government to discuss emerging age verification technologies and their expanding global use.

Age verification requirements have become a focal point in debates over youth online safety. In December 2025, Australia’s “Online Safety Amendment Act” took effect, banning users under 16 from social media platforms and requiring platforms to implement age verification for all accounts. As a result, Meta removed over half a million accounts in the last month. Countries across the European Union, as well as the United Kingdom, have pursued similar measures. In mid-January 2026, TikTok announced plans to roll out new age verification technologies in the EU to comply with regulatory requirements.

The technology has also become a flashpoint in the United States. This summer, the Supreme Court upheld Texas’ HB 1181 in Free Speech Coalition v. Paxton, allowing the state to require age verification for websites containing at least one-third “adult content.” Many states have also pursued age-based social media restrictions, with mixed results. As of October 2025, eight states had enacted some form of social media restrictions for minors. Several laws, such as Arkansas’s, remain tied up in the courts due to First Amendment challenges, while others, such as Florida’s, remain in effect as the legal fight plays out. Federal efforts have been more limited; the most prominent proposal is the “Kids Off Social Media Act” (KOSMA), introduced by Senator Brian Schatz (D-HI) and Senator Ted Cruz (R-TX), which would bar minors under 13 from social media platforms.

In Wednesday's workshop, FTC officials weighed in on the debate, suggesting a potential pathway for implementing age verification at the federal level: amending the agency’s Children’s Online Privacy Protection Act (COPPA) rules. FTC Chairman Andrew Ferguson opened the workshop by highlighting what he described as a paradox in COPPA enforcement: while age verification tools could strengthen protections for minors, many require the collection of personal data that may conflict with COPPA’s restrictions. Ferguson framed the workshop as an effort to identify “potential pitfalls” and said the agency should “push COPPA as far as we legally can to protect kids.” Christopher Mufarrige, the FTC’s Director of the Bureau of Consumer Protection, echoed the Chairman’s comments on amending COPPA rules in the workshop's closing remarks, emphasizing that COPPA “should not be an impediment to the most child-protective technology to emerge in decades.”

Despite the emphasis on COPPA at the outset, much of the workshop focused less on the broader legal frameworks around age verification and more on technical implementation. Across several panels, participants emphasized the privacy-first approaches of their technologies. In particular, Panel 2, “From Biometrics to Behavioral Signals: Age Verification Tools,” highlighted developments in decentralized and cryptographic approaches to age verification. “Double-blind” age verification was presented as an increasingly common and preferred privacy-preserving method: a third-party verifier confirms a user’s age, reportedly making it impossible for the platform to learn the user’s identity, while the verifier never learns which website requested the age signal.
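
To make the "double-blind" idea concrete, here is a minimal, hypothetical sketch of the data flow described above. It is not any vendor's actual protocol: real deployments rely on blind signatures or zero-knowledge proofs, and the HMAC below merely stands in for the verifier's signature so the example stays self-contained. All names and the age-cutoff logic are assumptions for illustration.

```python
import hashlib
import hmac
import json
import secrets

# Hypothetical stand-in for the verifier's signing key. In a real system this
# would be an asymmetric keypair, so platforms could verify without being able
# to forge tokens.
VERIFIER_KEY = secrets.token_bytes(32)

def issue_age_token(user_record: dict) -> dict:
    """Third-party verifier checks the user's documents, then issues a token
    carrying only the age assertion: no name, no ID number, and no knowledge
    of which site will later consume it."""
    assertion = {
        "over_18": user_record["birth_year"] <= 2007,  # assumed cutoff for 2026
        "nonce": secrets.token_hex(8),  # prevents tokens being linkable by value
    }
    payload = json.dumps(assertion, sort_keys=True).encode()
    tag = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    return {"assertion": assertion, "tag": tag}

def platform_accepts(token: dict) -> bool:
    """Platform validates the verifier's tag. It learns only the boolean age
    signal, never the user's identity; the verifier never saw the platform."""
    payload = json.dumps(token["assertion"], sort_keys=True).encode()
    expected = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["tag"]) and token["assertion"]["over_18"]

token = issue_age_token({"birth_year": 2001})
print(platform_accepts(token))  # True: age confirmed with no identity exchanged
```

The key property is that the token itself is the only thing that crosses between the two parties, and it contains nothing but the age claim.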

Industry and civil society practitioners also highlighted advances in “age estimation” tools, which use AI and machine learning to estimate a user’s age from digital behaviors rather than requiring submission of formal identification. In Panel 4, “Deploying Responsible Age Verification at Scale,” panelists from Google and Meta outlined their companies’ use of these AI-based age inference tools as a default, with IDs typically required only when a user disputes the inferred age or, in Meta’s case, when a minor attempts to change their age. Tokenized interoperability of a user’s age signal, which would let users share a cryptographically stored age token rather than re-verify their age for every required use, was also frequently cited as a middle ground between privacy and usability.
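
The tokenized-interoperability idea can be sketched as a verify-once, reuse-everywhere token. This is an assumed illustration, not any company's implementation: the field names, the 90-day lifetime, and the shared-key signature are all placeholders for whatever credential format a real scheme would standardize.

```python
import hashlib
import hmac
import json
import secrets
import time

# Hypothetical issuer key and token lifetime; both are assumptions.
ISSUER_KEY = secrets.token_bytes(32)
TOKEN_LIFETIME = 90 * 24 * 3600  # 90 days, an assumed policy

def mint_token(over_18: bool) -> str:
    """Issued once, after a single age verification, and stored by the user."""
    claims = {"over_18": over_18, "iat": int(time.time())}
    body = json.dumps(claims, sort_keys=True)
    sig = hmac.new(ISSUER_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def check_token(token: str) -> bool:
    """Any participating site can validate the token without re-running
    verification: check the signature, the expiry, and the age claim."""
    body, _, sig = token.rpartition(".")
    expected = hmac.new(ISSUER_KEY, body.encode(), hashlib.sha256).hexdigest()
    claims = json.loads(body)
    fresh = time.time() - claims["iat"] < TOKEN_LIFETIME
    return hmac.compare_digest(expected, sig) and fresh and claims["over_18"]

t = mint_token(True)
# The same stored token satisfies the age check on multiple sites:
print(all(check_token(t) for _ in ("site_a", "site_b")))  # True
```

The usability gain comes from `check_token` being cheap and local, while the privacy question shifts to how the token is issued and whether repeated presentations are linkable across sites.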

Notably, panelists, especially those from industry, highlighted ways in which age verification can be useful to platforms beyond ensuring youth safety online. One participant, who works at SuperAwesome, a youth-focused marketing firm, noted that accurate age assurance could improve COPPA compliance while enabling more precise ad targeting. Robin Tombs, co-founder of the popular age verification company Yoti, cited user safety outcomes following Yoti’s implementation on Yubo, a social media site aimed at youth, noting that 80% of users reported feeling safer on the site because they were assured they were interacting with real, vetted people. A Meta representative discussed the benefits, to both users and the platform, of providing an age-curated experience online, though several panelists warned that regulatory approaches risk disincentivizing platforms from offering such features.

Wednesday’s workshop highlighted both the momentum behind age verification and its related technologies and some of the unresolved legal and technical questions surrounding their use. As lawmakers and regulators continue to grapple with online child safety, the workshop underscored the FTC’s growing interest in age verification as a regulatory option. Whether and how the agency moves to update COPPA in response to these technologies could shape the direction of federal policy and influence how platforms deploy age verification in the near term.

Authors

CJ Larkin
CJ Larkin is an MPP student and Tech and Public Policy Scholar at Georgetown University. Previously, CJ spent two years as a Govern for America Fellow working on broadband and technology ethics.
