The Future (and Past) of Child Online Safety Legislation: Who Minds the Implementation Gap?

Sofya Diktas, Anabel Howery, Robyn Caplan / May 15, 2025

A young child staring at a smart device.

Increased concerns over child online safety have led to a significant push by legislatures around the world to restrict access to online platforms by age. On the surface, this seems like a noble goal; many of these bills are being passed or proposed because of (controversial and hotly debated) claims about the relationship between social media and teenage mental health. And yet, as critics have pointed out, a key method of implementing these bills—age verification, by means that have yet to be decided—could lead to other harms for all internet users, including risks to privacy, speech, search, association, and access to the internet.

The future of federal legislation on online child safety – the Kids Online Safety and Privacy Act (which evolved from combining the Kids Online Safety Act and COPPA 2.0, and was just reintroduced this week in the Senate) – is unknown. But most of the bills necessitating, or at least implying, age verification have been proposed and passed at the state level. These bills generally fall into three broad categories, which tend to align with partisan lines. The first set of bills, which require identification to access obscene content such as online pornography, has typically been passed in Republican-led states. Louisiana’s Act No. 440, which establishes platform liability and civil remedies for distributing materials harmful to minors, has become a model for similar legislation; Texas, Virginia, Arkansas, Mississippi, Utah, Montana, and North Carolina have enacted copycat laws, and dozens of similar bills have been introduced in other state legislatures. The second set of bills, aimed at increasing privacy protections for children, is mainly being passed in Democrat-led states, such as California, Connecticut, and Vermont. Although not necessarily required, these laws imply some form of age verification (performed by the platform or website) as a means to differentiate between the privacy protections offered to children and adults.

However, there is a swath of other bills more directly aimed at child online safety that seem to bridge the partisan divide. At the federal level, KOSPA has bipartisan support. At the state level, both red and blue states are proposing and passing bills aimed at limiting the harms of social media to the well-being and mental health of children and teens. Texas, Utah, and Arkansas have all passed legislation aimed at controlling broad access to information online using age verification. Many of these bills also require teens to obtain parental consent before creating a social media account, which necessitates additional verification practices to confirm the adult is the child’s parent or guardian. On the other side of the aisle, New York’s Democratic Governor, Kathy Hochul, has signed legislation regulating social media for teens.

Regardless of their ideological leaning, these bills typically have one thing in common: they provide very few details on how platforms and other entities will age-verify their users. Even setting aside the ongoing debate at the center of these bills, stemming from growing discord in the research community as to whether social media actually causes mental health issues in children and teens, proposals for implementing the bills fall short. Many bills, including the majority at the federal level, fail to specify any mechanisms for verifying users’ ages. And when they do specify mechanisms, the proposals are often vague, exclusionary, and unenforceable, which could lead to many undesirable outcomes for all internet users. As they stand, there is a significant implementation gap between the ambitions of state and federal age verification laws and the current technological and legal infrastructures required to enforce them. Below, we outline the most common mechanisms state bills provide for verifying age and the potential challenges that may emerge.

IDs as a form of verification

Across all bills that mention specific mechanisms for verifying identity, most recommend using government-issued IDs, such as a driver’s license or digitized ID card, as one of multiple options. However, this method of verification poses several potential issues. First, most minors do not have a government-issued ID. This means the burden of age verification would not fall on minors to prove their age, but rather on adults to prove they are not children. While this may seem like a subtle change, it highlights how legislation intended to focus on children will, in practice, have a direct impact on adults.

Assuming that the implementation requires adults to verify their age, using IDs to do so presents a second problem: a significant portion of the US adult population also lacks a valid ID. According to a University of Maryland study, “nearly 29 million voting-age US Citizens did not have a non-expired driver’s license and over 7 million did not have any other form of non-expired government-issued photo identification.” This means that millions of adult users who should otherwise be eligible to access restricted parts of social media may not be able to do so if their only option to prove their age is by presenting an ID.

The study further breaks down this population by demographics, finding that young people, underrepresented racial minorities, and lower-income individuals were disproportionately likely not to have an ID. Using ID as a mechanism for verifying age would therefore not only exclude a significant portion of the adult population from accessing the full internet, but would also disproportionately restrict vulnerable populations.

“Commercially reasonable” age verification

Acknowledging the limitations of relying exclusively on ID as a form of verification, many state bills, including those in Montana, Louisiana, Arkansas, Utah, and New York, have left the door open for “commercially reasonable” age verification methods. However, they give very little clarification as to what should be considered “commercially reasonable.” Utah’s law, for example, specifies only that these options can “[rely] on public or private transactional data to verify the age of the person attempting to access the material.” New York’s Stop Addictive Feeds Exploitation (SAFE) for Kids Act improves on this slightly by giving the state’s attorney general the power to provide guidance on what can be considered commercially reasonable, but even this is subjective.

Given the vagueness of this phrasing, it is likely that legislatures are leaving room for platforms to use third-party identity verification methodologies. These emerging technologies employ a range of techniques to verify age, some of which may be unreliable or invasive. One popular method is the use of an AI system for facial recognition and age estimation. This presents significant risks, as facial recognition models have been shown to exhibit significant racial and gender biases, and are likely inaccurate at estimating age as a result. Other methods include using credit cards to verify age, which presents similar risks to ID verification, as not everyone can qualify for a credit card, especially lower-income adults. There are more of these technologies that raise similar discriminatory concerns, which we plan to explore in more depth in Part II of this series.

Risks to data privacy and anonymity

None of these bills offers insight into what types of data are permissible, how that data should be sourced, or what consent mechanisms govern its use. By leaving the definition of acceptable age verification open, the bills risk requiring potentially invasive and privacy-violating data, such as biometric data, from everyone who intends to access social media platforms. Not only could this compromise people’s ability to remain anonymous on the internet, it could also lead to the consolidation of uniquely identifiable sensitive data within the entities performing these verifications. To combat this, all bills with specifications for commercially reasonable age verification methods prohibit the data used for verification from being stored or retained after verification is complete.

On the one hand, this provision sounds good because, in theory, it ensures that users can remain anonymous and will not be subject to harmful data breaches. On the other hand, implementing true age verification without storing user data would be challenging, if not impossible. To definitively verify identity using technology, two pieces of information are needed: real-time biometric data of a person and a previously authenticated form of identification to compare against. This is how TSA verifies travelers using facial recognition and a passport. If age verification technologies cannot store information about your identity to compare against, nothing is stopping a minor from using a fake ID or a parent’s credit card for verification. It also means that biometric data would almost certainly be required of everyone hoping to access these platforms.

As a result, implementing this legislation will either (1) fail to effectively deter children from accessing content, because they will take advantage of loopholes in verification, or (2) require complete identity verification of everyone, thereby killing the right to be anonymous on the internet.

Internet policy history is repeating itself (again)

These are not the first attempts to design internet legislation requiring age verification. In an early effort to protect minors online, the Communications Decency Act (CDA) was passed in 1996 to criminalize the knowing transmission of obscene, indecent, or patently offensive materials to minors (defined then as anyone below the age of 18). Persons or businesses could protect themselves from liability under this law by making an effort to verify age, for example through “the use of a verified credit card, debit card, adult access code, or adult personal identification number.”

But the Act immediately sparked controversy. The ACLU led a group of twenty plaintiffs in challenging its constitutionality. Their complaint, in the case that became Reno v. ACLU, alleged that the law was unconstitutional in criminalizing expression protected by the First Amendment, that it was overly broad, and that, though “the Government has an interest in protecting children from potentially harmful materials,” the CDA’s burden was not the least restrictive means of accomplishing this purpose. In the Supreme Court’s 1997 opinion on the case, the majority noted problems, particularly with age verification, agreeing with the District Court’s conclusion that there “is no effective way to determine the identity or the age of a user who is accessing material through email, mail exploders, newsgroups or chat rooms.” The Court expanded on this argument to note that, even if it were “technically feasible to block minors’ access to newsgroups and chatrooms,” it would not be possible to block their access to indecent material while still allowing them access to the remaining content that was not indecent.

At least a portion of the Court’s decision was influenced by the state of age verification technology at the time. But it’s unclear how, and whether, age verification methods have progressed in ways that would steer clear of the concerns noted by the Court in the late 1990s. For instance, both the Supreme Court and the District Court concluded that the use of credit cards for verification would “bar adults who do not have a credit card and lack the resources to obtain one from accessing any blocked material.” Additionally, as they also noted, none of the methods proposed at the time, such as credit cards or adult password systems, could “ensure the user of the password or credit card was over 18.”

Conclusion

It is easy to get caught up in demands to protect children from the potential (whether proven or merely perceived) harms of the internet. But in the rush to pass this broad array of legislation, advocates of age verification, as well as legislators, are failing to consider what implementation of these laws would look like for all internet users. While there is a prevailing belief that technological advances have addressed the concerns about age verification raised in Reno v. ACLU, this has not yet been proven. Assuming that hypothetical technologies will be the fix is too flawed, and too dangerous, when the unintended consequences for civil liberties go unconsidered.

Our analysis shows that many of today’s child online safety laws likely do not meet that standard. The vague language across bills calling for “commercially reasonable verification methods,” without specifying what that entails, could open a Pandora’s box of faulty verification technologies that do not effectively serve their intended purpose, but rather serve others.

In Part II, we will explore the commercially reasonable technologies that have been proposed as verification methods. We will examine how they work, who is behind them, and whether they can effectively verify age as they claim to do.

Authors

Sofya Diktas
Sofya Diktas is an ethical technologist with a BS in electrical and computer engineering from Cornell University. During her time at Accenture, she was the technical manager for an emerging technology innovation lab. In this role, she worked with multiple clients to develop their tech strategy and o...
Anabel Howery
Anabel Howery is a Duke undergraduate student pursuing a BA in Public Policy with a minor in Economics and an MMS certificate. Her interest in public policy stems from her passion for helping the communities she is a part of, which has also pushed her to conduct policy research to better understand ...
Robyn Caplan
Dr. Robyn Caplan is an Assistant Professor at Duke University's Sanford School of Public Policy, and a Senior Lecturing Fellow in the Center for Science & Society at Duke University. She is also a Research Affiliate at Data & Society Research Institute, where she worked as a Senior Researcher. Her...
