The Age of Age Restrictions Poses Policy Dilemmas for Kids Online Safety
Joan Barata / Dec 22, 2025
Social media apps on a phone.
The concept of online safety has been at the center of policy and regulatory debates since the beginning of the decade. The adoption of the Digital Services Act (DSA) in the European Union and similar initiatives under the auspices of online safety in countries such as the United Kingdom or Australia have become a global reference in this area. A main principle behind these efforts is the legitimate policy objective of keeping everyone safe within increasingly impactful and popular digital services.
Online platform legislation in particular aims to tackle different types of harms, including those deriving from illegal content posted online and those arising from features, practices and design choices that affect the rights of platforms’ users (such as unclear terms of service, covert advertising, lack of transparency and data privacy violations). The DSA also refers to the obligation to mitigate so-called “systemic risks” associated with the dissemination of certain types of content, including content that is not necessarily illegal but is still broadly considered socially undesirable.
Beyond this general approach, the protection of minors from specific online harms is a distinct legal category where legislators have already been intervening in different jurisdictions. Indeed, it seems to have become a top digital policy priority at a moment when geopolitical tensions around platform regulation are increasing, particularly where regulation touches sensitive topics such as disinformation or hate speech.
However, even though there is general agreement on the need to consider measures in this area, adopting overly simplified solutions may have a significant impact on the fundamental rights of both children and adults, as well as hamper the proper preparation of younger generations for their digital future.
Protection of minors as a growing policy concern
Within the context of the EU, the Audiovisual Media Services Directive already established that audiovisual content that may be harmful to children is subject to specific restrictions, which aim in particular at preventing minors’ access to certain programs even though those programs remain available to adult audiences.
With the decline of traditional media and the growing consumption of digital content by younger audiences, policy and regulatory attention has shifted to the latter. In July, the European Commission published its guidelines on the protection of minors under the DSA to ensure a safe online experience for children and young people. The guidelines set out a non-exhaustive list of proportionate and appropriate measures to protect children from online risks, such as grooming, harmful content, problematic and addictive behaviors, cyberbullying and harmful commercial practices.
Since then, that approach to kids’ safety online has increasingly dominated debates on platform regulation, not only in Europe but globally. The undeniable risks children face when navigating social media, together with the need to show political determination in a socially sensitive area, have fueled a wave of regulations and proposals.
Such regulations are in many cases a good example of what has been called techno-legal solutionism: the practice of embracing a seductively simple solution to the online risks for youth by mandating technical fixes to social media platforms. Particular examples include reducing complex social problems to technical features, prioritizing age verification and access control as harm-prevention mechanisms and assuming platform design strictly determines user behavior and well-being.
In this vein, the European Parliament recently adopted a non-legislative report “calling for ambitious EU action to protect minors online, including an EU-wide minimum age of 16 and bans on the most harmful addictive practices”. Contrary to previous approaches, legislators are advocating blanket bans as the main tool to guarantee children’s protection online, accompanied by a series of additional suggested measures that combine principles already included in the DSA (yet still to be implemented).
Proposals also include outright restrictions potentially affecting adults that are likely incompatible with European human rights standards, together with an ambiguous statement calling into question the intermediary liability exemptions that sit at the very core of the DSA’s regulatory principles (“senior managers could be made personally liable in cases of serious and persistent non-compliance”).
This declaration appears to be connected with Australia’s own controversial ban, which took effect this month and bars those under 16 from accessing social media platforms, as well as with the patchwork of frameworks already developing in EU member states and, across the Atlantic, in the United States.
Without prejudice to the need to consider specific “technical” measures to protect minors from online risks, it is also necessary for lawmakers and policymakers to properly understand the complexities and nuances associated with child protection in today’s societies, where the challenges posed by the online world cannot be easily separated from the broader issues affecting the offline sphere. That is not to mention the need to contemplate the risks of possible circumvention, or of laws inadvertently pushing vulnerable users away from better-moderated sites toward darker and less regulated ones.
In this context, authorities should not ignore the importance of literacy programs (both for children and adults), the need to eliminate social risk factors and inequality, the role of flexible and proportionate parental choices adapted to kids’ age and maturity, the protection of kids’ freedom and autonomy, and the need to guarantee a proper differentiation between perceived or presumed harms and actual risks.
Moreover, while a protective approach is based on the need to prevent harm to children’s physical, moral and mental development, a human rights perspective also demands consideration of the right to free expression, access to information, digital literacy, access to entertainment and to youth communities and groups, and the right to participate in civic affairs. When protections are unnecessary or disproportionate, they risk infringing core liberties, not only for minors but also for adults seeking to engage in certain types of legitimate online activities.
The regulatory dilemmas
Given these complications, any plan to regulate kids’ protection online must adequately take into account the following elements and dilemmas:
- Which services are included? Existing proposals largely focus on risks associated with social media use. However, this is a broad category of services and platforms, which may incorporate different types of features, each with its own level and type of risk. While certain jurisdictions like Australia have also included services such as video sharing platforms, other types of digital services, like gaming or messaging, are usually excluded. These broad distinctions are not free from a certain level of arbitrariness and lack the granularity needed to guarantee flexibility and proportionality in accordance with the profile of users and the type of risks at stake. Nor should it be overlooked that risks do not necessarily come only from big and well-resourced platforms, but also from smaller providers with weaker internal controls and tools, which are sometimes more difficult for competent authorities to properly oversee.
- Who are we protecting? Child online safety regulations aim to protect this specific category of users from risks that directly affect them. Yet it is important that any measure in this area does not constitute an unnecessary and unjustified restriction on adults’ enjoyment of the service or impose burdensome conditions on their use of digital services, including with respect to the right to privacy. In addition, platform regulation is currently subject to significant geopolitical tensions, particularly when European institutions exercise their regulatory powers under the DSA to tackle relevant infringements by American companies. Discussions around cross-border speech regulation and alleged censorship have undoubtedly disturbed the calm needed to establish and implement regulations directly or indirectly affecting content. In this context, protection of minors may look like a safer playing field for legislators and regulators, who may therefore overregulate the digital space in the name of kids’ safety while neglecting other relevant areas of public interest. The EU’s recent 120 million euro fine against X for infringing very basic DSA obligations shows the importance of the Commission properly enforcing this regulation in the current transatlantic political climate.
- Should age restrictions be the main measure? Even though many regulatory and policy approaches appear to center around strictly enforced age restrictions, they must only be applied to cases where the risks to minors' privacy, safety or security cannot be mitigated by less restrictive measures, and when applicable legislation establishes a minimum age to access certain types of products and services. Furthermore, instead of age-restricting full services, more adequate and proportionate regulations would provide online platforms a principled framework to assess which content, features, or designs carry specific risks for minors and implement measures and restrictions to reduce those risks in tailored ways. Age restrictions and age assurance may, in many cases, complement other possible measures, such as those in the Commission’s guidelines.
- What is the role of parental consent? Some proposals rely on parents and guardians authorizing minors’ access to social media, typically through consent to data processing. As noted above, parental choices should be flexible, proportionate and adapted to kids’ age and maturity. Granting parents a general power to authorize access is therefore no substitute for narrower and more specifically tailored measures: it may appear to offer a strong answer, but it does not by itself address the features, practices and content that create risks in the first place.
- Hard age verification and the need to tackle the ID social divide: Age restrictions may in some cases require the use of hard age verification tools (i.e., checks against government-issued documents). Highly intrusive methods carry particular risks: minors who lack official IDs may be forced to submit invasive biometric data, adults may be exposed to potential surveillance and censorship, and certain enterprises may be pushed to restrict access or compromise privacy under the threat of significant fines. In this context, regulations should properly separate age determination from personal identification and establish the corresponding safeguards in this area as well. In any case, policymakers should be aware of the impact that the regulatory imposition of certain tools may have on segments of the population that do not use or have access to digital identification tools.
The legal and regulatory frameworks discussed here must strike a proper balance between the risks and the rights at stake, with special consideration of the principles of flexibility, necessity and proportionality. Imposing total bans, or giving parents and guardians a general power to authorize minors’ social media access through data processing consent, instead of adopting narrower and more specifically tailored measures, may politically seem to provide a strong answer to the risks minors currently face online. In fact, however, such measures create other significant threats to the fundamental rights of both children and adults without properly addressing the complexity of the societal issues at stake.
If the ultimate goal of any protective policy is to prepare minors to successfully navigate the adult world, we should not neglect the real long-term consequences of attempting to hermetically seal them off from digital risk until the day they turn 18, only for them to face a flood of unmanaged online challenges they never learned how to handle.