Europe’s Age Verification Push Raises Privacy Issues Beyond Data Confidentiality
Thijmen van Gend / May 5, 2026
February 13, 2026—Hamburg: Two pupils stand in a classroom at Goethe-Gymnasium and look at their smartphones. There is currently a debate in Germany about age limits for social media and possible stricter rules on cell phone use in schools. Photo by: Marcus Brandt/AP Images
Governments around the world are rushing to mandate age verification online, covering everything from social media harms to gambling and pornography, in the name of protecting children.
Critics have built a substantial case against these mandates. A coalition of 438 security and privacy scientists collectively called it “dangerous and socially unacceptable to introduce a large-scale access control mechanism without a clear understanding of the implications that different design decisions can have on security, privacy, equality, and ultimately on the freedom of decision and autonomy of individuals and nations.” And the non-profit European Digital Rights (EDRi) deems age verification “a short-sighted measure: it does not help young people navigating online spaces, it can disproportionately limit children’s rights, is invasive, excludes a large portion of the population beyond children, and ultimately lacks effectiveness due to easy circumvention.”
While this backlash is warranted and important, age verification tools are not an isolated case, but part of the wider field of digital “trust technologies” gaining rapid momentum.
The EU’s initiatives for trust technologies
Trust technologies are technologies that aim to increase trust in interactions by undergirding them with mathematical and cryptographic guarantees. For instance, encrypting an email reduces the odds that an adversary can eavesdrop on or tamper with the message, letting both sender and receiver trust in its confidentiality and integrity. Similarly, age verification technologies let service providers ascertain users’ ages with greater confidence. This trust extends to interactions among users: they may assume that other users were similarly verified to be within a certain age range.
Different age verification methods exist. Service providers may ask their users for a copy of their government ID; ask a third party that has more certainty about the user’s age (e.g., a bank) to attest it on their behalf; estimate their age based on a face scan or their behavior on a platform; or use the European Commission’s ‘blueprint’ age verification app.
The Commission argues that the app is “completely anonymous, works on any device, and is fully open source.” The same day it preliminarily found Meta in violation of the Digital Services Act for failing to adequately prevent children under 13 from accessing Instagram and Facebook, it also urged Member States to roll out the app to their citizens before the end of 2026.
But what underpins the Commission’s promise of “completely anonymous” age verification? The age verification app uses ‘zero-knowledge proofs’. In such an application, the user does not share their date of birth with the ‘relying party’ (i.e., the party asking for the proof), but only a cryptographic proof attesting that they fall within a certain age range. Zero-knowledge proofs are generally touted as a sound way of achieving selective disclosure. This can indeed be considered a data protection improvement over sharing passport copies or undergoing facial and behavioral analyses, though cybersecurity experts quickly pointed out other security vulnerabilities.
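To make the selective disclosure idea concrete: real zero-knowledge range proofs rely on heavier cryptography than can be shown here, so the following Python sketch is only a toy illustration of the data flow, with made-up names and a shared HMAC key standing in for the issuer’s signature scheme. What it demonstrates is the essential property: the relying party receives and verifies only a signed “over 18” predicate, never the date of birth itself.

```python
import hmac
import hashlib
import json
from datetime import date

# Hypothetical issuer secret (assumption for this toy: verifiers share the key;
# a real scheme would use public-key signatures or an actual zero-knowledge proof).
ISSUER_KEY = b"demo-issuer-secret"

def issue_age_token(date_of_birth: date, today: date) -> dict:
    """Issuer sees the real birth date, but signs only the derived predicate."""
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    claim = {"over_18": age >= 18}  # only the predicate leaves the issuer
    tag = hmac.new(ISSUER_KEY, json.dumps(claim).encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def verify_age_token(token: dict) -> bool:
    """Relying party checks the signature; it never learns the date of birth."""
    expected = hmac.new(
        ISSUER_KEY, json.dumps(token["claim"]).encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, token["tag"]) and token["claim"]["over_18"]

token = issue_age_token(date(2000, 3, 14), today=date(2026, 5, 5))
print(verify_age_token(token))  # True: the predicate holds, yet no DOB was disclosed
```

Note that this sketch also exposes the ‘user binding’ gap discussed below: nothing stops a valid token from being passed to someone else, which is why industry parties push for binding proofs to biometrics or devices.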
The Commission’s blueprint is fully interoperable with the European Digital Identity (EUDI) Wallets that Member States must make available to their citizens by the end of 2026, and which can be developed by public or private organizations. Beyond identity documents, an EUDI “wallet” can contain permits, diplomas, employer certificates, driver’s licenses, loyalty cards, and more. Eventually, the wallet can also be used to pay and to prove your age in online and offline transactions. The wallet and these proofs would be stored on your own smartphone so that you control with whom you share your ‘attributes’. In short, wallets increase trust in a myriad of interactions while empowering users. A recurring critique of the EU’s eIDAS (electronic identification, authentication and trust services) regulation, which defines what EUDI wallets must minimally look like, however, is that it lacks an obligation to use zero-knowledge proofs.
Privacy-enhancing, right?
Crucially, while legitimate, this concern with data confidentiality is only one subset of privacy. Paradoxically, privacy-enhancing initiatives can infringe on user privacy from other angles. Concretely, because trust technologies enable such seamless and allegedly privacy-preserving ‘proving’ of all kinds of attributes, we may end up being asked to reveal parts of our identities everywhere we go online and offline. While sharing proofs remains within the user’s agency, little of this agency may remain if service providers simply deny access when they do not receive said proofs. The more seamless it becomes for relying parties to request such proofs (e.g., when digital identity wallets become widely adopted across populations, and when software to query proofs from wallets becomes easy for relying parties to use), the more often we may be asked to prove our attributes. Thus, age verification regimes – and digital identities more generally – normalize using mobile phones to restrict people’s access to digital and physical spaces.

This future resembles the EU Digital COVID Certificate during the COVID-19 pandemic: access to all kinds of places was restricted to those who could show proof of vaccination, recovery, or a negative test. As the Institute for Technology in the Public Interest articulated, this induced policing amongst peers, fundamentally changing community relations while lacking checks and balances or redress mechanisms.
Furthermore, zero-knowledge proving makes it more challenging to verify that the proof the user provides to the relying party is actually theirs, and not stolen or reproduced from someone else (‘user binding’). Some industry parties therefore call for biometric-bound credentials, where the user must provide their biometrics at the time of sharing the proof with the relying party (e.g., a face or fingerprint check). Even if the actual biometrics are not shared with the relying party but only used to greenlight the proof itself, this checking act may feel like a privacy infringement.
Beyond privacy, governments are set to gain a central role in issuing digital identities, raising another question: how can we protect groups that trust neither governmental nor private identity assurance providers when they seek access to communities of like-minded individuals online? Likewise, how can we prevent the exclusion of those without access to digital identities (e.g., because they lack documents or digital literacy, cannot afford a capable smartphone, or choose to reject one)?
Legal protections already undercut
Luckily, the eIDAS regulation sets out to offer protection against some of these issues. Regrettably, subsequent regulatory processes are already undercutting these safeguards. For instance, relying parties must publicly register what information they ask from people’s identity wallets and for what purposes, so that users and others can evaluate it. However, Member States’ discretion in implementing this obligation nationally waters down protection levels EU-wide. Furthermore, eIDAS prescribes that EUDI wallet use must remain voluntary, which the European Banking Authority undermines in its proposal for stronger anti-money laundering regulations. Moreover, eIDAS’ legal protections around unlinkability and unobservability of user transactions and behavior (article 5a(16)), and around enabling the use of pseudonyms where actual identification is not necessary (article 5), are being weakened in the Commission’s Implementing Acts that fill in eIDAS implementation details. More generally, current attacks on end-to-end encryption and the rise of far-right political parties across the EU raise the question of how far legal safeguards can prevent the long-term co-option of technologies widely entrenched in modern-day life. In this context, we have seen how the “think of the children” argument can do heavy lifting.
Grander infrastructural questions
Even with those concerns set aside, rolling out age and identity verification infrastructures is bound to reshape the way organizations (big and small, public and private) deliver their services, as these infrastructures entrench tech companies’ computational platforms. For instance, in the US, many of the digital IDs eligible for passing airport security live in Apple, Google, and Samsung wallets. In Europe, the German EUDI wallet implementation requires cloud services and an internet connection, because not all contemporary smartphones are yet secure enough. On top of that come the requirement to have a Google or Apple account and the possibility that big tech companies restrictively interpret what facilitating EUDI wallets means, raising both privacy and digital autonomy concerns, even when zero-knowledge proofs are employed.
Thus, even if the EU’s age verification app and EUDI wallet were to address the drawbacks of contemporary age verification methods by improving data confidentiality, their promise to empower people by decentralizing data to users’ phones falls short due to the centralization of critical functions they entail. In doing so, these trust technologies reshape a myriad of trust relations whenever people have to ‘prove’ their attributes online and offline.
Because the safeguards in both eIDAS (with its ordinary legislative procedure) and the Commission’s Implementing Acts prove laudable but insufficient, the Commission should pause its race to Union-wide age verification and identity wallets, to better incorporate the feedback from civil society and researchers in shaping trust technologies and their regulatory frameworks. It is key not to cave in to heavy industry lobbying, worldwide regulatory pushes, and ‘protecting the children’ narratives; otherwise the EU risks undermining rather than protecting the very rights at stake.