You've Been Murdoched: Australia’s Teen Ban Offers a Warning for Europe
Caroline De Cock / Mar 12, 2026

On March 5, the European Commission convened the first closed-door meeting of its special panel on child safety online, opened by President von der Leyen herself. Its recommendations, due by summer, will determine whether the EU should move toward harmonized age restrictions across twenty-seven member states. With France, Germany, Spain, Greece, and Denmark all mulling national bans and the UK debating further age-restriction measures alongside its Online Safety Act framework, the cascade is accelerating.
Before Brussels follows Canberra's example, it is worth asking how Australia got there — and who was actually driving the machine, and why.
The campaign that became a law
The timeline of Australia's ban for under-16s is not subtle. In March 2024, Meta declined to renew its commercial agreements with Australian news outlets under the News Media Bargaining Code, a mechanism that had generated over $200 million for publishers in the preceding three years. The "Let Them Be Kids" campaign that News Corp Australia launched across its national mastheads followed close on the heels of that decision, collecting over 54,000 petition signatures. Nova Radio, owned by Lachlan Murdoch's Illyria Pty Ltd, ran a simultaneous campaign declaring that a ban would win the vote of every parent.
By November 2024, legislation had passed both chambers of parliament after a three-hour committee inquiry and a 24-hour public submission window. Prime Minister Albanese thanked News Corp by name in his announcement: "I do want to single out News Corp for the campaign that they've run, Let Them Be Kids."
The remark was striking. A head of government publicly credited a media company, by name, when announcing legislation affecting millions of children: legislation that company had campaigned for, and whose primary targets were that same company's digital competitors.
Not a conspiracy but commercial logic
This is not a conspiracy. It is commercial logic operating in plain sight. Social media platforms have disrupted the legacy media business model: advertising revenue that once flowed to newspapers and television now flows to Meta, TikTok, and YouTube. A law compelling platforms to remove millions of under-16 accounts does not just advance child protection; it also redirects children's attention and advertiser spending back toward traditional media. Online news outlet Crikey noted drily that the ban is "as much a News Corp policy as it is a government policy."
This does not mean parental anxiety is manufactured. Platform design harms are real. If anything good comes of these discussions, it should be that platforms feel compelled to fix the design features behind the legitimate concerns raised by parents and experts, giving parents, minors, and perhaps all users more control over their online experience. The problem is not the concern; it is what gets built from it.
When Australia's leading mental health organizations jointly opposed the ban, arguing it would cut children off from support networks, they were no match for a coordinated campaign across every major masthead. When over 140 scholars signed an open letter calling for a nuanced approach, it went largely unreported. Professor Amanda Third, in her chapter in Springer's The Public Child (2025), calls this "regulatory theatre": the figure of the threatened child deployed to generate political momentum while actual evidence is sidelined. She identifies three "disappearances" in the Australian debate: the private child, children's own voices, and a decade of considered online safety reform already underway.
The result was legislation passed in six days, without a clear definition of social media, that left the eSafety Commissioner twelve months to work out enforcement. Reports quickly surfaced of teenagers circumventing the restrictions using borrowed accounts or AI-generated images. ReachOut's Kids Helpline received nearly 100 contacts in three weeks from teenagers reporting distress at losing their support networks. 95.7% of LGBTQIA+ young people surveyed by Minus18 relied on social media to access friends and emotional support. And as Crikey reported in February 2026, the official position of Australia's own regulator is that it does not yet know if the ban is helping. That inconvenient fact has not reached the governments now using Australia as their template.
Europe: same conditions, different postcodes
In the United Kingdom, News UK, Murdoch's British operation, holds the same commercial position as News Corp Australia: a once-dominant publisher hollowed out by the platforms now being targeted. The Sun's reporting that Starmer was "leaning towards" a ban performed the same momentum-amplification function. Health Secretary Wes Streeting invited Jonathan Haidt to brief officials directly, mirroring an Australian summit that Freedom of Information documents later revealed was designed to build momentum rather than examine evidence.
At the EU level, the question of which European media groups stand to benefit commercially from policies redirecting young audiences away from social platforms has not been asked. The list would include Axel Springer, Bertelsmann, Mondadori, and their equivalents, all facing the same structural advertising displacement that drove News Corp's campaign in Canberra. The European Parliament's November 2025 non-binding resolution calling for a harmonized digital age of 16 by default is now the reference point for the deliberations of the Commission's special panel. Yet the question was not asked there either.
The political framing already signals where the center of gravity lies. In her 2025 State of the Union address, von der Leyen cited Australia as a model and compared social media age limits to restrictions on alcohol and tobacco. The Commission's communications are saturated with the language of parental empowerment: children are subjects to be protected, not rights-holders to be heard. The agenda of the first meeting did include a slot for youth representatives, but their interventions merely opened a seven-hour session structured entirely around the expert panel's framing, not the other way around. That is Professor Third's second "disappearance," that of children's own voices, replicated in Brussels.
A chorus of dissent that must not be buried
What is different about the European moment is the scale of expert opposition already assembled: a broad, cross-disciplinary consensus that the approach being contemplated is the wrong one.
The Council of Europe's Commissioner for Human Rights, Michael O'Flaherty, published a clear rebuke this February: "Banning children's access to social media shifts the responsibility for safety from the platforms that create the environment to the children who navigate it." He called on governments to prioritize binding legal duties, algorithmic transparency, and robust DSA enforcement before reaching for restrictions. The source of harm, he argued, is the platforms' design and incentives, not the children using them.
Eurochild, the network representing children's rights organizations across Europe, put it with equal directness in its February 2026 position paper: "The choice is not simply between a 'ban' and 'no ban'. That framing obscures the real issue." Eurochild does not call for a blanket ban. It calls for a rights-based framework targeting platform business models — attention extraction, profiling, addictive design — rather than the children navigating them. Its core argument: "Age restrictions can never replace regulation or company responsibility."
On the technical side, the evidence is no less categorical. A joint statement signed by 430 security and privacy scientists from 32 countries, including researchers from KU Leuven, ETH Zurich, MIT, TU Darmstadt, and the University of Cambridge, calls for a moratorium on the deployment of age assurance until scientific consensus is reached on both its efficacy and harms. The letter notably documents how age checks are easily circumvented, carry high error rates, and discriminate against minorities, and how mandating verification at scale would require building a global identity infrastructure whose surveillance implications have not been examined. The conclusion is stark: "Deployment is not justified unless it is proven that the benefits greatly outweigh the harms."
In Australia, equivalent voices — the mental health organizations, the academic letter-signers, the children's rights advocates — were drowned out by a coordinated media campaign and a six-day legislative sprint. The question for Europe is whether the same pattern repeats.
The pattern is worth naming
There is a structural logic visible across this cascade: from Australia to France, Spain, Greece, Denmark, Slovenia, the UK, and now Brussels. Media companies that have lost advertising revenue to social platforms have a structural interest in policies that restrict those platforms. Commercial interests and child protection concerns can occupy the same campaign. The question that keeps not being asked is: who benefits, and who decides?
In Australia, the answer to "who decides" turned out to be, in the Prime Minister's own words, News Corp. And Australia itself does not yet know if its ban is working.
The European Commission's Special Panel has an opportunity to do this differently: to ask the prior question, to weigh evidence over momentum, and to treat children as rights-holders rather than subjects to be managed. Its recommendations will carry far more regulatory clout than Canberra's legislation ever did. The March 5 agenda included, as its second afternoon guiding question, what measures "beyond age-related restrictions" might be worth considering. The fact that this question can be asked at all is what’s worth protecting. Australia offers a recent example of what happens when it isn’t.