Why The UK’s Online Safety Blunder Wouldn’t Survive In The US
Matthew Allaire / Aug 27, 2025
UK Prime Minister Keir Starmer and US President Donald Trump disembark Air Force One at a Royal Air Force base in Scotland on Monday, July 28. (Official White House photo by Daniel Torok)
The United Kingdom's Online Safety Act (OSA) has been called an "enormous step forward" by champions and "suppression of the people" by critics. This sprawling framework promises to shield children from harmful material and give adults new tools to control online content. But despite these noble goals, the OSA effectively hands a government-appointed regulator sweeping power to decide which lawful content can be accessed, by whom, and under what conditions. Such discretion might pass muster in the UK, but would crash into United States constitutional doctrine the moment it tried to cross the pond.
Surface similarities to America's proposed Kids Online Safety Act (KOSA) have led both proponents and detractors to treat them as signs that the two countries are converging on a common framework for addressing online harms. But KOSA and the OSA aren't mere cousins separated by the Atlantic — they're different legislative species entirely.
While one is designed to survive First Amendment scrutiny by regulating design features and business practices rather than online expression, the other treats government speech control as a feature, not a bug.
Freedom of speech
The fundamental divide between the OSA and KOSA lies in their constitutional underpinnings. In the UK, parliamentary sovereignty grants the legislature absolute authority to create any law, making the OSA legally valid simply by virtue of its passage. Article 10 of the European Convention on Human Rights, incorporated into UK law through the Human Rights Act, provides limited constraints through a "proportionality test" requiring that restrictions be prescribed by law and serve legitimate aims, such as child protection. This system enables a centralized regulatory model in which Ofcom wields extensive authority over platform conduct. The government has successfully positioned child protection as justification for unprecedented speech controls, an approach that has found support in a political culture of elastic terrorism laws and polarized child safety debates.
The US operates under a fundamentally different constitutional framework rooted in deep skepticism of government power over speech. The First Amendment to the Constitution places a formidable barrier against government interference in the marketplace of ideas. The Supreme Court has established a strong presumption against regulating speech. Content-based regulations are nearly always subject to strict scrutiny and have a high probability of being struck down in court.
Rather than dictating what content must be removed, KOSA regulates platforms' business practices and technical architecture through content-neutral measures designed to survive a First Amendment challenge. It specifies that the duty of care does not require blocking minors from content they deliberately seek out on their own. Critics argue platforms cannot realistically gauge how features affect users, making compliance impossible without proactive censorship. Yet many platforms already conduct extensive research on psychological effects — not because governments demand it, but because it serves their commercial interest in keeping users engaged and limiting harmful fallout. KOSA complements these efforts by giving them legal force without risking the broad removals feared under the OSA.
Platform classification
Both statutes establish "duty of care" obligations for online platforms but define and operationalize those duties through markedly different legal mechanisms. The OSA applies extraterritorially to any service accessible to UK users and sorts services into tiers. Category 1 would likely capture the social media giants — Facebook, YouTube, TikTok, Instagram — based on a combination of user numbers and functionality, specifically the presence of algorithmic "recommender systems" that amplify content. That categorization immediately subjects them to the most stringent duties, reflecting their outsized influence on public discourse and user experience. Category 2A is intended for smaller search engines like Bing, while Category 2B is expected to cover smaller but still substantial user-to-user platforms that offer private messaging, like Discord.
KOSA creates a two-tier system in which all platforms likely to be used by minors face core safety obligations, but only platforms with more than 10 million monthly active users must provide annual transparency reports and third-party audits. KOSA also includes extensive carve-outs for email providers, business software, and messaging services that the OSA lacks entirely. Critically, KOSA states that nothing in the duty of care "shall be construed to require a covered platform to prevent or preclude a minor from deliberately and independently searching for content." These protections, as well as the high revenue threshold, would likely exempt Wikipedia, which the OSA has already dragged into a legal battle with Ofcom.
Content categories
The OSA establishes explicit content categorization, representing the most comprehensive content-based internet regulation attempted by any democratic government. Sections 61 and 62 mandate that "primary priority" content, such as pornography and self-harm material, be made completely inaccessible to under-18s. "Priority" content, including violence and bullying, requires graduated protection: complete blocking for younger children and restricted access for older minors. Section 121 grants Ofcom power to compel proactive scanning for child sexual exploitation and abuse (CSEA) and terrorism content, a requirement that is often technically impossible to meet without breaking encryption.
KOSA, by contrast, avoids direct content mandates, regulating design features such as infinite scroll, auto-play, and push alerts. Its duty of care requires platforms to assess whether those features expose minors to a select list of harms, including eating disorders, suicidal behaviors, physical violence, and sexual abuse. KOSA clarifies that no part of the bill "shall be construed to require a covered platform to prevent or preclude a minor from deliberately and independently searching for … content." Where the OSA targets specific topics for removal, KOSA targets specific topics for design consideration — shifting the scope of enforcement from content prohibition to corporate responsibility.
Enforcement
Under Sections 11 and 12 of the OSA, platforms must conduct annual risk assessments and implement "proportionate systems and processes" to address identified risks. Failure triggers enforcement action, including fines of up to 18 million British pounds or 10% of global revenue, whichever is greater — a potentially destabilizing risk for small Category 2B services that could deter market entry.
The OSA empowers Ofcom to define evolving standards, demand information, and compel audits through detailed codes of practice. Enforcement turns on adherence to these living codes, not just the statutory text, cementing Ofcom's role less as a traditional consumer protection agency and more as a government-appointed content czar. These mandates force platforms to make nuanced and politically sensitive editorial judgments under constant threat of government intervention. This is precisely the kind of entanglement between government and speech that US constitutional law has sought to prevent for decades.
KOSA makes use of existing consumer protection enforcement, treating violations as unfair practices under the Federal Trade Commission Act. State attorneys general can bring actions for certain violations, but Congress drew a bright line: they cannot bootstrap federal duty of care violations into state liability.
Age verification
The issue of age verification highlights the tension between user safety and privacy, and here too the bills take starkly different paths. Under the OSA, platforms must use age-checking technologies ranging from simple age declarations to intrusive verification requiring government ID or biometric scanning. Ofcom's 2024 Code of Practice specifies that platforms must be "highly effective" at determining users' ages.
KOSA employs a "knowledge" standard to avoid constitutional and privacy pitfalls, expressly stating that platforms are not required to "implement age gating or age verification functionality." This standard builds on the Children's Online Privacy Protection Act (COPPA) of 1998, which takes a similar approach to regulating data collection from children under 13. KOSA extends the model to minors under 17 but, crucially, does so without explicitly mandating new data collection that would trigger a First Amendment challenge. The approach attempts to thread the needle by holding companies accountable for what they know without forcing surveillance that might stifle user expression.
Two bills, two traditions
It’s tempting to see the UK’s Online Safety Act and the US’s Kids Online Safety Act as parts of the same trend, but they spring from very different systems. The OSA empowers Ofcom to regulate categories of lawful speech and mandate access controls, a model that fits within parliamentary sovereignty but would collapse under US First Amendment doctrine. KOSA, by contrast, is essentially enforced through consumer protection law, targeting platform design choices and business practices, not speech itself.
For those worried that Washington is importing Westminster’s model, the reality is simpler: these are not parallel steps toward “Big Brother,” but divergent approaches shaped by constitutional limits. They serve as a reminder that in the US, online safety debates are enabled by our First Amendment, not resolved at its expense.