UK’s Online Safety Push Pits Expectations Versus Reality
Mark Scott / Apr 30, 2026
Mark Scott is a contributing editor at Tech Policy Press. He previously served as a member of Ofcom's Online Information Advisory Committee until February 2026.

Prime Minister Sir Keir Starmer delivers opening remarks ahead of a meeting with senior figures from TikTok, X, Meta and other social media giants at 10 Downing Street, Westminster, on Thursday, April 16, 2026. (Press Association via AP Images)
On May 7, people in England, Wales and Scotland head to the polls in what is expected to be a drubbing for the country’s Labour Party. In Wales, Plaid Cymru and Reform UK are leading in the polls. In Scotland, the Scottish National Party is on track to retain power. And across England, Reform UK and its populist leader Nigel Farage are expected to be the major winners.
Yet it’s online where British policymakers, lawmakers and regulators are starting to fret.
Next week’s nationwide votes represent the first time UK citizens have gone to the polls since the country’s Online Safety Act came fully into force. It represents an early test of British claims that it is leading the way globally in protecting people online — including from potential election interference that may involve AI-generated fakes.
Like the European Union’s Digital Services Act, the British rules aim to curb online harms — including those related to potential election interference — by requiring companies such as Google, TikTok and Meta to be more transparent about how they police their global online platforms.
So far, that has included demands that these social networking giants remove illegal material like terrorist content and child sexual abuse material, as well as lengthy transparency reporting obligations to give outsiders and regulators a clearer view of what is going on within social networks’ virtual walls.
“The Online Safety Act introduces clear duties on tech firms,” Melanie Dawes, the chief executive of Ofcom, the British regulator in charge of implementing the online safety rules, told an audience in March. “They must now assess these risks upfront, and address them effectively.”
Yet there is a growing divide between Ofcom, whose ranks have swelled by more than 500 officials over the last three years to meet the demands of implementing the complex online safety rules, and lawmakers and advocates who say the agency is not doing enough to hold Big Tech accountable.
At its core is a tension over how the country’s Online Safety Act was designed.
The legislation is primarily focused on holding tech companies accountable for their existing trust and safety policies. It also does not give regulators direct power to intervene on specific pieces of potentially problematic content, no matter the clamor for action from advocates or politicians.
Instead, Britain’s online safety rules were designed to understand and deal with so-called systemic risks, or areas of online illegality like terrorist content and hate speech, without becoming a “Ministry of Truth” whenever something bad happens on social media. This distinction has become even more important after US President Donald Trump criticized the rules for illegally hampering Americans’ free speech.
That nuance — in which Ofcom has statutory powers to hold companies accountable for their policies, not individual content decisions — is getting drowned out by British politicians’ eagerness to clap back against what they perceive as failures by these companies to keep locals safe online.
In 2024, for example, online misinformation and hate speech escalated into real-world attacks in the wake of the killing of three young girls in the north of England. Social media users falsely accused members of the UK’s Muslim and migrant communities of instigating the murders. National lawmakers, law enforcement and advocates urged Ofcom to act, even though the country’s Online Safety Act had yet to come into force.
The UK is now considering potential social media bans for children that mirror policies already in place in Australia and increasingly likely to reach the books in other jurisdictions worldwide. Such measures would likely come through amendments to the country’s existing online safety rules, giving Ofcom greater authority to stop minors from accessing social media.
The regulator is already trying to get ahead of the game. In April, it published an update on what platforms already had to do to keep children safe online, including mandatory risk assessments to mitigate potential harm to kids and requirements that force companies to take steps so minors don’t view pornography or other problematic content.
Yet the regulator is also struggling to keep track of its growing list of regulatory powers. In February, Ofcom started an investigation into X amid reports that Elon Musk’s Grok AI chatbot account on the social network had been used to create and share sexualized deepfake images of people, including children. Yet even that probe came with a caveat.
AI chatbots, the agency added, were not under the scope of the country’s online safety rules if they didn’t allow people to interact with each other, were not online search services and did not generate pornographic content.
“We can only take action on online harms covered by the (Online Safety) Act, using the powers we have been granted,” Ofcom added.
Those limitations will be on display around the UK’s upcoming elections.
In an open letter to tech companies, Oliver Griffiths, Ofcom’s online safety director, reminded firms of their obligations to protect British citizens online during the upcoming election period. Those obligations rely almost exclusively on internal corporate checks — not direct regulatory oversight — to stop election-related online harassment, threats and foreign interference from circulating around the May 7 vote.
“The (Online Safety) Act does not explicitly identify misinformation or disinformation as specific harms that need to be addressed,” he wrote. “However, where such content amounts to a relevant offence, or intersects with a type of content set out in the Act that is harmful to children, the duties on providers will apply.”
In truth, the UK’s local elections in England and devolved votes in Wales and Scotland are unlikely to be flooded with reams of problematic content — although would-be voters have already seen some such content in their social media feeds.
So far, Ofcom’s responses to these potential election-related threats have been limited, relying primarily on companies’ own efforts to protect the online world.
That approach is emblematic of the tension between the reality of Ofcom’s current regulatory powers and wider expectations from lawmakers and advocates alike that more needs to be done to hold tech giants accountable.