
It’s Time We Set New Standards for How Tech Companies Tackle Online Child Abuse

Julie Inman Grant / Nov 27, 2023

For decades, the tech industry argued that regulation of any sort would risk strangling innovation, and that the best thing governments and regulators could do was stay out of the way.

While this might have sounded like a good idea at one point, the idea that the online industry could and would regulate itself has proved – with hindsight – to be deeply flawed. It’s an ethos that has led to growing safety failures on multiple online fronts – most critically in the global battle to defeat the proliferation of online child sexual abuse material. Policymakers and regulators around the world are increasingly seeking to address this critical issue.

Two years ago, the Australian Parliament put in place tough new legislation that would task the online industry in Australia with drafting mandatory and enforceable codes. In keeping with Australia’s established co-regulatory approach to industry compliance, the codes would remove any doubt or perceived “grey areas” around what the community expects of all online platforms when it comes to dealing with the worst-of-the-worst content on the internet.

The codes would operate under Australia’s new Online Safety Act, cover eight major sectors of the online industry and have global implications for how some of the largest and wealthiest companies in human history tackle this issue. Most importantly, the new laws empowered eSafety to step in with compulsory standards if draft codes were not found to provide appropriate community safeguards.

Earlier this year, eSafety found that six draft codes contained appropriate community safeguards and could be registered. But, despite two years of negotiation and a number of consultations with industry associations, two codes still lacked sufficient industry commitment to protect children.

These two codes were the Relevant Electronic Services Code, which principally covers messaging services, and the Designated Internet Services Code, which covers a range of apps and websites as well as file and photo storage services. Enforceable commitments for these two sectors are important in the fight against the spread of child sexual abuse material. These services are actively used to store and distribute this terrible content. We think it reasonable for these services to deploy technology that identifies known child sexual abuse material.

“Known” child abuse material is material that has already been identified and verified by global child sexual abuse organisations and law enforcement agencies and continues to circulate online.

In June, eSafety announced it would take the decision out of industry’s hands and move to enforceable standards. This November, eSafety began consulting publicly on these draft standards.

So just how big is this problem and why did eSafety need to take this action?

You only need to flip through the latest report from the National Center for Missing and Exploited Children (NCMEC), the US’s centralized reporting system for the online exploitation of children, to get an idea of how serious this issue has become.

In 2022, NCMEC received 32 million reports of child sexual abuse material, including 49.4 million images and 37.7 million videos from tech companies.

These figures don’t represent child abuse material found in shadowy corners of the Dark Web, either; they are reports from mainstream platforms, many of which we all use every day. While these numbers are terrifying, we know they are only the tip of a much larger iceberg lying just beneath the surface. This is because many of the world’s biggest companies simply aren’t reporting it, and worse still, many aren’t even checking for it.

One of the better “detection performers,” Meta – the owner of Facebook, Instagram and WhatsApp – made around 27 million reports of child sexual exploitation and abuse material to NCMEC in 2022. By contrast, Apple, with its billions of handsets and iPads all connected to iCloud, and many using iMessage, reported just 234.

When eSafety explored the “why” last year through our transparency compulsion powers, we found that the company is not making a serious attempt to look for this material or to enable in-service reporting of illegal content.

And therein lies the problem. For too long, some of the biggest companies in the world seem to have made a decision to not turn on the lights for fear of what they might see if they did.

So, what exactly are we asking these companies to do under these standards?

Each draft standard outlines a broad suite of obligations, including requirements to detect and deter unlawful content like child sexual abuse material, put processes in place to deal with reports, and provide tools that empower end-users to stay safe and reduce the risk of this highly harmful content surfacing and being shared online. The standards will also cover “synthetic” child sexual abuse material that might be created using open-source software and generative AI programs.

All pretty reasonable stuff. Many Australians would be forgiven for thinking these companies are already doing some or all of these things – and many of them are.

I want to be clear that eSafety is not attempting to require companies to break end-to-end encryption through these standards or indeed elsewhere, nor do we expect companies to design systemic vulnerabilities or weaknesses into their end-to-end encrypted services.

But operating an end-to-end encrypted service can’t be a free pass to do nothing, either. Tech companies should not be absolved of the legal and moral responsibility of limiting the hosting and sharing of live crime scenes of horrific child sexual abuse.

Many in industry, including some operators of end-to-end encrypted services, are already taking meaningful steps to achieve these important outcomes and they should be commended.

We know people care deeply about their privacy and some have expressed concerns that scanning for “known” child sexual abuse material represents a slippery slope.

But the reality is, one of the world’s most widely-used tools for matching hash ‘fingerprints’ – Microsoft’s PhotoDNA – is not only extremely accurate, with a false positive rate of 1 in 50 billion, but also privacy protecting, as it only matches and flags known child sexual abuse imagery.

It’s important to emphasize this point: PhotoDNA is limited to fingerprinting images to compare with known, previously hashed, child abuse material. The technology doesn’t scan text in emails or messages, or analyze language, syntax, or meaning.
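To make the matching step concrete, the sketch below shows the general shape of hash-list lookup in Python. It is illustrative only and is not PhotoDNA: it uses an ordinary cryptographic hash, which only matches byte-identical files, whereas PhotoDNA computes a perceptual signature that survives resizing and re-encoding. The hash values, function names and file path are assumptions made for the example, not part of any real system.

```python
import hashlib
from pathlib import Path

# Hypothetical hash list of "known" material, of the kind maintained and
# verified by child protection bodies and law enforcement.
# The entry below is a placeholder, not a real hash value.
KNOWN_HASHES = {
    "9f2c0b7e_placeholder_not_a_real_hash",
}

def fingerprint(image_bytes: bytes) -> str:
    """Return a hex digest for an image.

    Illustration only: a cryptographic hash matches identical files
    byte-for-byte, whereas PhotoDNA derives a robust perceptual signature.
    The overall flow is the same: fingerprint the file, then look it up.
    """
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_material(path: Path) -> bool:
    """Compare an uploaded file's fingerprint against the known-hash list.

    Only the fingerprint is inspected; no text, messages or other user
    content is read or analyzed, mirroring the point made above.
    """
    return fingerprint(path.read_bytes()) in KNOWN_HASHES

if __name__ == "__main__":
    upload = Path("uploaded_image.jpg")  # illustrative path
    if upload.exists() and is_known_material(upload):
        print("Match against known-material hash list; escalate for review and reporting.")
    else:
        print("No match; nothing about the file's content is retained.")
```

The design point the example is meant to surface is the one made in the article: the lookup can only ever flag material that has already been identified and verified, which is what makes this approach narrower than general content scanning.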

Many large companies providing online services already take similar steps in other contexts, processing webmail traffic with natural language processing techniques to filter out spam or applying other categorization rules.

There doesn’t seem to be any uproar about this, nor should there be. Privacy, security and safety are three legs of the same stool – all need to be upheld and balanced.

Ultimately, the tech industry needs to do more and do better when it comes to detection and prevention of illegal conduct and content on their platforms. Australians not only expect it, but rightly demand it.

All tech companies have not only a clear corporate responsibility to tackle these crimes playing out on their platforms, but a clear moral obligation, too. eSafety is confident that these draft standards will bring us one step closer to realizing these goals.

Authors

Julie Inman Grant
Australia’s eSafety Commissioner, Julie Inman Grant, leads the world’s first government regulatory agency committed to keeping its citizens safer online. Her career began in Washington DC, working in the US Congress and the non-profit sector before taking on a role at Microsoft.
