
The One Simple Trick to Measuring Abuse in Tech’s $440 Billion Ads Business

Rob Leathern / Jun 5, 2024

When it comes to understanding abuse in the enormously profitable online ads market, large tech platforms currently grade their own homework. Today, all we get from the platforms are reports touting how many millions of violating ads they take down, how many thousands of reviewers they have, or how much money they’ve spent on platform safety. These are as non-comparable as mattress prices, and are little more than media talking points.

And yet there is hope! Regulators in the EU and beyond now require certain basic ad transparency measures from tech platforms. By going a step further and requiring regular disclosures of a sizable random sample of ad data, regulators could empower third parties to compare platforms’ progress against one another and against clear and understandable baselines.

Ad transparency has come a long way, but has further to go

For four years until the end of 2020, I led product management for business integrity at Facebook (now Meta). In mid-2018, my team and I launched the world’s first ad transparency suite, including an archive of political ads and the ability to see all ads running on Facebook and Instagram in the company’s Ad Library. These products were shipped before any global regulations required ad transparency or advertiser verification. Such regulation did eventually arrive, most notably Europe’s Digital Services Act (DSA), which is now in effect and places particular requirements on very large platforms with over 45 million monthly active users in the EU. The DSA’s Article 39 requires these platforms to provide a publicly accessible ad repository with information about all advertisements served on the platform.

Laws like the DSA that aim to hold these companies accountable are important. Advertising funds the operations of the tech giants, and thus a lot of the technology we use every day. In the last 12 months alone, advertisers spent an incredible $440 billion across just four advertising platforms: Google, Meta, TikTok and Amazon. A good deal of this ad revenue helps small and new businesses reach consumers across the globe with targeted ads, putting them on an even footing with large corporations.

But the same lack of friction that lets anyone quickly get started buying ads with only a valid credit card also means there are few barriers to entry for scammers. Snake oil salesmen can quickly find a ready audience for fake products, or cheat the elderly out of their savings with nonsensical crypto schemes promoted by celebrity deepfakes. Elon Musk clones claiming people never need to work again thanks to quantum trading systems? Ads promoting questionable diet supplements, drugs or free iPhones? Worse still, platform ad algorithms optimize on an advertiser’s behalf in real time and help these scam ads find ready targets without the bad actors having to do much.

Despite possible fines for non-compliance with the DSA of up to 6% of annual global revenue, the current versions of these platforms’ ad transparency tools are far from useful in the EU, let alone elsewhere. In April, Mozilla released a detailed study that concluded: “None of the ad transparency tools created by 11 of the world's largest tech companies to aid watchdogs in monitoring advertising are operating as effectively as needed.” I’m not sure, however, that this report gets at the most important changes we need in these transparency rules.

With UK Prime Minister Rishi Sunak himself getting deepfaked in scam ads, his government has convinced Amazon, eBay, Meta, Google, Microsoft and TikTok (among others) to sign the Online Fraud Charter, a set of voluntary mechanisms to reduce the prevalence of scam ads. And yet no one really has any systematic estimate of how many people are getting scammed daily in the UK via each of these platforms.

When I was at Meta, we built automated machine learning and human review workflows to understand images and text at scale in order to stop scammers and bad actors trying to evade ads enforcement, as did our peers elsewhere. More recently, I have experimented with new generative AI tools that could be useful for this task. Many of these huge advertising companies, including Google and Meta, are also spending billions building generative AI systems, but have yet to explain in any detail how these might improve protection for users through better ads policy enforcement. We hear that “enforcement is never perfect,” but we have nothing but anecdotes to judge just how not-perfect their mechanisms are.
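To give a sense of what such an experiment can look like, here is a minimal sketch of using an off-the-shelf generative AI model to flag likely scam ad copy. This is my own illustration, not any platform’s production system; the model name, the rubric, and the one-word output convention are assumptions that a real reviewer pipeline would replace with a publicly documented methodology.

```python
# Minimal sketch: using an off-the-shelf LLM to flag likely scam ad copy.
# Illustration only; model name, rubric, and output format are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

POLICY_RUBRIC = (
    "You review ad text for likely policy violations: get-rich-quick schemes, "
    "celebrity-endorsed crypto scams, miracle cures, and fake giveaways. "
    "Answer with exactly one word: VIOLATING or OK."
)

def flag_ad_text(ad_text: str) -> bool:
    """Return True if the model judges the ad text to violate the rubric."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # assumption: any capable chat model works here
        temperature=0,         # keep the verdict as repeatable as possible
        messages=[
            {"role": "system", "content": POLICY_RUBRIC},
            {"role": "user", "content": ad_text},
        ],
    )
    verdict = response.choices[0].message.content.strip().upper()
    return verdict.startswith("VIOLATING")

if __name__ == "__main__":
    print(flag_ad_text(
        "Elon Musk's quantum trading system: never work again, deposit $250 today!"
    ))
```

Nothing here is exotic: the point is that a capable model plus a clear, published rubric can produce a consistent verdict on an ad, which is exactly the ingredient an outside measurement effort would need.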

We need to know the denominator

This is the fundamental problem. We have no metrics and no unbiased way to judge the progress platforms are making. While ad libraries let researchers and users find specific ads or advertisers, there is no way to get a sense of the overall ecosystem of potentially abusive advertising. The best way to change that would be to require these companies to make a daily random sample of their ads available to researchers and nonprofits.

Such a sample should be provided by country, and large enough for a thorough and statistically representative country-level analysis to be performed at least monthly. This would allow third parties to assess how many scams or violating ads are running on these platforms, using their own clear and publicly explained methodologies. The platforms could presumably provide this sample fairly easily: they already have the infrastructure built for public ad repositories, and would only need to expose an appropriately unrestricted endpoint and the necessary set of identifiers. Some of my former colleagues have shown how to use generative AI to detect content policy violations, so researchers could measure and score each platform’s effectiveness on enforcement without needing dozens or hundreds of humans to look at these ads.
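As a sketch of the measurement side, the snippet below shows how a research group might turn one day’s random sample into a comparable prevalence metric with a confidence interval. The Ad record, the classify_ad placeholder (which could be a generative AI check like the one sketched above, or a human review workflow), and the field names are all assumptions; the Wilson score interval itself is standard statistics.

```python
import math
from dataclasses import dataclass

@dataclass
class Ad:
    ad_id: str
    country: str
    text: str

def classify_ad(ad: Ad) -> bool:
    """Placeholder: True if the ad violates policy.

    Could be an LLM check like the one sketched earlier, a human review
    workflow, or a mix of both, as long as the rubric is public.
    """
    raise NotImplementedError("plug in your own model or review process")

def wilson_interval(violations: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for the violation rate in a random sample."""
    if n == 0:
        return (0.0, 0.0)
    p = violations / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return (center - margin, center + margin)

def score_platform(daily_sample: list[Ad]) -> dict:
    """Turn one day's random sample into a comparable prevalence estimate."""
    n = len(daily_sample)
    violations = sum(1 for ad in daily_sample if classify_ad(ad))
    low, high = wilson_interval(violations, n)
    return {
        "sample_size": n,
        "violating_ads": violations,
        "violation_rate": violations / n if n else 0.0,
        "ci_95": (low, high),
    }
```

A back-of-the-envelope calculation suggests the required sample is not enormous: estimating a violation rate of around 1% to within roughly ±0.2 percentage points at 95% confidence takes on the order of 10,000 ads per country per day, a tiny fraction of what these platforms serve.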

If even a single academic-nonprofit collaboration could provide such analysis, it would mean that we’d have comparable and unbiased measurements of how well each platform is doing at keeping people safe. I believe that with a results-based “scoreboard,” platforms could improve public perception and accountability in an objective way (instead of sharing figures that may sound impressive but do not capture how well they are doing). The people doing this work at these companies would then be able to point to their ranking to justify the right level of investment, and new companies would have clear benchmarks to aim for.

Ads must meet a higher standard

Much has been said about free speech vs. content moderation, but smart commentators know that the bar needs to be higher for content that platforms algorithmically promote, or get paid to promote. Advertising is both! And it’s not just users who suffer when bad ads proliferate: small businesses all over the world are also hurt when cheaters bid up advertising prices.

In the absence of a robust way to analyze paid speech, how could the public possibly know if these companies spend enough of that $440 billion in advertising revenue to protect the billions of users and millions of businesses on their platforms? A robust random sample that smart observers can systematically analyze – with the help of AI – will get us there pretty quickly.

Authors

Rob Leathern
Rob Leathern previously led the Business Integrity product team at Meta, and was VP Product for Privacy & Security at Google. His opinions about online advertising, AI safety and privacy appear in various media including CNBC, the Financial Times, WSJ and the Economist.
