
We Worked On Election Integrity At Meta. The EU – And All Democracies – Need to Fix the Feed Before It’s Too Late

Matt Motyl, Jeff Allen / Jun 5, 2024

The European Parliament in Brussels, Belgium. Shutterstock

Later this week, voters across Europe will go to the polls in critical elections that will decide control of the European Parliament – and the future of Europe. It is just one of many important elections globally this year: votes have already been cast in India, South Africa, and Mexico, with the UK and US to follow. As data scientists who specialized in election integrity at Meta, we were responsible for finding accounts engaging in behaviors that could interfere with elections, such as those managed by Russia’s Internet Research Agency. We know firsthand what it looks like when the system is blinking red – when a social platform is amplifying disinformation from foreign actors or allowing hate speech that threatens free and fair elections.

Days out from the start of the EU elections – and five months out in the US – we’re there.

In April, an investigation revealed that Facebook and Instagram are letting a well-known and growing network of pro-Kremlin pages push political ads to millions of Europeans with virtually no moderation. At the same time, the company has shut down vital tools used by researchers, journalists, and election observers, flouting new rules under the Digital Services Act (DSA) meant to drive transparency and access. Combined with a hollowing out of election integrity teams at Meta, Twitter, and other platforms, as well as a proliferation of electoral deepfakes and increased efforts by Russia and China to influence 2024 elections, these actions have created a perfect storm: growing risks and declining safeguards.

We’ve seen this before: in the US in 2016, in India over the last several weeks, and in the EU in 2018, despite internal Facebook memos warning executives that changes to the platform's algorithm were generating "misinformation, toxicity, and violent content" around the world. Meta ignored the warnings and chose instead to chase engagement and profits, in part out of fear of angering the political right – even as its own researchers recognized the potential long-term damage to democracies.

By now, the problem is well understood. Fortunately, we know what it takes to fix it.

The key driver of election disinformation – and the main feature of social media platforms that bad actors exploit – is the algorithm. Algorithmic systems are how social media platforms determine what content a user will see. The basic components of recommendation systems on large online platforms are similar: they shape what content is recommended, what shows up in your feeds and searches, and how advertisements are delivered. In plain terms, despite claims to the contrary, you don’t control much of what you see in your feed. The social media company does.

These systems have been weaponized over and over again – exploited by bad actors to sow confusion, target election workers and voters, amplify disinformation, and so on. The fixes are not complicated, and they are well known. First, engagement-based ranking systems are problematic because people are disproportionately likely to engage with more harmful content – content that is divisive or contains misinformation. Second, if accounts aren’t verified as belonging to real people who are who they say they are (as opposed to an intelligence operative in Macedonia masquerading as an American with extreme political views), disinformation, hate speech, and other threats to democratic processes will proliferate.
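To make the first point concrete, here is a minimal, purely illustrative sketch – not Meta’s actual ranking code – of how a scorer optimized only for engagement differs from one that demotes content a classifier predicts to be divisive or misleading. All field names, model scores, and weights here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # hypothetical model output: expected likes/comments/reshares
    predicted_harm: float        # hypothetical classifier score in [0, 1] for divisive/misleading content

def engagement_score(post: Post) -> float:
    # Pure engagement-based ranking: harmful-but-engaging content rises to the top.
    return post.predicted_engagement

def integrity_aware_score(post: Post, harm_penalty: float = 5.0) -> float:
    # Same engagement signal, but content flagged as likely harmful is demoted
    # rather than amplified. The penalty weight is an assumed, tunable value.
    return post.predicted_engagement * (1.0 - post.predicted_harm) - harm_penalty * post.predicted_harm

posts = [
    Post("Inflammatory rumor about election workers", predicted_engagement=9.0, predicted_harm=0.8),
    Post("Polling-place hours from an official source", predicted_engagement=3.0, predicted_harm=0.05),
]

# Engagement-only ranking puts the rumor first; the integrity-aware ranking reverses the order.
print([p.text for p in sorted(posts, key=engagement_score, reverse=True)])
print([p.text for p in sorted(posts, key=integrity_aware_score, reverse=True)])
```

The point of the sketch is not the specific formula but the design choice it encodes: whether predicted harm is ignored by the ranking objective or actively counted against a piece of content.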

Our organization, the Integrity Institute, published a report in February on mitigating algorithmic risks and a series of proposals on what a responsible recommender system around elections looks like. There is a clear roadmap for election integrity. The platforms know this too. In fact, if you went on Facebook or Instagram in the days leading up to and immediately following the 2020 US election, your feed would have looked entirely different. Giving in to pressure from democracy and transparency advocates, and burned by their experience in 2016, Meta (then Facebook) implemented a series of “break glass” measures to improve the integrity of its platforms. In short, it came down to the algorithms: users were shown credible news sources rather than engagement-driven disinformation, and election lies and threats were disallowed and aggressively policed.
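As an illustration of what a “break glass” switch of this kind could look like – again a hypothetical sketch, not Meta’s implementation – a ranking pipeline can carry a flag that, when flipped on around an election, boosts content from vetted news sources and refuses to promote posts flagged as election misinformation:

```python
# Hypothetical "break glass" adjustment layered on top of a base ranking score.
# BREAK_GLASS_ACTIVE would be switched on around an election and off afterward.
BREAK_GLASS_ACTIVE = True

CREDIBLE_SOURCE_BOOST = 2.0          # assumed multiplier for vetted news publishers
ELECTION_MISINFO_SCORE_FLOOR = 0.0   # flagged election misinformation is never promoted

def adjusted_score(base_score: float,
                   from_credible_source: bool,
                   flagged_election_misinfo: bool) -> float:
    if not BREAK_GLASS_ACTIVE:
        return base_score
    if flagged_election_misinfo:
        return ELECTION_MISINFO_SCORE_FLOOR
    return base_score * (CREDIBLE_SOURCE_BOOST if from_credible_source else 1.0)
```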

The dam held just long enough that Meta likely played a meaningful role in safeguarding the 2020 election from threats that had succeeded in prior elections. (Notably, these guardrails were removed after the election, just before the January 6, 2021 insurrection at the US Capitol, which was organized partly on Facebook and other social media platforms.) What this tells us is that social media companies can make elections safer. They just choose not to. We either need to encourage them to do the right thing, or force them. Or both.

In the EU, Commissioner Thierry Breton has rightly announced an investigation into Meta for election disinformation. This is laudable, but investigations alone aren’t enough. Fortunately, Commissioner Breton and the EU have other tools at their disposal. In 2023, the first wave of regulations under the landmark DSA took effect; further requirements kicked in earlier this year.

Under the DSA, one of the most ambitious regulatory regimes for tech companies and online platforms, the EU has extraordinary power and reach to act. In fact, some of the DSA’s most significant requirements are meant to ensure that platforms implement risk mitigation measures and that “systemic risks” to society are minimized. The EU, by law, could demand evidence from platforms about how their algorithms are optimized and the role those algorithms play in the spread of harmful content. While regulators cannot mandate specific mitigation measures, forcing platforms to be honest about the scale of harmful content on their services, and about what is driving its spread (e.g., algorithmic recommendations, or engagement-based classifiers that rank such content higher), can pave the way for accountability.

Based on what we know platforms can do in the context of elections, the Commission should watch closely, demand evidence of sufficient platform action, and require explanations where there isn't any. In crisis situations where platforms do not take sufficient action, Articles 36 and 48 of the DSA may even permit the EU to deem platforms out of compliance and fine them up to 6% of their global revenue. Even at this late hour, these platforms could proactively launch sufficient election-related protections ahead of the EU election and set a model for subsequent elections around the world. And, as the 2020 US elections showed, even a brief period of protection – extending through the post-election period – could have an impact, despite the late date.

Few countries have regulatory power comparable to the DSA, though. In the US and UK, for instance, “safe by default” could and should be the rule in upcoming elections. The US has few protections in place, and no legislation looks set to pass, let alone have an impact, before November. While the UK passed the Online Safety Act last year, it is unclear what effect it will have on election harms. As in the EU, time is running out.

Elections raise the stakes substantially for platforms. They are the most critical time to ensure platforms aren’t algorithmically amplifying false content or other communications meant to stoke violence or delegitimize elections. And they are the most powerful moments to show that we have solutions that work. We can have a social internet designed to help individuals, societies, and democracies thrive. The EU can help make this happen – and show the world that we can choose safer elections, and a better internet.

Authors

Matt Motyl
Matt Motyl is a Resident Fellow of Research and Policy at the Integrity Institute and Senior Advisor to the Psychology of Technology Institute at the University of Southern California’s Neely Center for Ethical Leadership and Decision-Making. Before joining the Integrity Institute and the Neely Cent...
Jeff Allen
Jeff Allen is co-founder and chief science officer at the Integrity Institute. A former physicist and astronomer, Allen left academia for data science in 2013 and has since worked on multiple sides of the internet information ecosystem, including for publishers, platforms, and political o...
