
Concerns Mount Over Social Media and 2024 Elections

Gabby Miller / Sep 28, 2023

Gabby Miller is a staff writer at Tech Policy Press.

“I’m worried about the fact that in 2024, platforms will have fewer resources in place than they did in 2022 or in 2018, and that what we’re going to see is platforms, again, asleep at the wheel,” Yoel Roth, former head of Twitter’s Trust and Safety team, said earlier this week on a panel hosted by the UCLA School of Law. The event focused on how platforms should handle election speech and disinformation heading into an election year of globally historic proportions: at least 65 national-level elections will take place across more than 50 countries in 2024, including in the United States, Ukraine, Slovakia, and Taiwan, as well as in the European Union.

These elections are scheduled amid heightened political and technological uncertainty that could lead to “a lot of chaos,” Katie Harbath said on the same UCLA panel. Harbath, a former public policy director for global elections at Meta, pointed out that platforms like TikTok, Discord, and Twitch are building out new tools to handle election disinformation even as other major platforms adjust their policies. All of this is happening amid many unknowns about how artificial intelligence may further complicate platform governance.

Another variable is the extent to which generative AI will disrupt political discourse. It is already clear that the rapidly developing technology has made it easier for bad actors and political opponents to produce increasingly convincing “deepfake” images and videos at low cost. The US Federal Election Commission and Congress have both expressed a desire to crack down on the use of deepfakes in political ads, but the likelihood of federal legislation passing before the 2024 presidential election is slim, though not impossible. (States like California and Texas have banned the use of deepfakes in state-level elections, but those laws have been criticized as difficult to enforce and have raised First Amendment concerns.)

And not all governments can or should be trusted with regulating these technologies, especially those repressive regimes engaged in “draconian censorship under the guise of countering disinformation.” That’s according to a new framework released Wednesday called “Democracy by Design,” which aims to safeguard freedom of expression and election integrity through a “content-agnostic” approach that prioritizes product design and policy rather than content moderation whack-a-mole. The framework makes a three-pronged appeal to Big Tech platforms, with recommendations for bolstering resilience, countering election manipulation, and leaving “paper trails” that promote transparency.

So far, a coalition of ten civil society groups, including Accountable Tech, the Center for American Progress (CAP), and the Electronic Privacy Information Center (EPIC), has signed onto the framework. To directly counter election manipulation, the coalition proposes prohibiting all use of generative AI or manipulated media to depict election irregularities, misrepresent public figures, or micro-target voters with ads generated using personal data, as well as requiring strong disclosure standards for any political ads that feature AI-generated content. Algorithmic systems that dictate users’ feeds could also become “opt-in” during election seasons, the coalition suggests.

The civil society groups also make clear the role Big Tech plays in either strengthening or undermining democracies. The vulnerabilities on social media often stem from platform architecture, the coalition says, meaning that the very designs that maximize engagement and promote a frictionless user experience can also “serve to warp discourse and undermine democracy.” According to the proposal, soft interventions can mitigate these threats, including implementing “virality circuit breakers,” restricting “rampant resharing” during election season, and creating a clearly defined strike system. Most platforms already use strike systems, but the coalition wants tech companies to open them up for scrutiny.

However, it’s not just a platform’s design and content moderation policies that will impact the upcoming elections. How platforms comport themselves will also unfold against the backdrop of a war waged mainly by the political right to pressure tech companies into moderating less speech online.

The Center for Democracy and Technology (CDT) released a report last week detailing how economic, technological, and political trends are challenging efforts to counter election disinformation in the US. The report, titled “Seismic Shifts,” drew on interviews with more than thirty tech company workers, independent researchers, and advocates to explore the growing challenges they face in their day-to-day work – ranging from digital and physical harassment to congressional subpoenas and litigation – and recommended steps leaders of counter-disinformation initiatives should take to weather the storm. These steps include pivoting to year-round harm reduction strategies like pre-bunking and focusing less on individual pieces of content and more on mitigating the impact of disinformation superspreaders, among others.

What happened to Yoel Roth after he resigned from Twitter is one of the most high-profile examples of this strain of coordinated harassment. In a recent op-ed for the New York Times, he characterized the barrage of attacks he faced from X (formerly Twitter) owner Elon Musk, former President Donald Trump, Fox News, and others not as personal vindictiveness or ‘cancel culture,’ but rather as a coordinated strategy to make platforms reluctant to make controversial moderation decisions for fear of partisan attacks against a company and its employees. (Since the launch of this online assault, which included Musk baselessly claiming Roth condoned pedophilia, Roth has had to move repeatedly – and at one point hired armed guards to protect his home – following physical threats to him and his family.)

More common, though, are attacks against rank-and-file researchers who have drawn the ire of some of the world’s largest billionaire-owned platforms and of elected officials. This summer, Musk’s X filed a lawsuit against the Center for Countering Digital Hate (CCDH), accusing the nonprofit of misusing X data for research showing how hateful content has increased under Musk’s ownership. Shortly after, Musk threatened the Anti-Defamation League (ADL) with legal action over its campaigns pressuring advertisers to leave the platform due to rampant hate speech and antisemitism under his leadership. Then there was the “Twitter Files,” Musk’s shoddy attempt to expose Twitter’s “liberal bias,” in which he handed over internal documents to hand-picked writers who produced reports with few meaningful new revelations and several factual errors. And that’s just a sliver of what’s happening over at X.

Rep. Jim Jordan (R-OH), a Freedom Caucus conservative who’s been characterized as Musk’s attack dog, and his ilk are escalating their campaigns against disinformation researchers. Last week, The Washington Post reported that academics, universities, and government agencies are buckling under the weight of legal and regulatory threats that Republican lawmakers and state governments are systematically launching against them, resulting in the overhaul or dismantling of research programs aimed at countering online misinformation.

According to the CDT’s “Seismic Shifts” report, the courts may also permanently restrain certain government communications with platforms and researchers. Take Missouri v. Biden, currently pending: the lawsuit accuses the White House and federal agencies of colluding with tech companies throughout the pandemic to remove disfavored content and stifle free speech on their platforms, and it could end with the courts restricting communications between the Biden administration and social media platforms regarding content.

But, there are indications that researchers intend to carry on studying these issues. Kate Starbird, co-founder of the University of Washington’s Center for an Informed Public, recently wrote in Lawfare that despite some of the Center’s previous election projects becoming the focus of online conspiracy theories, lawsuits, and congressional investigations that “grossly mischaracterize” its work, it still has dozens of researchers working to identify the harms of misinformation and election manipulation at scale.

And despite mass layoffs across the tech sector that have gutted many platforms’ trust and safety teams — including Musk reportedly halving X’s Election Integrity Team on Wednesday despite previous promises to expand it — Harbath finds some reassurance in the employees who do remain. “I don’t think we should forget that there are people that are trying to do this work with the resources that they have, and we shouldn’t think that this is an all-or-nothing thing,” Harbath said at the UCLA event. “Panic responsibly as we go into all of this.”
