Grassroots Content Moderation Might Be A Solution to Election Misinformation

Jesús Alvarado / Mar 5, 2024

Jesús Alvarado is a fellow at Tech Policy Press.

A California Super Tuesday polling place on March 3, 2020. Shutterstock

As the United States gears up for another election cycle, the landscape of misinformation and disinformation continues to evolve, presenting complex challenges for platforms, policymakers, journalists and voters.

One notable trend, according to Joshua Scacco, director of the Center for Sustainable Democracy at the University of South Florida, is how social media platforms are shifting their stances on moderating content about US elections. In particular, he said, the reinstatement of accounts like former President Donald Trump's signals a departure from the stringent deplatforming measures adopted in the aftermath of the 2020 election and the attempted insurrection of Jan. 6, 2021.

“In general, the American public was quite supportive of platforms engaging in self-regulatory measures to combat mis- and disinformation, hate speech, threats of violence, those particular types of things that were really part of the national dialogue after January 6,” he said, citing a national survey he and a team of researchers conducted in 2021 at USF’s Cyber Florida. “I think we're in a period where some of those conversations have abated in some ways.”

But content about elections might look different now than it did in 2021, when that survey was conducted. Since then, generative artificial intelligence tools have advanced rapidly: ChatGPT produces text in response to users' questions or prompts, and DALL-E generates images from written descriptions. Put such tools to malicious use, and you have an even larger issue: deepfakes, meaning images, video or audio created with generative AI to mimic a person doing or saying something they didn't.

And voters have already encountered such election disinformation, even outside the social media ecosystem. Many New Hampshire voters, for example, received a deepfake audio robocall purporting to be from President Joe Biden, telling them “your vote makes a difference in November, not this Tuesday,” a reference to the state's primary election held Jan. 23.

In Florida, where Scacco resides, state lawmakers introduced Senate Bill 850 at the start of the year to try to put guardrails in place to curb the misuse of generative AI in political content created and shared by political entities. The bill would require a disclaimer when such actors use the technology; failure to include one would result in a fine and a misdemeanor charge. Lawmakers there, however, have yet to reach consensus on exactly who would be charged if the bill is enacted into law.

Ten other states have introduced bills similar to Florida’s SB 850, with the ultimate goal of providing transparency when AI-generated content or other forms of deepfakes are used in political speech. Five states, though, have already successfully enacted a variation of such a law: California, Michigan, Minnesota, Texas and Washington. Texas was notably the first, criminalizing deepfakes during election season in 2019.

But the big picture here is that political misinformation feeds the extreme polarization the US has been grappling with since the 2016 election cycle. And when this type of content reaches the masses on social media, it exacerbates that polarization, said Claire Wardle, co-director of the Information Futures Lab at Brown University. She said she expects not only continued election misinformation on social media platforms, but also “content [that is] designed to make supporters even more aggressive with one candidate or another … and make divisions wider, etc.”

Like USF’s Scacco, Wardle emphasized the growing threat posed by generative AI in misinformation and political messaging on online platforms, arguing that the use of AI-generated content needs regulation, particularly during election periods. Generative AI tools are a hot topic, and rightfully so: the technology automates mundane tasks for workers and even for some students. But it is hard to ignore the pain points that come with it, including biases and misuse by bad actors. Guardrails on the use of generative AI during elections could help voters avoid an “is this real, or not?” crisis, Wardle said. Voters who can’t decipher what is and isn’t credible information will experience fatigue and disengage from most, if not all, political news, which may erode their trust in institutions and journalists.

A way to mitigate this, she explained, is giving online platforms the Wikipedia treatment. “We have to bring people into this process [of content moderation] and say: how can they be part of talking to one another about manipulation tactics, coming up with trusted information that resonates in their community? We have to get back down to grassroots,” she said, “and I think Wikipedia is an amazing example of that. I think there are ways that we could do this. It's just resource heavy and requires working deeper with communities. It's not an easy fix, so that's why I think people aren't taking that route seriously.”

Hypothetically, this method, though not new, could open the door to a more inclusive way of moderating election misinformation and disinformation. It could, for example, give people from marginalized and underrepresented communities a say in what gets moderated on social media platforms, including surfacing their reasoning in a post’s accompanying community note.

Community notes already exist on X, formerly known as Twitter, but a note becomes visible only after enough contributors from differing viewpoints rate it as helpful. And though those notes may be useful for users on that platform, Scacco said, they only scratch the surface of misinformation and disinformation, especially outside the English language. “Many Hispanic communities in the United States still have ties to family and friends who are back in Central and South America. And they communicate either through public social media applications or through encrypted messaging applications — WhatsApp, Signal, Telegram — that are even much harder to go after, in terms of the sort of regulation of things,” he said.
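To make that visibility mechanism concrete, here is a minimal sketch in Python of the "bridging" idea behind community notes: a note surfaces only when raters who usually disagree with each other both find it helpful. The names, thresholds, and the simple left/right split are illustrative assumptions, not X's actual scoring model.

```python
from dataclasses import dataclass

@dataclass
class Rating:
    leaning: float   # hypothetical rater viewpoint score in [-1.0, 1.0]
    helpful: bool    # did this rater find the note helpful?

def note_is_visible(ratings, min_per_side=3, min_helpful_share=0.8):
    """Crude bridging rule: show a note only if raters on BOTH sides
    of the viewpoint spectrum mostly rated it helpful."""
    left = [r for r in ratings if r.leaning < 0]
    right = [r for r in ratings if r.leaning >= 0]
    if len(left) < min_per_side or len(right) < min_per_side:
        return False  # too few ratings from one side to judge consensus

    def helpful_share(side):
        return sum(r.helpful for r in side) / len(side)

    return (helpful_share(left) >= min_helpful_share
            and helpful_share(right) >= min_helpful_share)

# Broad agreement across the spectrum: the note surfaces.
broad = [Rating(-0.9, True), Rating(-0.5, True), Rating(-0.2, True),
         Rating(0.3, True), Rating(0.6, True), Rating(0.9, True)]
print(note_is_visible(broad))      # True

# One side mostly disagrees: the note stays hidden.
one_sided = broad[:3] + [Rating(0.3, False), Rating(0.6, False), Rating(0.9, True)]
print(note_is_visible(one_sided))  # False
```

X's deployed system estimates rater viewpoints from the rating data itself (via matrix factorization) rather than relying on a hand-labeled left/right split, but the design goal is the same: agreement across perspectives, not raw vote counts, determines what surfaces.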

Social media and other online platforms will continue to present intricate challenges where misinformation and US elections are concerned. As the technology advances, policymakers, platforms and society as a whole must collaborate on robust strategies that safeguard democratic processes and promote digital media literacy. By prioritizing community engagement, fostering healthy skepticism and implementing sensible regulation, leaders can help voters navigate the landscape of online election misinformation and uphold the integrity of democratic discourse.

Authors

Jesús Alvarado
Jesús Alvarado is an audio journalist and is currently a producer for Marketplace Tech, where he focuses his work on tech policy, internet culture, and health technology. Holding a Master of Science in Journalism from the University of Southern California, Alvarado honed his reporting skills during ...
