A Guide to Social Media Moderation Policies for the Post-Election Period

Jordan Kraemer, Tim Bernard, Diane Chang, Renée DiResta, Justin Hendrix, Gabby Miller / Nov 2, 2024

January 6, 2021: Supporters of former President Donald Trump marching to Capitol Hill in Washington DC.

The final ballots in the 2024 US presidential election will be cast on Tuesday. The post-election period may be volatile, particularly if the outcome is disputed. Social media platforms will be scrutinized for their handling of mis- and disinformation, false claims of victory, and potential incitements to violence.

In order to empower observers to evaluate platform performance, advocate for stronger standards where necessary, and ensure that social media serves the public interest in the post-election period, we conducted a review of relevant policies at major platforms, including X, Meta (Facebook and Instagram), YouTube, and TikTok. What follows are our findings and recommendations.

Possibilities for the post-election period

Polling indicates that the 2024 US presidential race is in a dead heat going into its final days. Given projections that swing states such as Pennsylvania and Wisconsin may take days to deliver results after November 5, there is a significant possibility that the post-election period will be volatile. Republican presidential nominee and former President Donald Trump and his GOP allies have already challenged the election’s legitimacy, and in the event of a close victory for Vice President Kamala Harris, they are expected to dispute its outcome.

As tools for shaping public opinion and political organizing, social media platforms played a substantial role in enabling the “Stop the Steal” movement that ultimately engaged in violence at the US Capitol building in Washington, DC, on January 6, 2021. Platforms did make efforts to moderate viral election rumors in 2020, and they took some steps to address premature claims of victory. Yet based on its comparative analysis of social media platform preparedness for the election, the Institute for Strategic Dialogue (ISD) concluded that although political figures were to blame for the spread of false claims about the 2020 election, “platforms also played a role in disseminating false claims about the electoral system by acting too late to implement policy changes or take enforcement action to ‘stop the spread’” of efforts to delegitimize the outcome.

Will platforms face similar challenges if there is unrest in the coming weeks? Much will depend on their current policies and how they are enforced. In addition to the ISD assessment cited above, multiple external analyses and journalistic reports suggest platforms have adopted a less aggressive posture on election integrity. This is partly a result of political pressure: multiple platforms were subpoenaed by the House Judiciary Select Subcommittee on the Weaponization of the Federal Government, which disparages content moderation as anti-conservative censorship.

And then there is X (formerly Twitter). Its new owner, Elon Musk, has repeatedly boosted demonstrably false claims about the 2020 election, and election officials are reportedly struggling to counter his efforts to raise doubts about the integrity of the 2024 election. With the largest account on the platform, Musk is able to draw widespread attention to false and misleading claims. Meanwhile, his preferred solution to misinformation on X, Community Notes, appears to be struggling to keep up. Even in “easy” content moderation cases, like inauthentic foreign interference, X has seemingly decreased its efforts to disrupt manipulative networks.

Three key scenarios to watch

There are a number of distinctly vulnerable moments and events during the post-election period. A resource produced by the nonprofit Protect Democracy and the consultancy Anchor Change urges focus on situations “where delays or uncertainty will escalate the potential for confusion and violence, especially directed at election workers and administration sites.” We focus on three key scenarios in which the posture of the platforms may matter most.

  • The first concerns premature claims of victory. Consider a case in which polls have closed and votes are still being counted, and a prominent candidate prematurely declares victory, as occurred in 2020. The announcement, shared widely across platforms, is picked up by major partisan influencers, generating a massive surge of engagement. Supporters celebrate, but opponents grow suspicious; their candidate has not conceded. This leads to confusion about the actual status of the race, and rumors spread rapidly. The candidates accuse each other of stealing the race. Officials attempt to counter the narrative with accurate information, but the volatile environment increases distrust and destabilizes public confidence in the eventual outcome. Many election rumors already accuse immigrants and Jews of “rigging” elections or voting fraudulently; marginalized groups will likely be targets of further blame and scapegoating.
  • The second is a campaign to delegitimize the outcome. In the event that Donald Trump loses the election, it is almost certain that he will dispute its outcome. A vast legal and political machinery is in place to challenge election results, and it appears to be more entrenched and prepared than in 2020. New generative AI tools make it easy to create manipulated media at greater volume, such as synthetic “leaked” audio that purportedly captures admissions of wrongdoing, which could be used to advance claims of voter fraud. Given that a significant percentage of the voting public believes the 2020 election was stolen and is primed to believe that 2024 may be rigged, there is a substantial risk that social media platforms may be overwhelmed with false claims. They may struggle to enforce synthetic media policies in this scenario as well, especially as their current rules are already hard to enforce against manipulated images and video.
  • And, finally, there is the biggest concern of all: violence. Imagine this possible scenario: as results slowly roll in, dissatisfied people begin to spread rumors of widespread election fraud. Tensions rise as unverified claims of ballot tampering, destroyed ballots, and stolen votes go viral on platforms like X and Telegram. The losing candidate fans the flames. Groups of enraged supporters, convinced the election is being stolen and possibly egged on by extremist groups, organize rapidly through social media, calling for action. As frustration builds, small protests coalesce into violent confrontations between poll workers and angry partisans. There are, of course, a range of other possibilities, from targeted attacks on vote-counting facilities in key swing states such as Pennsylvania to mass demonstrations around key transition events, as occurred most prominently on January 6, 2021.

Assessing platform preparedness for the post-election period

Are platforms prepared to address these scenarios in 2024? Since the 2022 midterm elections, platforms have made significant revisions to their election misinformation policies. Some of these changes are laudable: following very serious threats against election workers during and after the 2020 US presidential election, most platforms now specifically prohibit such threats. If enforced in real time, these policies could reduce violence against election workers, especially when attacks are preceded by online threats.

Three of the major platforms whose policies we reviewed (YouTube, X/Twitter, and Facebook) now have more circumscribed policies regarding election misinformation. These changes are worrying because they weaken platform prohibitions against the types of misinformation that can serve to delegitimize election outcomes and contribute to violence. However, in all cases, some restrictions remain in place against posting false or misleading information, generally tracking US law. These fall into the following categories:

  • Misinformation about the time, manner, or place of voting
  • Intimidation of voters or election workers
  • Encouraging violence or harassment

These rules, however, do not explicitly cover false or premature claims of electoral victory in user-generated content unless they constitute procedural interference, incitements to violence, or harassment (although Facebook has committed to using fact-checkers and to contextualizing and demoting viral misinformation in the newsfeed). In the first scenario, if a campaign declares victory before all votes are counted, supporters or bad actors could spread the false story on social media and undermine public confidence in the eventual outcome. It’s unclear what actions platforms will take if an account makes a premature claim of victory.

Despite the violence in 2021, X, Facebook, and YouTube have made no commitments to address rumors of election fraud comprehensively. Candidates and their supporters will be allowed to stoke fears of ballot tampering, vote stealing, or other interference. Although these platforms prohibit calls for harassment and violence, they have regularly overlooked implicit exhortations, such as when a candidate or influential figure directs their followers to target critics, a form of “stochastic harassment.” If 2020 is any guide, violent extremist groups may coordinate online harassment and offline attacks through fringe platforms and messaging applications that do not moderate content.

All the platforms we reviewed prohibit coordinated inauthentic or deceptive behavior, a tactic often ascribed to nation-state propagandists. They could apply this policy against networks of fake or misleading accounts operated by election deniers. Disrupting these networks is difficult to automate, however, and requires dedicated time and resources that platforms in many cases have not committed. Many campaigns to spread false or misleading information also do not meet platform criteria for “inauthentic” behavior when carried out by genuine or verified accounts (even though verified accounts should be held to higher standards, as they often have greater reach or credibility). Such collective activity from real accounts could accelerate the spread of false information about electoral victories, which platforms will be unlikely to stop.
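To make concrete why this work is hard to automate, the following is a minimal, hypothetical sketch of the kind of first-pass heuristic a detection team might start from: flag clusters of accounts that push near-identical text within a short time window. The record shape, function names (flag_coordinated_groups, normalize), and thresholds are illustrative assumptions for this sketch, not any platform’s actual system.

```python
# Hypothetical illustration only: a crude first-pass heuristic for surfacing
# possible coordinated amplification. Record shape (account_id, timestamp, text),
# thresholds, and names are assumptions for this sketch, not any platform's system.
from collections import defaultdict
from datetime import timedelta


def normalize(text: str) -> str:
    """Crude text normalization: lowercase and collapse whitespace."""
    return " ".join(text.lower().split())


def flag_coordinated_groups(posts, window=timedelta(minutes=10), min_accounts=20):
    """Flag near-identical texts pushed by many distinct accounts in a short window.

    `posts` is an iterable of (account_id, timestamp, text) tuples, where
    timestamp is a datetime. Returns (text, accounts) pairs for human review.
    """
    by_text = defaultdict(list)
    for account_id, timestamp, text in posts:
        by_text[normalize(text)].append((timestamp, account_id))

    flagged = []
    for text, items in by_text.items():
        items.sort()  # chronological order
        for i in range(len(items)):
            accounts = set()
            j = i
            # Count distinct accounts posting this text within the time window.
            while j < len(items) and items[j][0] - items[i][0] <= window:
                accounts.add(items[j][1])
                j += 1
            if len(accounts) >= min_accounts:
                flagged.append((text, sorted(accounts)))
                break  # one flag per text is enough to queue it for review
    return flagged
```

Note that nothing in this heuristic distinguishes fake accounts from authentic ones, and determined operators can vary wording and timing to evade it, which is why meaningful disruption depends on sustained investigative resources rather than automation alone.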

Platforms’ rules are further undercut when they apply these policies selectively, such as exempting high-profile accounts for the sake of “newsworthiness.” This vague category can apply to the pronouncements of electoral candidates, party officials, allies, or staff, as well as to other political figures and influencers. Notably, X lowered its threshold for considering an account newsworthy in 2023, expanding the number of accounts that can be exempted from its policies. Expanding the newsworthiness exception raises the barriers to acting on problematic material related to the first and second scenarios. For instance, if Trump declares early victory or disputes the election results, his X account and likely his Facebook account will not be restricted or sanctioned, nor will those of his high-profile supporters or surrogates. And, of course, no action will be taken on posts referencing false claims about the outcome of the 2020 election (or the subsequent attack on the Capitol to prevent its certification), even if those messages serve as a predicate for claims in 2024. These exceptions may permit the propagation of deceitful rumors used to shore up fraudulent legal cases.

Of the four platforms we reviewed, only TikTok has policies against most forms of election misinformation, relying on a network of fact-checkers. When the truth is still uncertain, TikTok will make relevant posts ineligible for the all-important For You Feed until it has a good basis on which to greenlight or remove the content. TikTok may be a more reliable site for election information as a result, although, for many Americans, it is less central to political discussion.

Advertising policies are another area that deserves scrutiny in the post-election period. Here, platforms are generally much stricter in the period leading up to and including Election Day, with rules against:

  • “Ads that ... call into question the legitimacy of the upcoming US election, or contain premature claims of victory.” (Facebook/Meta)
  • “Making claims that are demonstrably false and could significantly undermine participation or trust in an electoral or democratic process.” (YouTube/Google)
  • “False or misleading information intended to undermine public confidence in an election.” (X) (Caveat: highly partisan media may be exempt from this policy.)

TikTok, more cautious once again, prohibits all political ads (though evidence suggests this prohibition is poorly enforced). It should also be noted that Google announced that it will pause election-related advertising after polls close, and Meta announced that it will not allow any new political or issue advertisements to be published in the week leading up to the election. It is currently unclear at what point platforms will end these advertising pauses. If they do so prematurely, while election results are still uncertain or post-election rumors persist, ads could be used to further spread rumors and misleading claims.

It’s important to remember that platforms have wide latitude to make whatever content moderation decisions they regard as appropriate to the circumstances, no matter what their policies may say. It’s also important to note that social media is not the sole cause of political violence, and that many forms of media and digital communications tools and platforms play a role, as was the case in the events that led up to January 6, 2021. But the major social media platforms we reviewed here, because of their size, the incentives they create for political figures, users, and the broader media ecosystem, and their policy commitments, are worthy of close scrutiny.

The chart below is presented as a resource to help observers evaluate the performance of the major platforms, and whether they are delivering against their stated policies, should any version or combination of these scenarios unfold during the post-election period.

Additional recommendations to address potential instability and violence after the election

In a recent post, technology policy experts Katie Harbath, a former Facebook public policy director, and Nicole Schneidman laid out practical advice for platforms on how to approach a potentially volatile post-election period, including “establishing clear escalation processes to navigate fast-evolving and uncertain scenarios” and engaging with external stakeholders. And in the newsletter Everything in Moderation, trust and safety expert Alice Hunsberger urges platforms to make sure frontline moderation teams are well-resourced and supported. We offer these additional recommendations to prepare.

  • Clearly and visibly address premature claims of victory and false election claims. Platforms should reinstate and enforce rules against false allegations of past fraud and unfounded claims of early victory, for example by labeling false election claims as they did in 2020, especially for high-profile accounts. Platforms should stringently enforce these policies across all groups, pages, and accounts.
  • Prohibit and prevent livestreaming of violence and material intended to incite further violence. If violence does erupt following the election, platforms must ensure it cannot easily spread or be promoted online. Platforms must be prepared to interrupt attempts to livestream violent content that seeks to recruit others, glorify or normalize violence, or spread graphic content for shock value. They can implement the following safety features and checks to curb the spread of election violence:
    • Limit who can use livestreaming features without additional safeguards during and after the election. Platforms should prohibit livestreaming by entities they categorize as “dangerous organizations or individuals” who advocate violence; restrict livestreaming features to users who meet certain benchmarks, such as a high minimum number of followers (YouTube restricted who could livestream after October 7, 2023, to channels with at least 1,000 subscribers, then later lowered that threshold to just 50); and ensure users who can livestream without safeguards have been verified in some manner.
    • Institute and enforce stricter policies against accounts that livestream violence. Amazon’s Twitch announced in 2020 that it would create an external advisory council to address safety issues on the site but disbanded the group in 2024 after making only one public update, in September 2023. Meta’s Facebook announced a “one strike” policy for its Live feature in May 2019, promising to restrict accounts after a single violation of its most serious policies, but in 2023 ADL could find no documentation of this policy in Meta’s rules.
    • Make it difficult to livestream violence unchecked. Platforms should regularly prompt streamers with pop-ups to continue, making it difficult to stream while carrying out violence; add broadcast delays for unverified users, so moderators or automated tools can review and take down violative content (a minimal sketch of such a delay buffer follows this list); and block links to and from livestreams on websites that allow violent extremist content.
  • Continue or reinstate pauses on political advertising while there are signs of escalating violence and persistent election fraud claims, but do not wait until violence erupts. Following the November 2020 general election, Google and Facebook lifted their ad pauses to allow ads to run during Georgia’s runoff election. Facebook paused political ads once again immediately after the runoff, on January 6, 2021, but Google did not reinstate its pause until after the insurrection at the US Capitol.
  • Be prepared to provide information when the election’s status is unclear. If results remain undetermined and widespread rumors and confusion spread after Election Day, platforms should indicate on highly visible surfaces that the status of the results is still unclear and direct users to authoritative sources of information.
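As a companion to the broadcast-delay recommendation above, the following is a minimal, hypothetical sketch of how a delay buffer could create a review window for livestreams from unverified users: frames are held for a fixed interval and released only if an automated check (or, in practice, a human moderator) has not halted the stream. The class and parameter names, and the classifier callable, are illustrative assumptions, not any platform’s actual implementation.

```python
# Hypothetical illustration only: a broadcast delay buffer for livestreams from
# unverified users. Names and the `classifier` callable are assumptions for this
# sketch, not any platform's actual implementation.
import time
from collections import deque


class DelayedBroadcast:
    def __init__(self, delay_seconds: float, classifier):
        self.delay = delay_seconds
        self.classifier = classifier  # callable: frame -> True if violative
        self.buffer = deque()         # holds (arrival_time, frame) pairs
        self.halted = False

    def ingest(self, frame) -> None:
        """Accept a frame from the streamer and hold it for the delay period."""
        if not self.halted:
            self.buffer.append((time.monotonic(), frame))

    def release_due_frames(self) -> list:
        """Emit frames whose delay has elapsed, unless review halts the stream."""
        released = []
        now = time.monotonic()
        while self.buffer and now - self.buffer[0][0] >= self.delay:
            _, frame = self.buffer.popleft()
            if self.classifier(frame):
                # Violative content detected before broadcast: stop the stream
                # and drop everything still waiting in the buffer.
                self.halted = True
                self.buffer.clear()
                break
            released.append(frame)
        return released
```

In a real system the check would combine automated classifiers with human review, and the delay would be tuned by risk tier; the point of the sketch is simply that even a short delay gives moderators a chance to act before violative footage reaches viewers.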

Authors

Jordan Kraemer
Jordan Kraemer, PhD, is Director of Research at ADL’s Center for Technology & Society and an anthropologist of emerging media. As a 501(c)(3) nonprofit, ADL takes no position in support of or in opposition to any candidate for elected office. The views expressed here do not necessarily represent the...
Tim Bernard
Tim Bernard is a tech policy analyst and writer, specializing in trust & safety and content moderation. He completed an MBA at Cornell Tech and previously led the content moderation team at Seeking Alpha, as well as working in various capacities in the education sector. His prior academic work inclu...
Diane Chang
Diane Chang is the founder of Invisible Fabric, a consultancy on issues at the intersection of technology, media, and society. She is currently an entrepreneur-in-residence at the Brown Institute for Media Innovation at the Columbia University Graduate School of Journalism, and Safety Product Manage...
Renée DiResta
Renée DiResta is an Associate Research Professor at the McCourt School of Public Policy at Georgetown University, and the author of Invisible Rulers: The People Who Turn Lies Into Reality. She researches influence across social and media networks and has studied rumors, propaganda, and disinformatio...
Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & ...
Gabby Miller
Gabby Miller was a staff writer at Tech Policy Press from 2023-2024. She was previously a senior reporting fellow at the Tow Center for Digital Journalism, where she used investigative techniques to uncover the ways Big Tech companies invested in the news industry to advance their own policy interes...
