
Syllabus: Large Language Models, Content Moderation, and Political Communication

Prithvi Iyer, Justin Hendrix / May 7, 2024

This piece will be updated sporadically with additional resources. While we cannot post every link we receive, we encourage the Tech Policy Press community to share material that may be relevant.

With the advent of generative AI systems built on large language models (LLMs), a variety of actors are experimenting with how to deploy the technology in ways that affect political discourse. This includes the moderation of user-generated content on social media platforms and the use of LLM-powered bots to engage users in discussion for various purposes, from advancing certain political agendas to mitigating conspiracy theories and disinformation. It also includes the political effects of so-called AI assistants, which are currently in various stages of development and deployment by AI companies. Together, these phenomena may have a significant impact on political discourse over time.

For instance, content moderation, the process of monitoring and regulating user-generated content on digital platforms, is a notoriously complex and challenging issue. As social media platforms continue to grow, the volume and variety of content that needs to be moderated have also increased dramatically. This has led to significant human costs, with content moderators often exposed to disturbing and traumatic material, which can have severe psychological consequences. Moreover, content moderation is a highly contentious issue, driving debates around free speech, censorship, and the role of platforms in shaping public discourse. Critics argue that content moderation can be inconsistent, biased, and detrimental to open dialogue, while proponents of better moderation emphasize the need to protect users from harmful content and maintain the integrity of online spaces. With various companies and platforms experimenting with how to apply LLMs to the problem of content moderation, what are the benefits? What are the downsides? And what are the open questions that researchers and journalists should grapple with?

In this syllabus, we examine what is known about the use of LLMs to assist with content moderation, engage in various forms of political discourse, and deliver political content to users. We also consider the ethical implications and limitations of relying on artificial intelligence in this context, and the ways bad actors may abuse these technologies.

This syllabus is a first draft and will be updated periodically. If you would like to recommend relevant resources to include, please reach out via email.

AI and Political Communication

In this section, we track academic research on the use of generative AI for counterspeech, hate speech detection, and political communication, and on its role in both creating and mitigating disinformation campaigns.

Counterspeech and hate speech detection

Political campaigns and disinformation

LLMs and Content Moderation

In this section, we provide resources on the opportunities and risks of using LLMs for content moderation. Research in this area examines whether LLMs can classify posts against a platform's safety policies at scale.
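To make that classification task concrete, below is a minimal, hypothetical sketch of how a team might prompt an LLM to label a post against a written policy. The policy text, the labels, and the call_llm() function are placeholders for whatever model API and rules a platform actually uses; this illustrates the general approach discussed in the resources below, not any particular company's system.

```python
# Minimal sketch: classifying a post against a written safety policy with an LLM.
# POLICY, LABELS, and call_llm() are placeholders, not any platform's real system.

POLICY = (
    "Posts must not contain targeted harassment, credible threats of violence, "
    "or calls to harm a person or group."
)

LABELS = ["allow", "flag_for_human_review", "remove"]


def build_prompt(policy: str, post: str) -> str:
    """Combine the policy and the user post into a single classification prompt."""
    return (
        "You are a content moderation assistant.\n"
        f"Policy:\n{policy}\n\n"
        f"Post:\n{post}\n\n"
        f"Respond with exactly one label from this list: {', '.join(LABELS)}."
    )


def call_llm(prompt: str) -> str:
    """Placeholder for a real model API call (e.g., a hosted LLM endpoint)."""
    raise NotImplementedError("Wire this up to the model provider of your choice.")


def classify_post(post: str) -> str:
    """Ask the model for a label; fall back to human review if the answer is unusable."""
    answer = call_llm(build_prompt(POLICY, post)).strip().lower()
    return answer if answer in LABELS else "flag_for_human_review"
```

The fallback to human review in this sketch reflects one common framing in the debate: model output treated as a triage signal rather than a final enforcement decision.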

Tech Policy Press coverage

Blogs

Technical papers released by AI companies

Events

Academic papers

Authors

Prithvi Iyer
Prithvi Iyer is a Program Manager at Tech Policy Press. He completed a master's degree in Global Affairs at the University of Notre Dame, where he also served as Assistant Director of the Peacetech and Polarization Lab. Prior to his graduate studies, he worked as a research assistant for the Observer Resea...
Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & ...
