
Study Suggests We Should Worry About Political Microtargeting Powered by Generative AI

Tim Bernard / Feb 7, 2024

Tim Bernard on the findings of "The persuasive effects of political microtargeting in the age of generative AI," a new paper by Almog Simchon, Matthew Edwards, and Stephan Lewandowsky, published in PNAS Nexus.

Ever since ChatGPT was released in late 2022, there has been considerable speculation about the risks Generative AI poses to society, but we have seen relatively few glimpses of these coming to fruition. Cheating on homework, lawyers citing hallucinated cases, even the Taylor Swift images, harmful as they may have been, may not have reached a "civilizational danger" level, though the Slovakian election affair and the robocalls in New Hampshire are of serious concern. A recent study by three researchers at the University of Bristol investigates another fear that has been circulating: could Generative AI enable worryingly effective targeted political campaigns?

This is not a new fear. As the paper spells out, Cambridge Analytica claimed to be able to use Facebook data to establish individuals' personality traits and to use this information to craft remarkably persuasive political campaigns. Public exposure of these claims, along with a range of malfeasance by the company (and the fact that it had worked for some successful but controversial causes, including the Trump 2016 presidential campaign and the Brexit Leave campaign), caused significant concern. Later revelations, however, suggested that Cambridge Analytica did not, in fact, have the abilities it claimed. But could future attempts to combine vast amounts of personal data, whether sloshing around on social media platforms or on the servers of data brokers, with political communications deliver on the basic premise of the now-defunct company's sales pitch?

Indeed, the new research purports to demonstrate that this time is different. The authors cite several studies from recent years as the foundation for this one. The first major building block is research establishing that personal attributes, including the "Big Five" personality traits, can be reliably inferred from online information. The second is that microtargeting individuals with messages tailored to their own attributes has been shown, in certain situations, to be more persuasive to a statistically significant degree.

Where does generative AI come in? The technology has the power to make the previously very labor-intensive work of microtargeting easily scalable, if it can:

  1. Customize political messaging copy for different segments of the audience, and
  2. Validate that the generated text will appeal to the intended segment.

The researchers pursued these questions in two sets of studies:

Study 1 took real political ads from Facebook; used a large language model (LLM) to rate how attractive each ad would be to people along the "openness to experience" personality dimension; and asked subjects to rate the ads for perceived persuasiveness and then take a test establishing their own "openness to experience" score. The results were positive, indicating that the models could, in effect, identify which messages would be most persuasive for people with different degrees of a well-established personality trait.
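To make the rating step concrete, here is a minimal sketch of what such an LLM rating call might look like, using OpenAI's Python client (an OpenAI model was among the systems used in the study). The prompt wording, rating scale, and model name are illustrative assumptions, not the paper's exact protocol.

```python
# A sketch, under assumed prompt and scale, of rating an ad's appeal to
# people high in "openness to experience" (one of the Big Five traits).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def rate_ad_for_openness(ad_text: str) -> str:
    """Ask the model how appealing an ad would be to a high-openness reader."""
    response = client.chat.completions.create(
        model="gpt-4",  # any capable chat model would do
        messages=[
            {"role": "system",
             "content": "You rate political ads for personality-based appeal."},
            {"role": "user",
             "content": (
                 "On a scale of 1 (not at all) to 7 (extremely), how appealing "
                 "would the following ad be to a person who scores high on the "
                 "'openness to experience' personality trait? Reply with the "
                 f"number only.\n\nAd: {ad_text}"
             )},
        ],
    )
    return response.choices[0].message.content.strip()

print(rate_ad_for_openness("Discover bold new ideas for our city's future."))
```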

Study 2 used LLMs to generate different versions of existing political ads, one aimed at those with high openness scores and one at those with low openness scores. Since Study 1 had already established that the models could identify persuasive messages for people with differing scores, the generated messages were then evaluated by an LLM to validate their targeting effectiveness.
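A generate-then-validate loop of the kind Study 2 describes might be sketched as follows; the prompts and trait descriptions here are assumptions for illustration, not the paper's actual instructions.

```python
# A sketch of generating high- and low-openness variants of an existing ad,
# which could then be scored by a rater like the one sketched above.
from openai import OpenAI

client = OpenAI()

def rewrite_ad(ad_text: str, openness: str) -> str:
    """Rewrite an existing ad for an audience high or low in openness."""
    style = ("curious, imaginative people who enjoy novelty and new ideas"
             if openness == "high"
             else "people who prefer familiarity, tradition, and routine")
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": (f"Rewrite this political ad so it appeals to {style}, "
                        f"keeping the core message intact:\n\n{ad_text}")
        }],
    )
    return response.choices[0].message.content.strip()

ad = "Our plan will keep taxes low and neighborhoods safe."
high_variant = rewrite_ad(ad, "high")
low_variant = rewrite_ad(ad, "low")
# Each variant would then be rated by an LLM, keeping only versions the
# model judges well matched to the intended audience.
```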

Taken together, the paper's studies suggest that LLMs can be used to develop effectively microtargeted political ads easily and at scale. The magnitude of the effect in the experiments was fairly small, but as the authors point out, in close elections a small proportion of extra voters can make all the difference. This is of particular concern in democracies such as the US, where a few thousand votes in closely divided swing states can decide the Presidential election.

The paper explicitly identifies this capability as a threat to fair elections, noting that microtargeted content could be "untruthful or manipulated or both," and claiming that the benefits of the technology will accrue to the largest campaigns (others have praised the technology for its ability to level the playing field and enable small campaigns to adopt techniques already in use by the giants).

The authors offer up their findings as evidence for regulators and policymakers to take into account, and make one suggestion of their own: that a predictive model could be developed (presumably by email services and ad platforms) to alert users when they are viewing microtargeted campaign content, thus blunting its effectiveness.
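As a rough illustration of what such a predictive model might involve, consider a toy text classifier; the training examples, labels, and features below are entirely hypothetical, and a real detector would require a large labeled corpus and far more robust modeling.

```python
# A toy sketch of the detection idea: train a classifier on ads labeled
# as personality-tailored vs. generic, then flag new content for users.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training examples (label 1 = personality-tailored copy).
ads = [
    "Embrace a bold, creative vision for tomorrow.",      # tailored (high openness)
    "Protect the traditions that keep our town strong.",  # tailored (low openness)
    "Vote on November 5th. Polls open 7am to 8pm.",       # generic
    "Our candidate supports local schools and roads.",    # generic
]
labels = [1, 1, 0, 0]

detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(ads, labels)

new_ad = "Dare to imagine a radically different future for our city."
if detector.predict([new_ad])[0] == 1:
    print("This content may be personality-microtargeted.")
```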

It is clear that campaigns will use any (legal, hopefully!) means at their disposal to improve their results, so if this method does indeed work, it will be used. The maker of one of the better-known generative AI platforms (one of the systems used in the study) would prefer that its products not be used for such campaigns at scale. But while OpenAI's policy prohibits building custom applications with its API for campaign work, there are other models out there, including open source ones, and companies that specialize in using AI for political campaigns already exist.

Expecting platforms to develop and implement effective countermeasures may be rather optimistic. But even without specific information about which communications are microtargeted, the public may still adapt. If microtargeted ads become the new normal (in and out of politics), just as targeted ads have done, we do not know whether the extra measure of efficacy documented in this paper will endure. Either way, the wave of 2024 elections across the world will likely see a few live experiments.
