Why Electoral Authorities Need an AI Framework
Alberto Fernández Gibaja / Nov 20, 2025
Alberto Fernández Gibaja is the Head of the Digitalization and Democracy Programme at International IDEA. The views expressed in this publication do not necessarily represent the views of International IDEA, its Board, or its Council members.
When speaking to his peers not long ago, Tom Rogers, Australia’s former Electoral Commissioner, put it bluntly: “Electoral authorities were forced to become experts on disinformation. Now, we’re being forced to become experts on AI.” His warning captures the mood across electoral institutions today as they grapple with AI: a mix of urgency, excitement, caution and determination. After more than a year of discussions with electoral authorities and civil society organizations in more than 70 countries, one thing is clear: AI has entered elections, whether institutions are ready or not.
Election officials say they are once again on the frontlines of a technological disruption they did not choose but cannot ignore. This time, geopolitical competition and economic hype complicate things even further: as one representative from a Pan-African civil society organization rightly put it, officials are being challenged to look past the marketing and address the overpromises of AI systems.
Electoral authorities also need to understand what AI can realistically do, what it shouldn’t do, and what must be in place before it’s deployed. That means building clear governance frameworks, ensuring accountability, and investing in the skills and technical literacy that will allow election officials to make informed, independent decisions about how and when to use these tools.
These authorities are the often-hidden engine behind elections. Their mandates and structures vary; some are permanent institutions, while others operate only during election periods. Some countries even have multiple bodies dividing responsibilities for administration, oversight, or adjudication. But their purpose is the same: to safeguard the integrity of democracy by ensuring that elections are free, fair, and trusted.
AI’s promise and peril for elections
Any new technology can disrupt their work, and AI is no different. As part of the Tech Accord to Combat Deceptive Use of AI, International IDEA, with financial support from the Societal Resilience Fund established by Microsoft and OpenAI, has run a series of dialogues, trainings and discussions with electoral authorities from more than 70 countries. The goal of this work is to help authorities maintain elections that are free, fair, and trusted, even in the age of AI.
From these conversations, several fundamental lessons have emerged that should inform any discussion on AI’s impact on democracy.
First, the double-edged nature of AI is particularly evident for electoral authorities. Resource-intensive tasks that are often done manually, such as reviewing invoices from political parties or consolidating voter rolls, can become faster and more efficient with AI. Electoral officials recognize these benefits, and nearly one in three electoral authorities already use AI in some form. Yet, they are equally aware of the risks.
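To make the voter-roll example concrete, consider the sketch below. It is a minimal, hypothetical illustration in Python, using only the standard library; the records, field names, and similarity threshold are invented for this example and do not reflect any authority’s actual system. It shows the kind of record-linkage automation that AI-assisted consolidation tools build on: flagging likely duplicates for human review rather than acting on them.

```python
import unicodedata
from difflib import SequenceMatcher

# Toy voter-roll records; real rolls would have many more fields and rows.
records = [
    {"id": 1, "name": "Maria Lopez", "dob": "1984-03-12"},
    {"id": 2, "name": "María López", "dob": "1984-03-12"},
    {"id": 3, "name": "John Smith", "dob": "1990-07-01"},
]

def normalize(s: str) -> str:
    """Lowercase and strip accents so 'María López' matches 'Maria Lopez'."""
    decomposed = unicodedata.normalize("NFKD", s)
    return "".join(c for c in decomposed if not unicodedata.combining(c)).lower()

def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity ratio between two normalized strings."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

THRESHOLD = 0.9  # assumed cut-off; a real deployment would tune this per dataset

# Flag candidate duplicates for human review; the tool never deletes a record.
flagged = [
    (r1["id"], r2["id"])
    for i, r1 in enumerate(records)
    for r2 in records[i + 1:]
    if r1["dob"] == r2["dob"] and similarity(r1["name"], r2["name"]) >= THRESHOLD
]
print(flagged)  # [(1, 2)] -> pairs sent to an election official to confirm
```

The design choice matters: the tool only flags candidate pairs, leaving the final decision to an election official, which keeps intact the human oversight discussed later in this piece.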
The use of generative and decision-making AI systems in electoral management remains highly complex. These systems can create new cybersecurity vulnerabilities, deepen dependencies on external technology and computing power, raise data privacy concerns, and reproduce bias or generate hallucinations. The absence of governance frameworks and clear regulations in most countries adds another layer of uncertainty.
Yet, according to most electoral officials, the biggest risks — and their biggest fear — lie beyond their control: how political campaigns, candidates, and other actors use AI in ways that can further disrupt the information environment and erode public trust in elections. More than half forecast AI will have a negative impact on the information environment the next time their country goes to the polls.
Almost all electoral authorities doubt these actors will follow regulations or principles governing AI. While many see potential for AI to make their own operations more efficient, they express deep concern about its broader effects on democracy, particularly in the amplification of misinformation and manipulation.
Most electoral authorities still cannot point to a clear example of AI decisively influencing an election. For now, they are bracing for the worst but acknowledge that AI has only marginally changed the underlying threat of information distortion. Of special concern is the threat to their ability to communicate clear, authoritative information about the process itself. An actor trying to derail voting can use AI to sow confusion, for instance by forging official documents or producing synthetic images designed to mislead voters or discourage turnout. One electoral official expressed deep concern about the disruptive potential of false synthetic videos of electoral violence.
A second lesson is that, to resist the hype, electoral authorities must view AI as a tool for solving concrete, existing problems. Although this may seem obvious, history is full of examples in which technology has created new challenges for elections. More than 60% of the authorities we engaged with have already been approached by vendors offering AI-based solutions. However, as impressive as the technology might seem, rushed deployments, weak ethical safeguards, or accuracy issues can quickly undermine public confidence, turning a potential solution into a new problem. Some of the most trusted electoral processes in the world remain paper-based, and in elections, one guiding principle should still apply: if it works, don’t break it.
This means AI providers must understand the special needs of electoral authorities and adapt to them. Cybersecurity standards and needs in elections are — or at least should be — orders of magnitude higher than in most other sectors, as the constant cyberattacks on electoral infrastructure in many countries demonstrate.
AI providers — from developers of specialized tools to foundation-model and infrastructure companies — need to recognize the computing and data access limitations of electoral authorities, as well as the sensitive nature of the data those authorities manage. Many electoral authorities, funded by the state, lack the computing power required to run complex AI systems or the high-quality data needed to train them. This is particularly true in smaller countries.
Corporate-level requirements may not be easily met by electoral authorities. Security considerations may prevent them from using certain data to train AI systems, and cloud-based solutions hosted outside the authority’s control are not always desirable, given the sensitive nature of the data and the dependencies they create in a process where even minor errors can have major consequences.
In the same vein, accuracy is a concern for electoral authorities. A chatbot recommending places to eat can make small mistakes without losing its usefulness. AI systems used during elections don’t have that luxury. For now, 100% accuracy in AI is unattainable, and it may never be achieved. In high-risk, high-stakes processes like elections, even small mistakes, hallucinations, or simple inaccuracies can damage citizens’ perceptions of the process, jeopardizing the most important asset electoral authorities have: trust.
Building a shared framework
What comes next? Electoral authorities and their partners need to develop a shared blueprint or framework for the use of AI in elections that creates a common language and a shared understanding among electoral actors. It is both needed and demanded. This framework should serve as a baseline of minimum requirements or principles for any use of AI related to elections. It would allow authorities to communicate consistently and enable greater interoperability in principles, actions and safeguards. In turn, this can then foster collaboration and learning among electoral authorities and between them and providers, policymakers and regulators.
Above all, a shared blueprint would help electoral authorities to collectively build resilience and trust in an era of rapid technological change. It would offer a baseline to decide when AI should or should not be used, defining minimum safeguards for security, data protection, and human oversight and establishing a common understanding among the electoral community on how to use AI. With such a framework, electoral authorities could learn from one another, harmonize good practices, and engage more effectively with technology providers and international partners. In doing so, they would not only protect elections from the risks posed by AI but also harness its potential to improve transparency, efficiency, and public confidence in democracy.
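What might such a baseline look like in practice? The sketch below is a purely hypothetical illustration in Python: the field names, criteria, and the specific checks are assumptions invented for this example, not an agreed standard. It shows how minimum safeguards for security, data protection, and human oversight could be encoded as a simple pre-deployment gate that an authority could audit, adapt, and share.

```python
from dataclasses import dataclass

@dataclass
class AIDeploymentProposal:
    # Illustrative fields only; a real framework would define these collectively.
    use_case: str
    handles_personal_data: bool
    data_stays_under_authority_control: bool
    human_reviews_every_output: bool
    security_audit_passed: bool
    fallback_manual_process_exists: bool

def meets_baseline(p: AIDeploymentProposal) -> tuple[bool, list[str]]:
    """Check a proposal against hypothetical minimum safeguards; return verdict and gaps."""
    gaps = []
    if p.handles_personal_data and not p.data_stays_under_authority_control:
        gaps.append("sensitive data would leave the authority's control")
    if not p.human_reviews_every_output:
        gaps.append("no human oversight of outputs")
    if not p.security_audit_passed:
        gaps.append("no independent security audit")
    if not p.fallback_manual_process_exists:
        gaps.append("no manual fallback if the system fails")
    return (not gaps, gaps)

# Example: a low-risk use case that still fails one safeguard.
proposal = AIDeploymentProposal(
    use_case="draft replies to routine voter questions",
    handles_personal_data=False,
    data_stays_under_authority_control=True,
    human_reviews_every_output=True,
    security_audit_passed=False,
    fallback_manual_process_exists=True,
)
ok, gaps = meets_baseline(proposal)
print(ok, gaps)  # False ['no independent security audit']
```

A machine-readable checklist along these lines is one plausible shape for the interoperability described above: two authorities applying the same gate can compare decisions, and a vendor can see upfront which safeguards a proposal must clear.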