To Make Sure AI Advances Democracy, First Ask, ‘Who Does It Serve?’
Richard Reisman / Jul 16, 2025
Deceptive Dialogues by Nadia Nadesan & Digit / Better Images of AI / CC BY 4.0
“When looms weave by themselves, man’s slavery will end.” I happened upon that ancient line from Aristotle in 1964, in a New York Times article on automation and employment. It has shaped my imagination and career ever since. I learned to program during a summer job at Bell Labs, and was thrilled at how computers could end certain forms of drudgery. Then, around 1970, three experiences brought the future into focus. First, at a conference, I saw Doug Engelbart reprise his famous 1968 “Mother of All Demos” on augmenting human intelligence, as contrasted with early notions of artificial intelligence. Second, I clicked on an implementation of Ted Nelson’s seemingly magical hyperlinks. Third, I delved into early concepts for augmenting human collaboration, as enabled by Murray Turoff’s computerized Delphi conferencing.
In those early days of the digital technology revolution, I saw glimpses of the future of computing and networking that Steve Jobs later described as “bicycles for our minds.” My interests in epistemology, psychology, history, media, and economics added sociotechnical dimensions to that vision. Around 1990, working on financial market data news feeds, I saw how individual traders could select analytics filters. Then, around 2003, I proposed designs for an ecosystem of tools for collaborating on open innovation.
Like many of those early pioneers in computing, I took it as a given that these new tools would be customizable to serve their individual users, and that they would promote freedom and enhance democracy. But that vision turned sour in recent decades. The corporate platformization of social media and the web spoiled their potential to enhance freedom, ultimately threatening democracy and ‘enshittifying’ our bicycles for the mind. Now a similar set of phenomena is emerging around artificial intelligence: the potential for AI tools to serve users is being co-opted into business models that steal our agency and extract our value, enabling corporations, oligarchs, and authoritarians to steer us toward what amounts to junk food and toxins for the mind.
Over the next ten years, I believe humanity will make a fateful choice: will AI help us think better and work more effectively, or will it make things worse, perhaps irreversibly? Will these tools support people and their communities, or will they mainly benefit the companies that build them—and the governments those companies may be beholden to—by manipulating and exploiting users and their data? The latter possibility is an imminent threat, not a distant one. In the United States and abroad, democracy seems to be near an inflection point. If we do not refocus on truly human-centered control, it may soon be too late to reclaim these essential tools before oligarchic corporate dominance becomes inescapable.
To avoid this outcome, we need to urgently redesign the technology ecosystem to restore individual human agency—not just over how we allocate our attention, but over how we arrive at our very understanding of the world, our actions, and our distinctly human ability to collaborate and develop collective intelligence.
The bedrock principle
Scholars and policymakers concerned with these questions are animated by the urgency of this moment. For instance, an April symposium hosted by the Knight First Amendment Institute on “Artificial Intelligence and Democratic Freedoms” sought to “examine the risks that advanced artificial intelligence systems pose to democratic freedoms, to discuss sociotechnical as well as technical interventions to mitigate these risks, and to identify ways in which these systems may be employed to support democracy.” Among the opportunities for AI to bolster democracy, the organizer of the symposium, Seth Lazar, discussed the promise of AI agents to complement, not replace, participatory and procedural forms of democratic deliberation. Gillian Hadfield noted the need to support both bottom-up and top-down social processes. Sydney Levine discussed the need for communities and institutions to help establish norms, while noting that it falls to people to interpret and apply those norms in their own contexts.
Many at the symposium focused on the threats that AI poses to democracy, or the ways in which it seems in conflict with democracy. Kevin Feng observed that while AI is most useful when we have a concrete idea of what we want, democracy works in a world in which there are no concrete answers and the need is to find consensus on how to move forward with imperfect or incomplete data. Spencer Overton observed that while AI may be useful at “bridging” different points of view to help find consensus, there are both moral and practical reasons to limit assimilation and preserve a level of disagreement.
A final panel discussed the drive toward artificial general intelligence (AGI) and its potential implications for society. It considered complicated new questions, including issues that may arise from the advent of multi-agent AI systems.
While many viewpoints and lines of inquiry addressed various questions at the intersection of AI and democracy, no single organizing principle emerged for the field’s efforts—which is perhaps more than could be expected of such a transdisciplinary gathering. My view is that high-level sociotechnical systems thinking should center on one simple question: “Who does it serve?” Democracy is not a thing to be automated and optimized by AI, but a deeply social human process to be augmented in its workings by AI. Democracy is given legitimacy not by what is decided, but by how it is decided. AI, if it is to be pro-democratic, must fit into our sociotechnical evolution as a tool for augmenting that human process of collaboration, deliberation, and negotiation, and never for replacing it.
At several points during the conference I commented on how personal AIs, working as participants in multi-agent systems, may be central to ensuring that AI supports democracy (itself the very model of a multi-agent system) rather than destroys it. That elicited generally supportive responses from a number of the speakers. Here I expand on that.
Drawing on the lessons of social media—where algorithmic attention agents are already producing deep sociotechnical change at scale—technological tools need to bolster and restore the human processes that enable freedom of expression to work in the first place. Human discourse is, and remains, a social process based on three essential pillars that must work in synergy: 1) Individual Agency, 2) Social Mediation through a complex ecosystem of communities and institutions, and 3) Reputation. All of these guide our freedom of thought, expression, association, and impression, working together to enable maximum freedom while organically nudging us, through social and reputational influence, to support our collective interests and sense-making. Democratic freedom is not just a matter of what is done, but of how it is done—as an inherently rich and complex human process.
In other words, AI must serve the people who use it to engage in free expression and make democratic decisions—as individuals and through informal and organic collectives that people give legitimacy to, including both civil society and formal government. AI can help people understand themselves, others, and the world, but it cannot bestow legitimacy. It can only help distill that legitimacy from the humans who participate in the various social and political decision systems that the AI augments.
Some are concerned that democracy is messy and seemingly inefficient, but no other form of government has sufficient fairness, adaptability, extensibility, and resilience to maintain freedom for humans in an increasingly interconnected, diverse, and dynamic world. AI could make human governance less messy, but achieving that by automating control without the input of its human constituents would likely make governance more authoritarian, and less optimal for humans overall.
Making a world with AI safe for democracy
Trying to nudge large corporate platforms to “align” AI systems and agents with the interests of their “users,” “stakeholders,” and “society” will not be enough to avoid crippling principal-agent problems that would destroy the core processes of democracy. Public policy should establish a duty to serve democratic interests at a broad sociotechnical level. That includes development and enforcement of technology, market, legal, and regulatory policies, as well as a broader civic culture, that enshrine and support the right to interoperable personal AI tools. These tools must be loyal, responsible, and supportive of their users—in the ways that those human users decide best fit them. Given the level of democratic backsliding in many parts of the world, this must be addressed, at multiple levels, by a whole-of-society effort.
As Richard Whitt and I previously suggested on Tech Policy Press, we need to leverage personal AI agents that are “agential” to ensure the answer is democratic. This is not a matter of top-down, a priori design, but of design for openness, extensibility, and generativity. It is a rewilding that can nurture a reweaving. The Free Our Feeds initiative is promoting this kind of refocus for social media in order to create personal feed attention agents that serve their users and communities, and that kind of approach should extend to all kinds of AI. To achieve this, consider these four points of intervention: technology, regulation, markets, and users and their communities.
- Technology is a tool: not a solution in itself, but an essential enabler. As the technology for multi-agent AI systems develops, protocols are emerging for the interoperation of AI agents and for secure sharing of context data and action authorizations with external agent tools. This is needed to enable personal AI agents to represent us loyally, even as fiduciaries—and to augment us, as powerful and nimble bicycles for our minds. Think of this as “Have your AI agent call my AI agent,” where your agent (or that of your community) negotiates with institutional AI agents on what data can be used for what purpose, and what actions can be taken, on what terms (a minimal sketch of such a negotiation follows this list). We will face complex assemblages of AI agents, so we will need verifiably trustworthy AI agents of our own to help us determine when and how we can trust those other agents.
- Regulation should seek to ensure that AIs can be trusted to serve their users first and foremost, even when doing so conflicts with corporate or other interests. Here, interoperability is key. Regulation should support open interconnection of AI systems in ways that minimize principal-agent problems. This can naturally limit corporate and oligarchic power, and empower markets, users, and their communities. Where countries may be backsliding, liberating tools can be developed elsewhere and then applied as hacks to wedge services back open. Regulation to ensure AIs serve their users can support development of “a thought-through normative framework that reclaims not only what is regulated, but why—not simply to reduce harms, but to reestablish the moral architecture of a democratic digital public sphere.”
- Markets should be kept open and level, able to do their work in serving user and community demand. If a jurisdiction (most impactfully the US) fails to “billionaire-proof” its AI, other jurisdictions must take the lead, creating more democratic tools that can be applied wherever needed. Healthy co-opetition is needed to balance competing interests. That can drive business-model innovation: finding better ways for those who demand democracy and freedom to sustain a supply of tools and services that support them, and carrying that demand and supply into parts of the ecosystem that are less supportive. The US can lead in democratization and technology as it traditionally has, but regardless of which direction the US goes, alternative efforts (like “digital sovereignty” and Eurostack) can be designed, and then “land and expand” democracy wherever needed, from the bottom up, even without top-down support. Win-win, both/and economics that grows the pie and expands freedom should be nurtured to avoid the narrow-minded suboptimization of zero-sum, binary, winner-take-all thinking.
- Users and their communities should be educated in digital civics and encouraged to demand that their tools serve them, to rebel against enshittification and oligarchic or authoritarian control that steals their power, and to demonstrate market demand and support for prodemocratic and prosocial sociotechnical solutions. People are social creatures who seek the support and guidance of those around them, especially when their communities and institutions are healthy. Our AI tools should be refocused to better augment that—keeping not only individual humans, but communities, in the loop, especially regarding values and reputations. And yes, users are often lazy and careless, but they are still very willing to choose and use tools that are made simple and predictable, like a familiar brand. We must all remember that democratic freedoms are contagious and self-reinforcing when nurtured to demonstrate their vibrancy, but atrophy when neglected.
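To make the “have your AI agent call my AI agent” pattern concrete, here is a minimal sketch, in Python, of how a personal agent might screen an institutional agent’s proposed data-use terms against its user’s standing policy. Everything in it (UsageTerm, PersonalAgentPolicy, negotiate, and the example scopes and purposes) is hypothetical and illustrative, not drawn from any existing agent-interoperation protocol; a real system would also need authentication, verifiable credentials, revocation, and audit trails.

```python
# Hypothetical sketch: a personal AI agent negotiating data-use terms
# with an institutional AI agent. All names and fields are illustrative,
# not taken from any real agent-interoperation protocol.

from dataclasses import dataclass


@dataclass(frozen=True)
class UsageTerm:
    """One proposed grant: what data, for what purpose, with what action."""
    data_scope: str      # e.g., "purchase_history"
    purpose: str         # e.g., "order_support"
    action: str          # e.g., "read" or "initiate_refund"
    retention_days: int  # how long the institution may keep the data


@dataclass
class PersonalAgentPolicy:
    """The user's standing instructions: what their agent may ever concede."""
    allowed_purposes: set[str]
    max_retention_days: int


def negotiate(offer: list[UsageTerm], policy: PersonalAgentPolicy) -> list[UsageTerm]:
    """Accept only the terms the user's policy allows, clipping retention
    to the user's maximum; refuse everything else."""
    accepted = []
    for term in offer:
        if term.purpose not in policy.allowed_purposes:
            continue  # refuse: the user never authorized this purpose
        accepted.append(UsageTerm(
            data_scope=term.data_scope,
            purpose=term.purpose,
            action=term.action,
            retention_days=min(term.retention_days, policy.max_retention_days),
        ))
    return accepted


if __name__ == "__main__":
    # An institutional agent proposes two grants; the personal agent,
    # acting as a loyal fiduciary, accepts only what its user's policy allows.
    offer = [
        UsageTerm("purchase_history", "order_support", "read", 365),
        UsageTerm("browsing_history", "ad_targeting", "read", 365),
    ]
    policy = PersonalAgentPolicy(allowed_purposes={"order_support"},
                                 max_retention_days=30)
    for term in negotiate(offer, policy):
        print(term)  # only the order_support grant remains, clipped to 30 days
```

The point of the sketch is the direction of loyalty: the filtering logic runs on the user’s side, under the user’s policy, rather than inside the institution’s system.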
We live in a world of many democracies, at many levels and in many shapes and sizes—if we can keep them. AI can serve us in doing that, or serve those who would trample over our freedoms, whether corporations, oligarchs, or governments, or some combination of all three. The key question about AI should always be “Who does it serve?” We should all care about the answer.