Perspective

Let the Public Govern AI

Alice Siu / Sep 5, 2025

Alice Siu is Associate Director at Stanford’s Deliberative Democracy Lab.

Recent reporting on Meta’s internal AI guidelines serves as a stark reminder that the rules governing AI behavior are frequently decided by a small group of people, behind closed doors. The work every AI company grapples with, from determining ethics and mapping acceptable behaviors to enforcing content policies, affects millions of people through processes the public has no visibility into.

The truth is that this kind of siloed decision-making happens constantly across the industry.

Tech policy, particularly AI policy, is often so complex and evolves so rapidly that everyday perspectives are not easily captured. As consumers, we’ve grown accustomed to a system where the most important decisions about technology governance happen in exclusive settings.

But what if we flipped the script? What if users helped create the rules?

The reality is that tech companies wield unprecedented power over our social interactions, information ecosystems, and personal data. Yet the governance of these systems has traditionally remained opaque, and the algorithms that drive these platforms are just as difficult for the public to understand.

While individual users may struggle to understand the nuances of technical systems, deliberative input offers companies a powerful tool to bridge this gap, leveraging the wisdom of diverse groups to surface insights and questions that internal teams, working in isolation, may miss.

This is the promise of deliberative democracy: inviting communities of all backgrounds to engage in meaningful, informed discussion about how the systems that affect them are built and governed. Deliberative democracy has been utilized in local and national governments for decades to address a range of issues from electoral reform to abortion policy.

Applying the deliberative democracy approach to the responsible development of technology can massively scale public input and impact. And we’re already beginning to see the industry embrace these more deliberative forms of public input.

Meta's Community Forums, conducted in partnership with Stanford’s Deliberative Democracy Lab, recruit representative samples from multiple countries, equip participants with briefing materials, invite them to deliberate in small groups, and engage them in Q&A with experts and policymakers. Over the last two years, two deliberative forums have brought together 1,500 people from six countries, including Brazil, Germany, and India, to deliberate about AI chatbots and agents. A recurring finding is that people are open to various use cases for generative AI chatbots as long as they are properly informed.

Similarly, Anthropic, in partnership with the Collective Intelligence Project, gathered 1,000 people to draft a constitution outlining the public's principles for AI. This effort aimed to capture the collective wisdom and values of the public, ensuring that AI development aligns with societal ethics and priorities.

Recently, Stanford University’s Deliberative Democracy Lab announced a new Industry-Wide Forum, convening multiple tech companies, including Cohere, DoorDash, Oracle, PayPal, Meta, and Microsoft, to gather public feedback to shape the responsible development of AI agents. Launching in the fall, this deliberative forum aims to give companies direct insight into what users want from AI agents, help them build trust and legitimacy with users, and foster cross-industry collaboration to support the long-term development of industry standards.

As societies begin to meaningfully explore the uncertainty of how these new technologies will be integrated and regulated, there is a window of opportunity for new and innovative modes of public discourse to take root and shape these transformative technologies.

Getting thoughtful public input on technology requires engaging users in high-quality deliberations. Having organized hundreds of Deliberative Polls worldwide, I can say with certainty that deliberation makes people more informed and better equipped to weigh competing tradeoffs.

New tools like Google DeepMind’s Habermas Machine, an AI system designed to help groups of people find common ground during deliberations, are making steps towards this vision.

A growing number of companies and researchers are deploying AI to enhance public input: platforms that synthesize user-generated content, tools that help people find mutual understanding in deliberation, and systems that scale deliberation globally, like our AI-assisted Stanford Online Deliberation Platform.

While these efforts show promise, voluntary experiments aren't enough. We need systematic integration of public voice in tech governance, not to slow innovation, but to ground it in lived experience.

This isn't about choosing between innovation and democracy. It's about recognizing they're stronger together. The stakes right now are too high for the current governance approach to continue. As AI systems become more sophisticated and ingrained in day-to-day life, the decisions companies make about acceptable behaviors will have profound societal impacts.

When left to make these consequential decisions in isolation, even well-intentioned companies miss the mark. Competitive pressures, internal blind spots, and the swift pace of technological development create conditions where user preferences take a backseat to speed and innovation. We must move beyond reactive fixes after problems emerge to proactive engagement that helps prevent them from the start.

Tech companies can lead by embracing public input as an essential component of product development. Policymakers can create institutional frameworks that incentivize deliberative democracy.

If we don't act, tech governance will remain a series of behind-the-scenes decisions by teams under pressure to move fast, with the public left to discover the consequences only when investigative reporting forces transparency. The public deserves a seat at the table where these rules are written, not just a chance to react when they fail.
