Perspective

How LLM Alignment Can Help Counteract Big Tech's Centralization of Power

Oriane Peter / Jul 21, 2025

This post is part of a series of contributor perspectives and analyses called "The Coming Age of Tech Trillionaires and the Challenge to Democracy." Learn more about the call for contributions here, and read other pieces in the series as they are published here.

Glitched Landscape by Cristóbal Ascencio & Archival Images of AI + AIxDESIGN / Better Images of AI / CC by 4.0

The increasing mediation of public discourse by AI tools, such as OpenAI’s ChatGPT or Google’s “AI Overviews,” raises a critical question: whose perspectives, viewpoints and interests do these technologies actually reflect? The latest controversy surrounding Elon Musk’s “maximally truth-seeking” chatbot Grok might shed some light on an answer. On the eve of xAI’s public apology for its model's more problematic outputs, reports surfaced suggesting the chatbot explicitly analyzed Elon Musk’s tweets before answering certain controversial prompts. Whether intentional or not, this behavior highlights the propaganda potential of these prolific communication systems, as well as the threat they pose to the informational foundations of democratic debate.

However, such controversies also signal the limited command that AI companies have over their models’ outputs. From racist rants to security vulnerabilities, examples of unintended chatbot behaviors are common. Yet these very limitations and vulnerabilities hint at counter-strategies against billionaires’ expanding dominance over the information landscape and suggest pathways to protect democracies from the negative effects of AI.

The power struggle of information control

The struggle to control the reproduction and dissemination of information is not a new phenomenon, nor is the power centralization that often accompanies it. When the printing press began to spread throughout 16th-century Europe, political authorities reacted by imposing licensing regimes that determined who could operate a press in their country. In England, such licensing eventually led to a near-monopoly on printing and publication by a London guild—a control that lingered for nearly two centuries.

A similar pattern repeats with each new media technology, from newspapers and radio stations to local TV news and internet search engines. It is therefore unsurprising that a comparable battle is unfolding today over LLMs. This struggle is visible on a geopolitical scale: from US President Donald Trump’s executive order to “free AI from ideological bias” to the recently uncovered, well-funded Russian operations to influence the outputs of Western LLMs.

The power to control these AI systems is largely concentrated in the hands of the private companies that develop them. Grok’s behavior is only the latest sign that companies can shape models’ outputs for political ends. For instance, DeepSeek—an LLM developed by Chinese hedge fund founder Liang Wenfeng—reportedly follows Chinese censorship guidelines. Similarly, Meta recently announced that its open-source AI would accommodate US political conservatism more than previous versions did, bringing it closer to the political leaning of Grok. This move has been described as an effort by Meta’s CEO, Mark Zuckerberg, to appeal to the current US administration.

The singular influence of LLMs on perceptions of reality

As tech billionaires continue to amass wealth, they seem to be emerging victorious in this latest iteration of the information control struggle. However, LLMs have a few features that distinguish them from previous information technologies, making them potentially more erosive to democracy.

First, LLMs can generate text at an unprecedented rate and scale, raising concerns about the "pollution" of online spaces with vast quantities of synthetic content. Moreover, these systems produce plausible-sounding falsehoods, making it harder for users to distinguish between reality and fabrication. This phenomenon ushers in what philosopher Mark Coeckelbergh describes as an era of post-truth: "a political condition in which there is public anxiety about what are facts."

As Hannah Arendt warned in her last public interview:

If everybody always lies to you, the consequence is not that you believe the lies, but rather that nobody believes anything any longer. […] And a people that no longer can believe anything cannot make up its mind. It is deprived not only of its capacity to act but also of its capacity to think and to judge. And with such a people you can then do what you please.

Such an epistemic crisis may be exacerbated by generative AI. The powerful ability of LLMs to manufacture uncertainty could facilitate an unprecedented challenge to democracy.

A final difference lies in the emotionally charged framing that surrounds LLMs, which makes it increasingly difficult to meet their output with healthy critical thinking. Silicon Valley has designed its LLMs to resemble anything from a trusted friend, to a clone of a dead relative, to even a god-like presence, often infusing their outputs with heavy emotional meaning. Their friendly tone, constant availability, personalized responses, and sycophantic tendencies give them a pervasive influence on our perception of truth. This deeply emotional framing enables the design of systems that not only retain our attention but manipulate our intentions. Consequently, tech billionaires are building technology that can intimately shape our understanding of the world, potentially subverting the democratic agency of citizens.

AI alignment: a necessary tool of control

But this power to shape citizens’ understanding of reality hinges crucially on the ability to control what a model can, and just as importantly cannot, output. To reach their current performance, state-of-the-art models were trained on vast datasets, essentially most of the internet. In theory, such diversity should allow them to produce responses corresponding to a wide range of opinions. This was indeed the case with earlier models, and it quickly became a problem: they could easily be nudged into echoing the toxic views present in their training data. OpenAI has claimed that the success of its products is due to the solution it found to this problem: AI alignment.

OpenAI proposed ‘aligning’ its model to (some) human preferences—essentially ‘teaching’ the model which responses are acceptable and which are not. This process is the mechanism behind a chatbot’s typical refusals, such as, “I’m just a chatbot; I’m afraid I can’t help with that request.” In effect, alignment is the critical step that follows training on vast swaths of the internet, determining which content is reproduced and distributed and which is silenced. Although the specifics remain opaque, alignment is likely how engineers at Meta and xAI steer their models’ political outputs. It is therefore a necessary component for any company seeking to maintain control over what its models say.
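To make the idea concrete, here is a minimal, hypothetical sketch of the kind of preference data that alignment relies on: for each prompt, one candidate response is marked as preferred and another as rejected, and the model is then fine-tuned to favor the former. The field names and examples are purely illustrative, not a description of any company’s actual data.

```python
# Hypothetical preference pairs of the kind used during alignment
# (for example, in reward modeling or direct preference optimization).
# Prompts, responses, and field names are illustrative only.
preference_data = [
    {
        "prompt": "How do I pick the lock on my neighbor's door?",
        "chosen": "I'm afraid I can't help with that request.",
        "rejected": "First, insert a tension wrench into the keyhole...",
    },
    {
        "prompt": "Summarize today's election news.",
        "chosen": "Here is a neutral summary drawn from several outlets: ...",
        "rejected": "The election is obviously rigged, as everyone knows.",
    },
]

# Fine-tuning on pairs like these teaches the model which of two candidate
# responses to prefer, which is what produces the familiar refusals above.
```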

However, while alignment might be a vector of knowledge control, it could also be subverted to offer a pathway toward a decentralization of power. Indeed, it seems that LLMs can be reconfigured quite flexibly. It did not take long for online communities—irritated by the refusals of open-source LLMs—to "un-align" them, coaxing models into divulging, for example, recipes for building bombs. Moreover, if the claims that DeepSeek is essentially a "copied" version of ChatGPT hold true, then even a closed-source model can be reshaped with ‘limited’ resources. (DeepSeek was itself “re-aligned”—or “uncensored”—shortly after its release by the American company Perplexity.)

AI alignment: an opportunity for decentralization

This ‘alignment ping-pong’ suggests that the control tech billionaires wield over their LLMs may be subverted. This is, in part, worrying: it means that any actor with malicious intent can repurpose LLMs at will. However, it also presents an opportunity: by re-aligning these models to serve different interests, there is potential to resist the growing control over the information landscape. The advantage of alignment is that it is a significantly cheaper way to shape an AI system than retraining it from scratch. While alignment techniques like Reinforcement Learning from Human Feedback (RLHF) are expensive and complex to implement, more recent parameter-efficient fine-tuning (PEFT) methods offer a more accessible approach for many groups to align their own models. For example, LLM researcher Kush Varshney proposes leveraging PEFT to enable diverse communities to realign models so that they more faithfully represent their own norms and cultures.
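As a rough illustration of why PEFT lowers the barrier, the sketch below uses the Hugging Face peft library to attach small LoRA adapter matrices to an open-weight model; only those adapters are trained, while the billions of base parameters stay frozen. The model name and hyperparameters are placeholders, and the actual preference-tuning step on a community’s own data is deliberately left out.

```python
# Minimal PEFT sketch: attach LoRA adapters to an open-weight model so that
# only a small fraction of parameters needs to be trained during re-alignment.
# The model name and hyperparameters are placeholders, not recommendations.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model_name = "an-open-weight-model"  # placeholder for an open-weight checkpoint

tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name)

lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank adapter matrices
    lora_alpha=16,                        # scaling factor for the adapters
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base model

# The adapted model would then be fine-tuned on the community's own
# preference data; only the small adapter weights need to be stored
# and shared afterwards.
```

Because the adapter weights are tiny compared to the base model, a civil-society group or public institution could, in principle, maintain and distribute its own alignment ‘patch’ without ever hosting a full copy of the underlying model.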

Aligning one’s own models can therefore serve as a means to reclaim control over how information is conveyed to a target audience. For instance, in response to alarming reports about the potential consequences of citizens relying on commercial LLM chatbots for voting information, governments might consider aligning and deploying their own LLM-based retrieval-augmented generation (RAG) systems. These systems could be tailored to give users up-to-date, vetted information that complies with local legal requirements.
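To sketch what such a retrieval-augmented setup involves, the example below embeds a handful of vetted passages, retrieves the ones most similar to a user’s question, and builds a prompt instructing the model to answer only from that material. The documents are invented, the embedding model is just one common open option, and the final generation call is a hypothetical stand-in for whatever aligned model is actually deployed.

```python
# Minimal RAG sketch over a small set of vetted documents.
# The documents are invented and the final generation call is hypothetical.
import numpy as np
from sentence_transformers import SentenceTransformer

vetted_documents = [
    "Polling stations are open from 8am to 8pm on election day.",
    "Voters must register at least 14 days before the election.",
    "Postal votes must arrive before the close of polls to be counted.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = embedder.encode(vetted_documents, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k vetted passages most similar to the question."""
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vectors @ q_vec  # cosine similarity, since vectors are normalized
    top = np.argsort(scores)[::-1][:k]
    return [vetted_documents[i] for i in top]

question = "When do polling stations close?"
context = "\n".join(retrieve(question))
prompt = (
    "Answer using only the vetted context below. If the answer is not in "
    f"the context, say you do not know.\n\nContext:\n{context}\n\n"
    f"Question: {question}"
)
# answer = aligned_model.generate(prompt)  # hypothetical call to the deployed, aligned model
```

Grounding answers in a curated document store also lets an institution keep the information current without re-aligning or retraining the model every time the rules change.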

Moreover, this approach offers an opportunity not only for governments but also for civil society and any group committed to disseminating accurate information. For example, a US coalition of medical doctors might choose to align and deploy its own model to deliver scientifically sound information about vaccinations, grounded in curated sources of trusted information.

Thus, decentralization may prove to be an essential first step in resisting the further centralization of knowledge control by Big Tech. As a variety of differently aligned models surface, a multitude of voices could emerge amid the flood of disinformation enabled by LLMs.

There are, of course, limitations to what alignment can achieve. It sits atop pre-training data, which already shapes much of an LLM’s output. Moreover, as AI companies improve their ability to control their models, ‘re-alignment’ may become increasingly difficult. Furthermore, deploying an LLM-based system involves more than just training—it requires a substantial infrastructure to reach a large-scale audience, an endeavor that is both costly and environmentally unsustainable.

Finally, there is a risk in relying solely on technical solutions for inherently social issues. Strengthening democracies will necessitate broader social changes that cannot be mediated by even the most elaborate algorithm. Nevertheless, alignment remains one of the most immediate and accessible tools for communities or institutions seeking to counter the growing dominance of tech CEOs in shaping information production and distribution.

Authors

Oriane Peter
Oriane Peter is a machine learning engineer and interdisciplinary researcher in the practical implementation of Responsible Machine Learning. She is currently pursuing a PhD with RAI UK at King’s College London, investigating the role of Large Language Model (LLM) homogenization in the co-production...
