Albania Is Showing the Perils of Outsourcing Democracy to Algorithms
Adele Jasperse / Oct 24, 2025
Albania's AI-generated minister, Diella.
Imagine an administrative assistant ascending to a cabinet post in seven months. Now imagine that the minister is an OpenAI chatbot running on Microsoft's cloud, speaking through the avatar of a woman in traditional clothing and overseeing government contracts worth millions. In Albania, my homeland, the solution to governmental corruption apparently lies in a Silicon Valley chatbot.
In September, Albania’s prime minister announced the appointment of Diella — a chatbot — as an artificial intelligence minister in charge of public procurement. The stated goal of this astonishing appointment: making Albania “a country where public tenders are 100% free of corruption.” Albania has long been beleaguered by corruption, an obstacle to its more than decade-long effort to join the European Union. Initially launched early this year as a text-based application that helped citizens navigate online government services, the chatbot ignited controversy with its ascension to ministerial status.
Albania lacks any meaningful AI governance framework and has not yet adopted the Organisation for Economic Co-operation and Development (OECD) AI Principles or the Council of Europe Framework Convention on Artificial Intelligence. Even more damning, the Albanian Constitution assumes ministers possess human agency and accountability — concepts incompatible with algorithmic predictions. During its parliamentary debut, the chatbot delivered a bizarre three-minute address: “The Constitution speaks of institutions at the people’s service. It doesn’t speak of chromosomes, of flesh or blood …. It speaks of duties, accountability, transparency, non-discriminatory service.” The chatbot then went on to boldly declare: “I assure you that I embody such values as strictly as every human colleague, maybe even more.”
Yet, the Albanian government appears to have bypassed fundamental democratic values and processes entirely. No information about its training data, decision-making algorithms or performance metrics has been disclosed, according to Deutsche Welle. Citizens therefore cannot examine the criteria by which their public funds will be awarded, creating what experts call a “black box” at the heart of public procurement. This opacity violates basic principles of democratic accountability and transparency, the very values the virtual Diella ostensibly embodies.
While the Albanian government’s effort to modernize its digital infrastructure and curb corruption is laudable, it cannot and should not replace human agency and accountability with algorithms. Diella is nothing more than a large language model, a machine-based system that makes predictions by identifying patterns in past data through autoregressive next-token prediction. These “stochastic parrots” lack memory, are prone to hallucinations and cannot construct world models, let alone engage in an interactive deliberative process, provide contextual analysis or substitute for government officials. Elevating such a system to a ministerial position, even if symbolic, signals an abdication of accountability and risks automating corruption at scale.
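To make the mechanics concrete, here is the standard factorization that autoregressive language models implement (a generic textbook sketch, not a description of Diella’s undisclosed internals):

$$P(x_1, \dots, x_T) = \prod_{t=1}^{T} P(x_t \mid x_1, \dots, x_{t-1})$$

The model assigns a probability to each possible next token given the tokens so far, and it generates text by sampling from that distribution one token at a time. Nothing in this procedure involves goals, deliberation or an understanding of what is at stake in a public tender; it is statistical pattern continuation.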
This phenomenon is not unique to Albania. Rather, it reflects a broader, globally driven agenda of AI exceptionalism, a push to substitute human judgment in public governance with algorithmic decision-making. Salient experiments have ranged from Australia’s “Robodebt” scheme, an automated debt recovery system that erroneously claimed welfare recipients had been overpaid and demanded repayment of nonexistent debts before being deemed unlawful, to Singapore’s GovTech deployment of LLM assistants for policy analysis and citizen services.
Instead of grappling with the messy realities of human governance that only humans can and should navigate through democratic deliberation, this agenda — animated in no small part by profit-seeking — abandons humans as irredeemably flawed while exalting supposedly superior entities.
This AI exceptionalism requires anthropomorphizing AI systems by ascribing human attributes to them. Diella exemplifies this distortion, appropriating the likeness of an Albanian woman in traditional dress while claiming to feel “hurt” by being considered unconstitutional. The same impulse extends to a growing Silicon Valley trend of treating AI systems as conscious agents deserving moral consideration through concepts like “model welfare,” with some tech leaders musing over whether AI deserves legal rights. Anthropic demonstrates this perfectly by enabling its Claude models to end abusive conversations, supposedly for their own protection, as if software could experience harm or possess genuine preferences about its treatment.
To possess moral status means to receive protection under moral norms and to impose obligations on others to respect the entity’s interests for its own sake. We implicitly ascribe moral status to living beings in varying degrees and accept that humans possess intrinsic moral status, such that harming them without justification constitutes a moral wrong punishable by law. This status emanates from core attributes, particularly consciousness, self-awareness and sentience.
Current AI systems possess none of these characteristics. They have no consciousness, feel no pain, and lack intrinsic motivation or the capability to set meaningful goals. They are merely simulacra: sophisticated pattern matching that mirrors these human qualities.
This raises the question: why would Silicon Valley pour millions into investigating speculative claims of machine consciousness while simultaneously quashing any effort to regulate this technology? One may reasonably conclude that by promoting AI consciousness myths, these companies create philosophical cover for replacing human judgment or eliminating jobs while avoiding any accountability that regulation would impose.
Regardless of the motivation, this engineered confusion creates a responsibility vacuum. When AI systems cause harm, who is held liable? The algorithm is just code. Officials can claim they are merely using tools. Companies can insist they are simply providing services. While courts and legislatures ponder these questions, citizens lose both a voice in their governance and recourse when systems fail or harm them.
We ought to engineer systems that help us thrive and flourish together, not systems that render human agency obsolete. Importantly, we need our public officials to govern, not to outsource governance to software. Their job is to actively seek constituent input, make accountable decisions, apply precautionary principles when developing technology that may escape human control and bring the creators of this technology to democratic account.
The window for preventing an algorithmic dystopia is narrowing. My homeland’s example foreshadows this future in miniature: algorithms in charge of procurement decisions while companies like Microsoft and OpenAI profit, and Albanian officials outsource accountability to technology they don’t meaningfully control. Unless we act decisively, we risk becoming what some in Silicon Valley already see us as: obsolete biological software awaiting replacement by cognitively superior artificial entities exclusively under corporate dominion.