When Politicians Mistake AI Hype for Strategy
Pedro Tavares / Sep 25, 2025
Alina Constantin / Better Images of AI / Handmade A.I / CC-BY 4.0
When Swedish Prime Minister Ulf Kristersson casually admitted he uses ChatGPT for a “second opinion” on policy, he ignited a debate far bigger than a single app. Voters expect leaders to weigh evidence and make judgment calls — not outsource decisions to algorithms. Yet Sweden is not alone. In Albania, the government has gone even further, deploying AI to fight corruption and creating a Ministry of AI.
These cases expose a larger dilemma: how does one distinguish between AI that serves the public interest and AI that silently undermines it?
Why consumer tools don’t cut it in government
Generic large language models are trained on internet text to produce plausible-sounding responses. They sound polished and convincing even when they are completely wrong. They lack verification mechanisms and citation requirements, and they can't distinguish between factual accuracy and statistical likelihood. As one Swedish columnist wrote, "Chatbots would rather write what they think you want than what you need to hear."
More concerning is the infrastructure dependency these tools create. When political leaders feed policy considerations into ChatGPT or other LLMs, they are using systems marked by "non-disclosures of training sources and ultimately a decline in understanding training data," according to research from the Data Provenance Initiative at MIT. This opacity becomes particularly problematic when combined with foreign control, and it raises critical questions of data sovereignty: who ultimately controls this information, and what happens to sensitive policy discussions once they leave national servers? By relying on privately owned, foreign platforms, European leaders risk outsourcing both their decision-making and their digital infrastructure to US-controlled companies, creating a dependency that could weaken future policy independence.
OpenAI's recent announcement of a partnership with the US government offering the country’s entire federal workforce access to ChatGPT for $1 per year highlights this challenge. OpenAI promised “strong guardrails, high transparency, and deep respect” for the “public mission.” But corporate announcements are not public policy frameworks; they are private companies setting the terms for how governments operate.
What public-fit AI should look like
Understanding how and when to use AI properly in politics matters. The advantages are significant when done right.
Consider the medical field. A recent Swedish study published in The Lancet Oncology found that AI working alongside radiologists detected 20% more breast cancers than radiologists alone. The technology appears able to detect very subtle signs of early cancer that the human eye might miss. Lives were saved through better data analysis and pattern recognition, using the right AI for the job.
Public policy deserves the same specialized approach. Because decisions affect millions, leaders should adopt a "public AI" framework: systems built, owned, or at least governed by public institutions or public-private consortia, explicitly for public benefit and under democratic oversight, rather than consumer-grade chatbots prone to what experts call "hallucinations."
Governments need purpose-built tools for public policy: systems that aggregate verified data, from open databases to policy research and global best practices, cross-reference credible sources, and identify emerging signals and patterns, all while maintaining clear audit trails that users can verify, as the sketch below illustrates.
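To make that concrete, here is a minimal sketch in Python of what such a pipeline could look like. Everything in it is a labeled assumption: the vetted-source registry, the refusal behavior, and the hashed audit log are illustrative, not a description of any existing government system, and a production tool would replace the naive keyword retrieval with proper search and a vetted model.

```python
# Illustrative sketch: a policy-research assistant that draws only on
# vetted sources, cites everything, and keeps an append-only audit trail.
# All names and structures here are hypothetical.
import hashlib
import json
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Source:
    """A vetted document: an open dataset, a policy report, a statute."""
    source_id: str
    title: str
    url: str
    text: str


class PolicyResearchAssistant:
    def __init__(self, vetted_sources: list[Source], audit_log_path: str):
        self.sources = vetted_sources
        self.audit_log_path = audit_log_path

    def answer(self, question: str) -> dict:
        # 1. Retrieve only from the vetted registry, never the open web.
        #    (Naive keyword match stands in for real retrieval.)
        words = question.lower().split()
        hits = [s for s in self.sources
                if any(w in s.text.lower() for w in words)]
        # 2. Refuse rather than guess when no vetted source is relevant.
        if not hits:
            response = {"answer": None,
                        "note": "No vetted source covers this question."}
        else:
            response = {
                "answer": "Summary drawn from the sources cited below.",
                "citations": [{"id": s.source_id, "title": s.title,
                               "url": s.url} for s in hits],
            }
        # 3. Append every interaction to a tamper-evident audit trail.
        self._log(question, response)
        return response

    def _log(self, question: str, response: dict) -> None:
        entry = {"time": datetime.now(timezone.utc).isoformat(),
                 "question": question, "response": response}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        with open(self.audit_log_path, "a") as f:
            f.write(json.dumps(entry) + "\n")
```

The important design choice is the refusal path: unlike a consumer chatbot, a public-interest tool should say "no vetted source covers this" rather than produce a plausible guess.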
In practice, this means purpose-built platforms for policy consultation and advice. The latest OECD report on AI governance highlights tools such as the UK’s AI Consultation Analyzer, which expands the government’s capacity to analyze citizens' input in formal public consultations, ensuring that all voices are heard. Beyond consultation, specialized companies are developing AI for policy modeling: PolicyEngine simulates how policy changes affect budgets and citizens, while platforms like Futures use machine learning to spot trends and risks and to support foresight work for strategic planning.
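The modeling side can be illustrated with a toy example. The sketch below simulates a hypothetical flat-tax reform over four invented households; it shows the kind of question a platform like PolicyEngine answers at national scale, but it is not PolicyEngine's actual API, and every parameter and income figure is made up.

```python
# Toy microsimulation of a tax reform: hypothetical parameters and
# households, illustrating the approach rather than any real tax code.
from dataclasses import dataclass


@dataclass
class Household:
    income: float  # annual gross income


def tax_due(income: float, rate: float, allowance: float) -> float:
    """Flat tax on income above a tax-free allowance."""
    return max(income - allowance, 0.0) * rate


def simulate(households: list[Household], rate: float, allowance: float):
    """Total revenue and per-household tax under one parameter set."""
    taxes = [tax_due(h.income, rate, allowance) for h in households]
    return sum(taxes), taxes


# Four invented households standing in for a representative sample.
population = [Household(income=i) for i in (12_000, 28_000, 45_000, 90_000)]

# Baseline policy versus a reform: higher rate, higher allowance.
base_rev, base_taxes = simulate(population, rate=0.20, allowance=10_000)
ref_rev, ref_taxes = simulate(population, rate=0.22, allowance=15_000)

print(f"Revenue change: {ref_rev - base_rev:+,.0f}")
for h, b, r in zip(population, base_taxes, ref_taxes):
    print(f"Income {h.income:>7,.0f}: tax change {r - b:+,.0f}")
```

Real microsimulation engines run this same logic over representative survey data with full tax-benefit rules, which is precisely why they belong in purpose-built, auditable platforms rather than in a chat window.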
Good data is the fundamental homework for better insights. Countries with strong data strategies, such as Denmark, Korea, and Sweden, can significantly enhance policymaking through AI while maintaining democratic governance.
AI also offers new possibilities for democratic engagement. In California, following the devastating wildfires in Los Angeles early in 2025, Engaged California used AI analysis to compile over 1,000 detailed resident responses into clear, actionable insights, while preserving people’s own language and highlighting shared concerns about housing, insurance, and long-term resilience. These tools enable consideration of a full spectrum of opinions, gathering feedback on immediate actions as well as medium- and long-term policies. That's AI amplifying democratic participation, not replacing democratic functions.
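Mechanically, much of this consultation analysis comes down to clustering free-text responses into themes. The sketch below uses scikit-learn's TF-IDF vectorizer and k-means as a simple stand-in; the sample responses are invented, and a real deployment like Engaged California's would involve far richer language models and human review.

```python
# Minimal theme-clustering of free-text consultation responses.
# Sample responses are invented; real pipelines use richer models.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

responses = [
    "Rebuilding permits take too long after the fires",
    "My insurance company refuses to renew our policy",
    "We need fire-resistant building codes for new housing",
    "Insurance premiums doubled even outside the burn zone",
    "Affordable housing stock was destroyed and must be replaced",
    "Brush clearance rules should be enforced before next season",
]

# Represent each response as a TF-IDF vector, then group into themes.
vectors = TfidfVectorizer(stop_words="english").fit_transform(responses)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for theme in range(3):
    print(f"Theme {theme}:")
    for resp, lab in zip(responses, labels):
        if lab == theme:
            print("  -", resp)
```

Note what the pipeline preserves: the residents' own words come through verbatim inside each theme, which is the property the Engaged California example highlights.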
What’s needed to get there
For these approaches to succeed, countries need more than just trained leaders. The European approach could provide a blueprint, but it is still in its early stages. The EU AI Act establishes important principles but lacks specific requirements for public infrastructure development. Europe's existing EuroHPC supercomputers were designed for scientific research, not for training general-purpose or generative AI models, and they cannot support commercial or government use at scale.
The infrastructure gap is substantial. The US recently announced the $500 billion Stargate initiative for AI infrastructure, while estimates suggest Europe would need €500-700 billion by 2030 to reach a roughly 16% share of global AI computing power, proportional to its economic weight.
This creates a structural disadvantage across all sectors. Without infrastructure investment, European institutions remain dependent on foreign platforms.
We also need harmonized regulatory frameworks that go beyond current efforts. While the AI Act sets standards for transparency and accountability, there should also be stronger AI interoperability frameworks across member states. Mandatory audits for AI systems used in public governance and clear data-handling rules are essential minimums.
Most critically, countries need public-private partnerships. This doesn't mean excluding private innovation, which remains fundamental. It means creating public AI structures in which companies collaborate with governments under clear oversight, with ethics boards that include civil society organizations, ensuring systems reflect the diversity of the communities they serve.
This is not fear of the future. It’s recognizing that public policy requires different standards than consumer technology. When doctors use AI to detect cancer, they're using systems specifically trained on medical imaging, validated through strict testing, and deployed with clear protocols. Public policy requires the same rigor.
The world is at a crossroads. AI will reshape governance and public policy, just as it has reshaped many other areas of our society; that much is inevitable. The real question is whether democratic societies will build their own public AI infrastructure, or drift into a future where “Ministers of AI” appear with no accountability and dependence on private platforms sets the rules in secret.
Proper training is fundamental to using this technology well. Leaders should not just approve regulations but deeply understand the technology's uses, constraints, and, above all, the road ahead. AI can genuinely help: aggregating public input, identifying patterns in diverse data, and modeling scenarios. But human judgment must always guide final decisions. That's not optional. It’s democracy working side-by-side with technology.
Margrethe Vestager, the EU's former digital chief, said, "The way we deal with technology also shows what we expect of our democracy and of our societies." Government leaders should expect technology to serve democratic values, not subvert them through negligence or naivety.
Policy-making demands more than consumer chatbots or vague AI-driven roles in government. It requires a smart, confident, and accountable approach, one that balances innovation with responsibility. Citizens deserve this from their leaders.