Perspective

AI’s Dirty Secret: Why Policymakers Need to Wake Up to Its Energy Footprint

Pupak Mohebali / Jun 11, 2025

Extraction Network 1 by Kathryn Conrad & Rose Willis / Better Images of AI / CC-BY 4.0

AI is reshaping the way governments operate and deliver public services. However, as this technology becomes more deeply embedded in public infrastructure, a critical issue remains overlooked: the vast and largely unmeasured energy demands behind artificial intelligence systems.

Despite growing concern over the environmental costs of digital technologies, we lack a comprehensive understanding of AI's full energy lifecycle, from training to deployment to maintenance. Existing data is fragmented, vendor-controlled, and often excludes essential details such as energy sources, regional variations, and long-term carbon impacts.

At the same time, public sector leaders face unmet needs. There are no standardized frameworks to measure or report AI energy use, and few governance tools exist to ensure sustainability is embedded into AI procurement and policy decisions. Policymakers are left with minimal guidance and limited leverage when it comes to negotiating responsible AI adoption.

This article highlights the scope of AI’s energy footprint and the risks of inaction, and outlines a practical policy framework for governments to adopt. It also explores why the environmental burden of AI is not just a technical issue, but also an ethical and geopolitical one.

The public sector’s AI dilemma

Governments across the world are adopting AI tools for a range of applications, from immigration control to healthcare triage, from predictive analytics to citizen services. This wave of digital transformation is often framed as efficient, cost-saving, and innovative. But while AI promises faster decision-making and improved service delivery, it also introduces a hidden trade-off: its carbon cost.

The environmental impact of digital technologies has long been underestimated, especially when those systems are outsourced to third parties or run in remote cloud data centers. AI takes this to another level. Training a single large AI model can emit as much carbon as five average cars over their entire lifetimes, according to researchers at the University of Massachusetts Amherst, and today’s large-scale foundation models such as GPT-3 or PaLM are substantially larger than the models in that study. And that’s just the training phase. Running these models, known as “inference,” also consumes substantial energy, particularly at scale. A 2023 study from the University of Washington found that serving a tool like ChatGPT could use as much as 500,000 kilowatt-hours per day, depending on user volume and infrastructure.
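
The scale of such estimates is easier to grasp with a rough back-of-envelope calculation. The sketch below multiplies an assumed per-query energy cost by an assumed daily query volume; both inputs are illustrative assumptions rather than measured values, which is precisely why published estimates vary so widely.

```python
# Back-of-envelope estimate of daily inference energy for a chatbot-style
# service. Both inputs are illustrative assumptions, not measured values.

WH_PER_QUERY = 2.9        # assumed energy per query, in watt-hours
QUERIES_PER_DAY = 170e6   # assumed daily query volume

daily_kwh = WH_PER_QUERY * QUERIES_PER_DAY / 1000  # Wh -> kWh

# A typical US household uses roughly 30 kWh of electricity per day.
households_equivalent = daily_kwh / 30

print(f"Estimated daily energy: {daily_kwh:,.0f} kWh")
print(f"Roughly equivalent to {households_equivalent:,.0f} US households")
```

Under these assumptions the total lands near 500,000 kWh per day; change either input and the answer moves proportionally, which is why transparent, standardized reporting matters more than any single headline figure.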

In the public sector, the consequences of this are twofold. First, governments may be contributing to emissions without realizing it, thereby undermining their climate commitments. Second, the lack of transparency from AI providers prevents public servants from making informed, ethical choices.

Despite growing adoption, most governments remain unaware of AI’s true energy demands. Existing procurement and sustainability policies often overlook the hidden carbon cost of AI systems. This lack of oversight calls for a clearer, more actionable framework for public sector accountability, one that recognizes the unique intersection of digital transformation and environmental stewardship.

Emerging examples highlight the stakes:

  • The UK government is piloting an AI tool named ‘Humphrey’ to help local authorities improve efficiency. While the tool is touted as reducing administrative load, there is little public information on its energy footprint.
  • AI systems are increasingly used to optimize heating and lighting in buildings. At 45 Broadway in Manhattan, BrainBox AI claims to have reduced HVAC energy consumption by 15.8%, resulting in an annual savings of $42,000.
  • The US Department of Energy is exploring how AI can support the deployment of clean energy, including faster siting and permitting. However, these initiatives also raise questions about the underlying energy demand.

These case studies demonstrate both the promise and environmental pitfalls of AI, underscoring the need for oversight, transparency, and accountability.

Why no one has the full picture

One of the most significant issues is the lack of a consistent standard for measuring and reporting AI’s energy use. Tech companies treat this data as proprietary, while governments often lack the technical capacity to request or evaluate it. Even researchers struggle. Estimates vary widely depending on what is being measured (training, inference, hardware, cooling) and whether the underlying infrastructure is powered by renewable energy or fossil fuels.

This black box problem is particularly dangerous in a policy context. It leaves governments vulnerable to greenwashing, with vendors touting “sustainable” AI without any verified data. It also stifles accountability. How can public institutions justify large-scale AI adoption without knowing whether it aligns with their environmental values?

To address these challenges, researchers and policymakers should advocate for standardized energy metrics for AI systems, invest in independent monitoring infrastructure, and encourage legislation that mandates transparent reporting of AI energy use. Independent oversight bodies, similar to those used in data protection, could be tasked with auditing energy claims and enforcing compliance. Initiatives such as the Sustainable AI Coalition and AI for the Planet Alliance provide promising collaborative models.

To date, most AI governance frameworks have focused on fairness, bias, and transparency, but they rarely address environmental accountability. This omission leaves a critical blind spot in public sector tech strategies.

If we continue down this path, we risk repeating the same mistake made in the early days of cloud computing: prioritizing scale and convenience over sustainability and ethics.

A framework for responsible AI energy governance

To avoid sleepwalking into an energy crisis of our own making, policymakers need to treat AI’s environmental footprint as a serious governance issue. This involves creating standards, demanding transparency, and integrating sustainability into AI procurement and deployment. While this framework is ambitious, it is not without challenges, including vendor pushback, resource constraints, and technical complexity.

Here’s what a practical policy framework could look like:

  • Energy Disclosure Standards: Require AI vendors to disclose the estimated energy usage of both training and deploying their models, especially for public sector use. For example, a local authority procuring an AI tool for housing applications should be able to compare energy usage just as it would compare cost or privacy standards (a hypothetical sketch of such a comparison follows this list). Similar disclosure standards already exist for vehicle emissions and building performance, setting a precedent for AI.
  • Sustainability Audits for AI Procurement: Public sector buyers should introduce energy audits as part of the procurement process. The “AI Procurement in a Box” framework provides a strong foundation for this. For instance, agencies could include energy impact as a line item in AI contract bids, much like accessibility or cybersecurity requirements.
  • Model Labelling and Benchmarks: Similar to appliance energy labels, AI models could be scored on their environmental impact. For example, just as fridges carry energy efficiency ratings, AI systems could be assigned "AI energy grades" (the sketch following this list illustrates one possible grading scheme). This would help non-technical decision-makers select models with both performance and sustainability in mind. A hospital deciding between two diagnostic algorithms could weigh energy efficiency as part of the final evaluation. The EU Code of Conduct for Data Centers, a voluntary initiative that sets out best practices for improving data center energy efficiency, provides a practical foundation. The Code promotes energy-saving actions such as server consolidation, temperature control, and the use of renewable power sources. Importantly, the Code has been referenced by the European Commission’s own Joint Research Centre and is used as a guiding benchmark by numerous public and private organizations in Europe. Future adaptation of this approach could lead to a recognized energy certification system for AI models, influencing how governments structure digital procurement standards.
  • Open Energy Data Collaboration: Establish public-private partnerships to develop open datasets and benchmarks for AI-driven energy use. These would support better decision-making across the public sector and create pressure on vendors to compete on accuracy and efficiency. Canada’s Open Government initiative offers a potential model.
  • Alignment with Digital ESG Frameworks: AI’s environmental impact should be integrated into public digital ESG strategies, alongside ethics, equity, and data privacy. For example, a city that tracks emissions for public buildings could expand this to include digital tools used in its operations.
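
To make the disclosure and labeling ideas above concrete, the sketch below shows one way a procurement team could compare vendor bids on energy terms. Everything in it, from the field names to the grade thresholds and the vendor figures, is a hypothetical illustration rather than an existing standard.

```python
from dataclasses import dataclass

# Hypothetical disclosure record and "energy grade" scheme, illustrating the
# kind of comparison a procurement team could make. Field names and grade
# thresholds are invented for this sketch, not drawn from any real standard.

@dataclass
class EnergyDisclosure:
    model_name: str
    training_kwh: float             # one-off energy to train the model
    inference_wh_per_query: float   # marginal energy per request
    renewable_share: float          # fraction of supply from renewables, 0..1

def energy_grade(d: EnergyDisclosure) -> str:
    """Assign an appliance-style grade from per-query inference energy.

    Thresholds are illustrative; a real labeling scheme would be set by
    regulators or a standards body.
    """
    # Weight the per-query energy by the non-renewable share of supply.
    effective_wh = d.inference_wh_per_query * (1 - d.renewable_share)
    if effective_wh < 0.5:
        return "A"
    if effective_wh < 2.0:
        return "B"
    if effective_wh < 5.0:
        return "C"
    return "D"

# Comparing two hypothetical bids for the same service:
bids = [
    EnergyDisclosure("vendor-a-model", training_kwh=1.2e6,
                     inference_wh_per_query=3.0, renewable_share=0.4),
    EnergyDisclosure("vendor-b-model", training_kwh=4.0e5,
                     inference_wh_per_query=0.8, renewable_share=0.9),
]
for bid in bids:
    print(bid.model_name, "grade:", energy_grade(bid))
```

Even a scheme this simple would let a non-technical buyer put two bids side by side; the hard policy work lies in standardizing what vendors must measure and verifying the numbers they report.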

These measures won’t solve everything overnight, but they would introduce much-needed visibility and accountability into AI’s environmental cost, especially in the public sector, where taxpayer-funded tools must align with climate goals and democratic values.

But these policy tools are as much about values as about compliance. The environmental footprint of AI intersects deeply with questions of fairness, justice, and global inequality.

Ethics, equity, and the environmental burden

AI’s energy demands are more than a climate concern; they are also a matter of justice. Most of the computing power behind today’s large-scale models comes from a handful of tech giants concentrated in the Global North. But the environmental fallout, from carbon emissions to resource-intensive hardware production, is felt globally.

Without energy transparency, countries with limited resources may be excluded from AI development or be left with tools that conflict with their climate goals. Meanwhile, public sector buyers in richer nations risk locking into high-emission contracts that contradict national sustainability policies.

Case studies from around the world reflect this imbalance. In countries where electricity grids are already strained, deploying large-scale AI tools could exacerbate energy insecurity. For example, a 2023 Nature study on AI and power usage flagged that many low- and middle-income countries may be particularly vulnerable to AI-driven energy demand, especially when reliant on imported infrastructure or outsourced services.

Policymakers must ask: Who benefits from AI’s expansion, and who bears the environmental cost? Without transparency, that cost may be disproportionately shifted onto the world’s most vulnerable communities, those already impacted by climate change.

From awareness to action: a call to policymakers

AI’s energy crisis remains largely invisible. That is exactly why it demands urgent policy attention.

In the same way we’ve come to expect privacy impact assessments or ethical AI reviews, energy should be part of every major public sector AI decision. Governments must lead by example: it is not enough simply to adopt AI; they must do so responsibly, transparently, and in line with democratic values.

This means:

  • Asking tough questions about energy use during procurement
  • Demanding disclosure and labelling from AI providers
  • Funding independent research to track digital emissions
  • Including energy impact in national digital and climate strategies
  • Providing incentives for low-energy AI innovation through public sector grants or green tech partnerships
  • Publishing annual digital sustainability reports for government technology use

If we care about the future of public technology, we can’t afford to ignore its environmental price tag. It’s time for policymakers to stop seeing AI as an abstract cloud of code and start treating it like the real-world infrastructure it is: powerful, physical, and anything but free.

Authors

Pupak Mohebali
Pupak Mohebali is an AI Policy Research Consultant and founder of Ink IQ Hub, where she advises on ethical AI and global tech governance. Her work focuses on making complex AI developments more transparent, sustainable, and aligned with democratic values.
