Policymakers Have to Prepare Now for When the AI Bubble Bursts
Mark MacCarthy / Nov 24, 2025

In chapter 12 of his groundbreaking 1936 treatise, The General Theory of Employment, Interest, and Money, economist John Maynard Keynes wrote, “When the capital development of a country becomes a by-product of the activities of a casino, the job is likely to be ill-done.” Commentators (such as here and here) have begun to recycle this famous indictment of casino capitalism in discussions of today’s AI bubble.
Policymakers are not going to tell the private sector what investment decisions to make. Casino capitalism is in no danger. Institutions and individuals with money will continue to put their assets to work on their favorite projects — and to lose their investments if they guess wrong.
But there is no longer any serious question about whether the casino economy has produced an AI bubble. By almost any measure, investment in AI infrastructure far exceeds any foreseeable returns. JP Morgan estimates that the projected $5 trillion global investment in AI infrastructure will require revenue of $650 billion a year from AI products, indefinitely, to give investors a reasonable 10% annual return. That is not going to happen. The real questions are when the bubble will burst, how severe the consequences will be, and what policymakers should do now to prepare for the inevitable downturn.
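To see the scale of the problem, it helps to run the arithmetic. The sketch below is a rough back-of-envelope reconstruction, not JP Morgan’s actual model; the perpetuity framing and the margin interpretation are illustrative assumptions.

```python
# Back-of-envelope check of the JP Morgan-style arithmetic. The perpetuity
# framing and the margin reading are illustrative assumptions, not the
# bank's published model.

investment = 5_000_000_000_000   # projected global AI infrastructure spend: $5T
required_return = 0.10           # the "reasonable" 10% annual return

# Treated as a perpetuity, $5T of capital must throw off 10% of its value
# in profit every year, indefinitely.
required_annual_profit = required_return * investment
print(f"Required annual profit: ${required_annual_profit / 1e9:,.0f}B")  # $500B

# JP Morgan's $650B/year figure is revenue, not profit: sales must also
# cover operating costs, depreciation and taxes. On that reading, turning
# $650B of revenue into $500B of profit would require margins near 77%.
implied_margin = required_annual_profit / 650e9
print(f"Implied profit margin: {implied_margin:.0%}")
```

However the assumptions are tuned, the required cash flows dwarf today’s AI product revenues.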
Policymakers should take three steps now, before the AI bubble pops.
- First, regulators and legislators should put in place utility rate designs that will protect ordinary electricity ratepayers from picking up the bill for stranded utility investments when AI companies cut back on their unprecedented energy demands.
- Second, federal, state and local agencies should explore now the possibility of acquiring distressed energy infrastructure assets and using them for needed public purposes.
- Third, agencies should step up efforts to support AI — not the construction of ever larger language models but the application of AI to diverse national needs including healthcare, education, public safety and government services.
The AI bubble
The focus of much AI investment is the construction of giant data centers to train and operate AI language models. These enormous data centers draw unprecedented amounts of power. In early November 2025, Construction Review reported that there are six data centers under construction in the US with over one gigawatt (GW) of power—an amount sufficient to power 750,000 homes. In Louisiana, Entergy has received regulatory approval to supply up to 2.2 GW of power to Meta’s AI data center. Meta is contemplating expanding its power draw to 5 GW. Goldman Sachs has estimated that building the energy infrastructure for AI data centers will require $1.4 trillion by 2030.
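For a sense of scale, the short sketch below converts those power figures using the 750,000-homes-per-gigawatt ratio cited above; the implied per-household draw is an average, not a claim about any particular utility.

```python
# Unit conversions for the data center power figures cited above.
WATTS_PER_GW = 1e9
HOMES_PER_GW = 750_000  # the homes-per-gigawatt ratio used in the article

# Implied average draw per household under that ratio:
print(f"Average household draw: {WATTS_PER_GW / HOMES_PER_GW:,.0f} W")  # ~1,333 W

# The 1 GW construction threshold, Entergy's approved 2.2 GW supply, and
# Meta's contemplated 5 GW expansion, in household equivalents:
for gw in (1.0, 2.2, 5.0):
    print(f"{gw:>3} GW ≈ {gw * HOMES_PER_GW:>9,.0f} homes")
```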
OpenAI alone is planning to spend $1.4 trillion over the next eight to ten years on AI data centers and infrastructure, even though its current annual revenue is, at best, $20 billion. The major hyperscalers — Amazon, Microsoft and Google — have stable alternative businesses that they can use to fund their data center investments. Meta has a $27 billion private debt deal in connection with its Louisiana facilities. These data center ventures have some odd features, including circular-flow financing, whereby Nvidia provides funds to a data center on the condition that it buys Nvidia chips, and the phenomenon of phantom data centers, created when a company applies to several utilities at once for the same data center project.
But these are symptoms — or rather surface disturbances — reflecting the underlying AI instability. There is no doubt that AI language models are a promising new technology that will provide efficiencies and innovations throughout the economy and society. But there are too many dollars chasing the same objective. The internet, too, was a promising technology in the late 1990s, but telecom companies over-invested in transmission facilities for internet traffic. When the telecom crash came in 2002, half a million people lost their jobs, the Dow Jones communication technology index dropped 86 percent, and the wireless communications index cratered by 89 percent. The $2 trillion decline in telecom stock value triggered an additional $5 trillion decline in the broader market. Twenty-three telecom companies went bankrupt, including the telecom giant WorldCom, whose collapse was at the time the single largest bankruptcy in American history.
An AI crash could produce similar pain. Amazon founder Jeff Bezos has said this over-investment is fine because at the end of the painful transition, the country will be left with productive technology. But there is a further element behind this AI bubble. There is a strong likelihood the investment will be largely wasted, just as Meta’s $45 billion bet on the metaverse was. Instead of leading to fundamental improvements in language models or to artificial general intelligence (AGI), the industry might be on a road to nowhere.
The limits of machine learning language models
AGI is a fuzzy concept, without an agreed-upon definition or operationalization that would measure when developers of AI models had achieved it. In a recent Tech Policy Press podcast, tech journalist Brian Merchant defined AGI as “the promise of doing everything.” Or as economist Pascual Restrepo puts it, “AGI is the knowledge or technology for transforming raw compute into all types of useful work.”
Once AGI is specified as a magic production function that takes compute as an input and produces anything you want as output, it becomes clear why companies are spending their fortunes to get to it. With AGI, they can establish a potentially permanent monopoly over an essential part of the nation’s economy; without it, they would be relegated to the sidelines, while another company takes center stage. And it becomes a race a company cannot afford to lose. As Meta CEO Mark Zuckerberg put it:
If you build too slowly and then super intelligence is possible in three years, but you built it out assuming it would be there in five years, then you’re just out of position on what I think is going to be the most important technology that enables the most new products and innovation and value creation in history.
But that definition also illustrates why the goal of AGI is a chimera. It is not a feasible engineering project to build a machine that can do everything. In 2017, Andrew Moore, then dean of Carnegie Mellon’s School of Computer Science, threw cold water on this idea, saying “… no one has any idea how to do that. It’s real science fiction. It’s like asking researchers to start designing a time machine.”
Even if it were something specific enough to be achievable, it is clear that scaling language models won’t produce AGI. GPT-5 was a modest improvement over earlier models, and Google’s just-released Gemini 3, while apparently the best of the lot so far on benchmarks, is no AGI breakthrough. Cognitive scientist Gary Marcus was an early and lonely voice insisting on the limits of machine learning language models. Years ago, AI pioneer Yann LeCun wrote, “A system trained on language alone will never approximate human intelligence, even if trained from now until the heat death of the universe.” He recently left Meta to found a startup, just as the company bets everything on scaling machine learning language models as the way forward.
In March 2025, the Association for the Advancement of Artificial Intelligence’s Presidential Panel on the Future of AI Research published a survey showing that 76% of the association’s AI researcher members think that “scaling up current AI approaches” to yield AGI is “unlikely” or “very unlikely” to succeed. Recognition of the limits of language models is now the consensus in the AI research community, even as the scaling hope lives on as a financially bloated zombie in the commercial AI labs.
The demand for gigantic AI data centers is largely driven by training demand, as much as 80% according to one industry executive, and so returns depend on the validity of the industry’s hope that training ever-larger models on more data will produce a breakthrough. But if the industry’s bet that large data centers will produce dramatically better AI services proves to be unfounded, as seems to be the consensus in the research community, then what happens? Should the government step in to rescue the companies that lost their scaling bet? Sam Altman denied that his company needed or wanted a government bailout to reduce the risk of its AI infrastructure spending spree, after the idea was floated by the company’s chief financial officer. Policymakers should take him at his word when he says, “If we screw up and can’t fix it, we should fail.” He and his investors took the investment risk and should face the downside when their bet does not pay off. Perhaps another company, likely Microsoft, would take over its management and operations, acquire its intellectual property and engineering talent, and the world would go on.
What should policymakers do now in the energy sector?
Governments should not rescue investors and companies that misjudged the way forward on AI technology. But it does not follow that they should do nothing. The implications of a broad AI industry pullback are not confined to the tech industry. An AI industry collapse might lead to a protracted stock market downturn, as the telecom crash did. The release of Nvidia’s third quarter 2025 earnings report briefly lifted tech stocks and the market generally, but its stock and the broader market resumed their declines, reflecting investors’ concerns about an AI bubble. These portents lend some credence to the possibility that an implosion in AI-related stock values could trigger broader market declines. It is also likely that a broad pullback on AI capital expenditures could lead to a recession. As Paul Kedrosky noted, AI capital expenditures are “eating the economy” and will reach $312 billion in 2025 — approximately 1.2% of US GDP, more than the GDP share of telecom investment before the 2002 crash. When that investment goes away, what props up economic growth?
What about the effects on the energy sector? If the industry loses its scaling bet and doesn’t need massive AI data centers in anything like the quantity they had previously demanded, then what happens to the energy generation and transmission facilities built to power them? As the Wall Street Journal suggests, “If the AI hype is overblown or the tech industry doesn’t ultimately need as much electricity as projected, other customers would get stuck with the infrastructure costs.”
As former Energy Department official David Klaus and I argued in a series of commentaries and op-eds (here, here, here and here), legislators and regulators need to do three things to address these issues. First, regulators should create a distinct tariff class for AI data centers, designed to recover not just the cost of the energy they consume but also the fixed costs of energy generation and grid upgrades. Second, they should require data center developers to pay non-refundable, up-front grid connection fees that cover each data center’s anticipated share of generation, transmission and distribution infrastructure costs. Third, regulators should require data center operators to post performance bonds or sign revenue guarantee agreements so that they have to pay minimum charges even when their power demand comes in lower than projected, as in the sketch below. In January 2025, the Department of Energy released a useful compendium of additional rate designs for large load customers that could be deployed to protect ordinary ratepayers from the risks of stranded investment.
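To make the third mechanism concrete, here is a minimal sketch of how a minimum-revenue guarantee might be billed. The charge structure and all numbers are hypothetical illustrations, not drawn from the DOE compendium or any actual tariff.

```python
# Hypothetical minimum-revenue ("take-or-pay") guarantee: the data center
# pays the greater of its actual usage charges or a contracted revenue
# floor that recovers the fixed infrastructure costs built on its behalf.
# All rates and quantities below are illustrative assumptions.

def monthly_bill(actual_kwh: float, rate_per_kwh: float,
                 contracted_kw: float, floor_per_kw: float) -> float:
    """Bill a large-load customer under a minimum-revenue guarantee."""
    usage_charge = actual_kwh * rate_per_kwh
    revenue_floor = contracted_kw * floor_per_kw  # fixed-cost recovery
    return max(usage_charge, revenue_floor)

# A data center that contracted for 1 GW (1,000,000 kW) but, after an
# industry pullback, runs at only a 20% load factor (~730 hours/month):
actual_kwh = 1_000_000 * 730 * 0.20
print(f"Usage-only bill: ${actual_kwh * 0.06:,.0f}")  # ~$8.8M
print(f"Bill with floor: ${monthly_bill(actual_kwh, 0.06, 1_000_000, 15.0):,.0f}")  # $15M
```

The point of the floor is the allocation of risk: the gap between projected and actual demand is borne by the data center operator that created it, rather than shifted onto ordinary ratepayers.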
In addition, federal, state and local policymakers should also explore a possibility raised in a comprehensive and thoughtful report on data center project finance from the Center for Public Enterprise. The report urges policymakers to begin now to devise “an investment strategy centered on acquiring distressed energy infrastructure assets and repurposing them to serve future demand.” Rather than allowing stranded energy resources to sit idle, agencies and legislatures should now begin a planning process for putting them to productive, equitable, common-good uses. It is better to begin that conversation early and be ready with a plan before the crash happens.
AI industrial policy
Policymakers should embark on a third and crucial policy initiative now in anticipation of the bursting of the AI bubble. AI is an important and valuable new tool — a normal technology, in the phrase of computer scientists Arvind Narayanan and Sayash Kapoor. The underlying models are probably about as good as they are going to get, but specific applications based on these models will diffuse gradually through the economy and society. AI language models will still use large amounts of inference-time compute to deliver AI services. This is why the AI industry will still need AI data centers and the energy to power them, though not on the massive, concentrated scale that would be needed for training runs in a vain search for AGI.
But this transition will not happen automatically. Government measures to support AI are needed to ease the transition to the more productive and equitable use of this technology. The negative reaction to OpenAI’s trial balloon for a government bailout should not extend to other proposals for government support. There should be no government subsidies for the industry’s misguided AI scaling quest to build larger and larger AI models in the hopes of a breakthrough. Rather, policymakers need to recognize and plan for the fact that the productive and equitable application of AI to diverse national needs including healthcare, education, public safety and government services will require active government support.
As former Google CEO Eric Schmidt notes in rejecting the industry’s obsession with AGI, this focus on the application of AI to real-world problems is what China is doing. Its State Council published an ambitious “AI+” initiative in late August, promising full government policy support for initiatives aiming to implement AI in specific use cases. As described by Kendra Schaefer, a tech policy analyst at the business research firm Trivium, in a recent podcast, the initiative focuses on AI adoption in six sectors: science, industrial manufacturing, consumption, quality of life, governance and global cooperation. In each case, the government identified a specific unsolved problem, such as limited results from science investment or underconsumption, and sent the message to industry that there would be policy support for company AI initiatives aimed at solving it. As more specialized agencies implement this general policy umbrella, the initiative will mature into more concrete support for AI in manufacturing, transportation, healthcare, finance, agriculture, consumer services, energy, governance and public services.
There is some indication that China’s approach has worked so far. The Chinese companies DeepSeek and Alibaba provide open-source models that have been developed at a fraction of the cost of US models and that are in a position to become the default input to AI applications around the world, even in the US. One of the partners in the venture capital firm Andreessen Horowitz thinks there is an “80% chance” that the startups seeking his company’s investment support are using “a Chinese open-source model.” Under its new AI+ policy, the Chinese government intends to support companies throughout the Chinese economy and society in using these domestic open-source models as the foundation for needed products and services.
The Trump administration is not averse to taking steps to support the US AI industry. Its latest initiative is to provide $1 billion in loan guarantees to the energy company Constellation to support the reopening of Three Mile Island to supply power to a Microsoft data center. But this is exactly the wrong sort of industrial policy, encouraging industry to go off on a wild-goose chase for AGI through AI language model scaling. The administration can and should reverse course and focus on AI use cases rather than AI model development.
The fear of making a mistake in industrial policy should not paralyze policymakers. The country has crossed the Rubicon on active government efforts to assist industries to achieve important government objectives. The only question is how to do it intelligently in the case of AI. Government agencies should focus policy support on sector- and task-specific use cases of AI to ensure that funding shortfalls, coordination difficulties, externalities and other market failures do not prevent the emergence of effective and equitable technological approaches to some of our most pressing national problems.