
What to Do If the AI Bubble Bursts

Justin Hendrix / Apr 12, 2026

Audio of this conversation is available via your favorite podcast service.

If you read, watch, or listen to financial news, you’ll find there is a boom in discussion over whether the AI boom is a bubble, and what the consequences might be if it bursts. Today’s guest says that if such a crash occurs, it will represent a significant policy opportunity—a potential point of intervention that could lead to meaningful reform of the tech sector.

Asad Ramzanali is the Director of AI and Technology Policy at the Vanderbilt Policy Accelerator for Political Economy and Regulation, and author of the recent report, "After the AI Crash."

"Instead of waiting for the crisis and hastily developing insufficient policies, lawmakers should prepare for this anticipated crisis now," he says.

What follows is a lightly edited transcript of the discussion.

Asad Ramzanali:

I'm Asad Ramzanali. I'm the Director of AI and Tech Policy at the Vanderbilt Policy Accelerator.

Justin Hendrix:

Asad, I'm pleased to have you join me today. We're going to talk about this report you've just put out from the Vanderbilt Policy Accelerator called “After the AI Crash.” The title assumes there will be a point in time that we can refer to as after the AI crash. Why are you so convinced that that crash is coming?

Asad Ramzanali:

Yeah, and appreciate you having me, Justin. I started this project with the question of, if there is an AI crash, what then? There was a lot of discussion at the end of last year around, are we in a bubble? Are we not in a bubble? And my thought was, if we are, are we prepared for that as a policy community? Do we know what to do about it? So I wasn't convinced that there was one when I started writing it, but when I got into the weeds of researching this paper, I became convinced that there is a pretty bad basic financial situation going on.

So I don't start from the premise that we definitely will have an economic crash, but I do start from the premise that there's a plausibility that we do. And so I try to lay out in the report, here's how we might get there, here's how it might occur. And if it does, the main part of the report is how we think about a policy response.

Justin Hendrix:

And it seems like your fundamental argument about why a bubble exists is similar to what we've seen various other experts point to: that there's a mismatch between the capital expenditures on AI, particularly from the large firms, the billions and billions being invested, and the rate of return. That's essentially it, there's a basic math problem.

Asad Ramzanali:

There's a basic math problem where we have … J.P. Morgan anticipates that we're going to invest $5 trillion between now and 2030. When you match that with the tens of billions that are coming in as revenue from these AI systems, there's a basic math problem. And so at that fundamental level, that's the only thing I'm going off of: these are huge projections of ‘AI is going to change everything.’ That might be true for a lot of parts of society, but whether we anticipate that revenues will catch up to the trillions of dollars of investment in the next five years is a completely different question from how good the technology is, or what the good and bad societal implications of adopting it are. It's a math problem.

Justin Hendrix:

There are counterarguments to this point of view. Some of them hold that we're looking at this the wrong way, that this is more like electricity than the dotcom bubble, or more like the steam engine, which is going to radically change a variety of sectors. Do you feel like you've seen any counterargument to the bubble thesis that still gives you any doubt?

Asad Ramzanali:

The place where I could be wrong is how quickly these technologies get adopted within businesses that pay for them. I use a lot of J.P. Morgan's analysis on the finance side, but Bain and Company, the consultancy, also did an analysis. The way they look at it is, if we shifted all of society's technology spending, and most of that is enterprise spending, what businesses spend on backend systems, if all of that moved to AI, we would still have hundreds of billions of dollars of gap per year in revenue. So you have to assume not just that all of it shifts in the next five years, but that it expands significantly. And so again, I'm not of the view that I have to be right, that there is a crash. I'm of the view that it's totally plausible, and if there is one, we should be ready as a policy community for what that means, because the financial structure of how the money's being spent impacts all of us. It's all of our money, in some way or another, that's mixed into everything that's happening.

Justin Hendrix:

Well, let's get into that a bit then as far as being ready. It's partly, I assume, to stave off the worst consequences, but if we look back to the housing crisis, to the '08 period, there was an opportunity then to reshape the economy that some would argue was squandered, that more could have come out of that moment to fundamentally shift the way the American economy works. Is that the type of thing you're thinking about here, that there's a little bit of both? A little bit of stave off the worst, and then also see it as an opportunity if it occurs?

Asad Ramzanali:

Yeah, that's right. I think about the 2007-2008 crisis, where a housing financial crisis caused an economy-wide downturn that impacted everybody. Every industry, every person who was saving up for retirement was impacted. The way I think about that is that the responses were, one, we pumped a bunch of capital in to bail out companies and banks, but not people. And that's where we should learn from it. There was a huge political reaction, you'll remember, on the Right and the Left, the Occupy Wall Street movement. So much of the political backlash held that out to be a central tenet of what they were reacting to.

Second, the types of things we did, I look at in a couple of different buckets. Dodd-Frank set up a system that was really technocratic. There was an inter-agency group, the Financial Stability Oversight Council, that met a couple of times a year and figured out which banks and non-banks would be designated systemically important, to try to avoid another 2008-like scenario, to try to avoid the government bailing out banks. And yet, not that long ago, we had to bail out Silicon Valley Bank, a bank that was on nobody's list as systemically important at that level.

And so that's where I go back to, did that work? Do we feel good about that? And I think part of when I talk to the people who are working on that response, when you're reacting to a crisis, you don't have the imaginative capacity to think about the big ideas. And so this is an attempt to say, let's do that thinking now, let's lay that out, let's have the conversation, let's have the debate about those issues now.

Justin Hendrix:

You brought up the idea of a bailout. It feels to me that the kind of politics around a potential bailout of tech firms are pretty poisonous. That would not be an attractive idea to pretty much anybody at this point.

Micron's $50 billion expansion project at their headquarters in Boise, Idaho, on April 10, 2026. (Photo by Alex Milan Tracy/Sipa USA)

Asad Ramzanali:

I think that's right if you conceive of a bailout as a check written by the government to a technology firm. I do not think that's right for other types of things that might happen. The CFO of OpenAI mentioned loan guarantees, and that got a big backlash because it would be seen as a big bailout. What I worry about is that we're already leaving billions of dollars on the table in tax breaks at the state level for data center build-out. Doesn't that start to look like a slow bailout? So that's the scenario I worry about: that we do it in other ways that look like tax subsidies, or like long-term contracts where the government is overpaying for a service.

So that's what I'm watching for. I'm not saying we're too far down that road now, although on tax subsidies, we are pretty far down it. But the check to a company, I don't think that's a good idea, for many reasons. When there's a bank run, there's a systemic problem; it's my money in that bank. There's no such thing as a bank run on an OpenAI or an Anthropic. That analog for the rationale for a bailout doesn't even exist.

Justin Hendrix:

Yeah, I want to get into some of the policies that you think Congress should consider if in fact this bubble does burst. But for anybody listening abroad, or who may not be familiar with some of the things that came out of the '08 crash, let's cover the basics on Dodd-Frank, and maybe even Glass-Steagall as well. There are some parallels here that you bring up. What do folks need to know about some of the models that you're looking at?

Asad Ramzanali:

Yeah, so let's do the couple-of-minutes version. In the 1920s and 1930s, America, and the whole world, experiences the Great Depression. That is in large part a financial crisis. The two members of Congress who led the banking committees, Senator Glass and Representative Steagall, those were their last names, came together on a number of banking reforms in the 1930s, including the 1933 Banking Act, which includes a few sections that we now call the Glass-Steagall Act; it's just four sections of that larger bill.

The basic tenet is that a bank that takes in deposits, that takes in your money, can't also be investing those funds in the market. That basic tenet of separating those kinds of businesses, keeping the risk pools separate so that depositors don't bear the risk of a bank's commercial activity, actually goes back to the Bank of England's charter in the 1600s: banks shouldn't mix financing and commercial activity. So that idea is really old.

In 1933, we codify it in Glass-Steagall and we say that's a thing that shouldn't happen. Over time, Glass-Steagall gets weakened through regulations and court opinions, and it formally gets repealed in the 1990s through the GLBA, a separate law that's not worth getting into. Then 2008 happens, and there's a lot of discussion around whether we should make big structural shifts again, like we did after the Great Depression when we separated banking activities. Instead we took a more technocratic path. That's the setup I'm looking at here: we didn't do Glass-Steagall, we did Dodd-Frank, which was a number of things, one of which was a system to create a listing of which entities are systemically important.

Now, the one caveat is that we did create the CFPB, the Consumer Financial Protection Bureau. And that's one of the ideas I think about here: that happened in part because then-Professor Warren wrote a journal article about the need for a consumer protection agency before the crisis. But the big structural ideas, like postal banking or other universal banking ideas that could create new models in the economy and in financial services, a lot of those came after the crisis. A lot of those discussions happened five years after the crisis. It's the kind of thing where, if you had them on the shelf at the time, they would at least be part of the debate around how to respond.

Justin Hendrix:

Let's get into a few of these policies that you think Congress should consider right now. The first one: you want to stop financial engineering. There are enormous sums of cash, particularly around the hyperscalers, but also outside them in the broader data center investment, and a lot of that is debt-oriented. You also point to this other thing that people have been very curious about, at least on my social media feed. I've seen multiple depictions of this circular equity financing. That seems to really capture people's attention or imagination for exactly what's going on here, all the complicated ownership schemes and debt and loan guarantees, et cetera. How do we get this piece of the house in order?

Asad Ramzanali:

Yeah, so let's take equity, which is investing in and owning parts of a company, and debt as two separate things here. On the equity side of the house, companies investing in other companies is not new. Companies giving financing to their customers is not new. But companies investing in their customers is new at this scale, where you have unprofitable customers getting money to buy from their vendor through an investment from that vendor. And the problem there is that it obscures the actual business incentives that could exist in the market.

Basically, a company is saying, "Here's money to buy my product." That messes with prices, that messes with demand, it messes with all types of questions about what's going on. As far as I can tell, we've never had circular equity investing at this scale, ever. There are one-off examples of this happening in one-off companies or in roll-ups of some minor industry, but not at this scale, nothing like this.

So my view is, if we've never seen anything like this and we're seeing it basically pump up the markets that we're worried about, we should put a halt to it. On the debt side of the house, the big issue is opacity. Many of the hyperscalers are big tech firms that rose as asset-light companies that didn't have to take out massive loans all the time, and that has changed. They're all now borrowing a ton of money in the corporate debt markets. The modern corporate bond market comes about in the 1870s during the railroad bubble. We're now better at overseeing that market; we have transparency rules, we understand how it works. But a lot of this has moved beyond the traditional corporate bond markets to a thing called private credit, which is way more opaque.

But the name is a little funny. It shouldn't be called private credit, because a lot of its investors are your 401(k), your IRA, your life insurance plan. If your parents have a pension, that's where a lot of that money gets invested, and so it has a public impact. Think about Facebook: we hear a lot about their big Louisiana data center, the Meta data center. That is a $27 billion facility that is not on Facebook's books. It is not a loan that they took out. It's not money that they're paying directly. It is a special purpose vehicle, an SPV, a distinct LLC that receives money from a private credit loan.

So we don't know the details of any of this stuff until it gets reported. We don't know how big the problem is. Someone the other day asked me, how much data center financing is in private credit? I don't know the answer, and I've been paying attention to this pretty closely. I have guesses, but that's not useful. We actually don't have a sense of the scale of the problem. So that's where all of this needs to come into the spotlight. We need to know the details of how much of this is going on so that we can understand the risk.

Justin Hendrix:

Well, you brought up the data centers piece of it. I've also been tracking this very closely, trying to collect news reporting across the entire country on data center infrastructure development over the last couple of years, and it's been extraordinary to watch the reporting, the curiosity, and the pushback evolve around what, if you stood on the moon, would appear to be one of the largest infrastructure projects we've ever engaged in. But you talk about this idea of distortive government subsidies. You've just talked about the extent to which the companies may be holding some of these things off balance sheet, making them harder to see through that mechanism, but it's also really hard to understand the scale of even the public investment at this point. A lot of that's speculative too, right, based on tax promises and incentives?

Asad Ramzanali:

That's right. At the state level, we have a race to the bottom where, depending on how you count it, something between 30 and 40 states have a tax subsidy, a tax break, for the construction of data centers. Virginia famously has a pretty big one. And you hear folks at the county level, a different governmental entity within a state, get more excited about data centers because of the tax revenues, at least until recently, when people started showing up to meetings to push back. But one of the things I try to think about is that the tax revenues coming into a county government are money going into one hand while we're losing it from the other.

So Loudoun County makes $1.4 billion a year in tax revenues from data centers, while the state of Virginia leaves $2 billion on the table. Those are the same taxpayers. The people who live in Loudoun County also pay state taxes, and they also benefit from the state. So that's the thing I'm trying to watch: Loudoun County has figured out a mechanism to get benefits from data centers, and even there, you have a lot of pushback. And I think all of the people who are pushing back have legitimate crises in their lives; they're telling their governments that they don't want this. So there's a whole host of issues on data centers.

The one I focus on here is that the tax breaks at the state level have gotten, in my view, out of control. In this race to the bottom, the State of Missouri says that if you invest $25 million and create 10 jobs, you are exempt from all state and local sales and use tax. That is not a lot of jobs. My parents own a small business, an ice cream store in San Antonio, Texas. They employ eight people. They're inching up on that eligibility threshold, which is crazy. That's not the kind of capital investment or economic development that we traditionally chase with 100% tax breaks.

Justin Hendrix:

Well, one of the other ideas you have here is what to do if some of these data centers end up as stranded assets. I've even seen interesting ideas from some artists about what else we could do with data centers that end up defunct or stranded: turn them into community centers, or take advantage of them for housing, or whatever. But you're talking about a public cloud as maybe a more realistic near-term opportunity.

Asad Ramzanali:

Yeah, and the way I think about this is that these special purpose vehicles, these one-off LLCs set up just to manage a data center, or the neoclouds, which I know you all have published on, companies that aren't hyperscalers, that are often extremely levered, so extremely indebted relative to their assets, and that were often previously crypto-mining companies, own a lot of the data centers that are going up today. Those are the types of entities that I most worry will go under. And if they do, if they get into various stages of financial insolvency, I think it would be interesting for us to have public cloud infrastructure. There are hard things you need to work through to get there, but we have mechanisms to manage this.

The Department of Energy, through the national lab system, manages data centers; they have their own. The National Science Foundation has NAIRR, a layer on top of the cloud that allocates cloud resources to researchers who want to use AI for research, so they know how to allocate compute resources as well. So that's where my mind goes. This could be a way for the government to get, on the cheap, a resource that would actually make for useful public infrastructure.

Justin Hendrix:

And I guess going along with that, the idea of sustaining AI R&D for public purposes. You have a section on protecting workers, on the extent to which, should there be a crash, we need to think about what it means for workers. There are lots of folks invested in the boom who also think there are things we need to do to prepare for the impact of AI on workers either way, and I wonder to some extent if some of your ideas here almost make sense either way. You have the proposal, for instance, on ending workplace surveillance. What do we need to do to protect workers in the case of an AI crash?

Asad Ramzanali:

Yeah. If there's a financial crisis caused by over-investment in AI, job loss to me will be both an effect of and a scapegoat for those over-investments. You could see job losses perpetuating because companies will want to continue to invest in AI, or use AI, where it's profitable, and getting rid of labor will be seen as profitable even in the instances where it's not as effective; that's my scapegoat comment. You see companies today laying off people and saying it's because we think AI will do their work. That's not really saying that AI took their jobs; it's saying that AI sounds like a good reason to lay people off.

And so my view is, no matter how we get there, we might see job losses. The three categories I think of are: do the traditional things, like expanding unemployment insurance and getting rid of work requirements for social safety net programs. We do that during every financial crisis; we should do it again. If the job losses are big enough, I think we need something like a Digital Works Progress Administration. During the New Deal, we created the Works Progress Administration, which, by the end, put eight million people to work. This was the government hiring people and putting them to work for public purposes. Right now, we've known for years that state and local governments, and even the federal government, have a shortage of tech workers, and that applies beyond tech to knowledge work. If knowledge workers, and in particular coders, are the ones who lose their jobs, as a lot of the press suggests, we should put them to work for the public purposes we have.

The thing I worry about for the remaining jobs is that companies will try to squeeze as much profit out of them as possible. We've seen this happen in trucking, where for 15 years we've been talking about automation making trucking jobs disappear. That hasn't happened. But what did happen is that truckers are now over-surveilled, and in their surveys they talk about how bad it's become. So we should get rid of that mechanism of worker surveillance and empower people again.

Justin Hendrix:

The other thing you're suggesting here is utility-style regulation, and maybe even a new digital regulator, so that out of this crisis we come away with something durable that perhaps allows us to contain the tech industry going forward.

Asad Ramzanali:

Yeah. The way to think about this is: do we have the state capacity to design the regulatory interventions that are going to be necessary, and that are already necessary? Some of the things I talk about in the rest of the report, like utility-style regulation for what are effectively digital utilities, markets with a high cost of entry where you're not going to see tons and tons of competitors, and that are means to ends, require an entity to administer them, and that's why you need a new regulator. And in utility markets, you should think about which regulatory interventions are actually necessary. I don't propose throwing it all at the wall and doing every kind of utility regulation for every market, just what makes sense.

The one market structure idea, to return to our conversation on Glass-Steagall, is separating the hardware and the software, which I actually think would be really important. Right now, you have speculation in one market, AI models, leading to over-investment in another, data centers. Separating those out would bring a little discipline, where each market actually has the mechanisms to think about supply and demand on its own: is one customer too over-invested or not? So if you own data centers or chips or chip fabs or cloud computing, which is the necessary part of data centers, then you can't also be the one training AI models. You have to separate those businesses. So that's a quick view of some of the regulatory parts of the paper.

Justin Hendrix:

Let me just press you on that one. How would that work in practice? Who would have to be taken apart in your view?

Asad Ramzanali:

Yeah. This goes back to the conversations that the antitrust subcommittee in the House kicked off in earnest in 2019 and 2020. What's interesting is when you look at Amazon as two parts of a business: Amazon.com, an online retailer, and AWS, the web services arm that also owns data centers, its own chip design business, an energy wholesaler, et cetera. A lot of financial folks want to separate those out, because the multiples you get, the way you would invest in them, are those of two very different industries, two very different businesses. So that's the easiest one to talk about, because it's an example where we've known that separating things out could bring financial benefits to investors, and market discipline in a lot of other ways.

Now, it does get complicated in some places, but not so complicated that you can't do it. NVIDIA, the big chip designer, has a lot of investments in AI companies up and down the stack: almost every independent model maker, applications, data companies, neocloud companies. That's where things get complex. When we see all these almost artistic depictions of how complicated the AI stack has become, my favorite depicts it as a plate of spaghetti, NVIDIA's investments are what make a lot of that stuff messy. But we can actually deal with those. We know how to undo an investment; we know how companies can become independent.

Justin Hendrix:

So there are several other ideas in here, but I want to make sure I call out one really novel one, something that seems almost impossible here in the United States: just prosecute fraud.

Asad Ramzanali:

Almost every financial crash in our country's history has involved some degree of fraud: accounting gimmicks, or investment fraud, or banking fraud of some kind. And you'll recall that in 2008, it became a rallying cry that nobody went to prison, that there was no accountability. That feeling, that political backlash, is very real. And while it's not strictly true, one mid-level banker did go to prison, the populace didn't feel like there was accountability for what happened and what everybody else paid for.

In that way, look, I'm not alleging that there's criminal fraud happening now, and those who do prosecute have to stick within the bounds of the prosecutors' handbooks they get, which are about being fair in how we enforce laws. All that to say, aside from 2008, in every other financial crisis we've seen, people went to prison for that kind of fraud. During the dot-com bubble, you had Enron and WorldCom; CEOs went to jail. During the savings and loan crisis of the 1980s, hundreds of people went to jail. Even with the small business loans during COVID, where we put out a bunch of money for small businesses to get loans and grants, hundreds of people have gone to prison for that kind of fraud. So all I'm saying is that this shouldn't be off the table as part of the reaction we have after a crash.

Justin Hendrix:

A lot of these ideas seem interesting, even attractive to me on some level, yet they also seem like wishful thinking in the current political environment. What are you thinking about the near-term possibility? You're still imagining that perhaps a rational Congress might set out to try to solve some problems, as opposed to necessarily doing the bidding of the industry?

Asad Ramzanali:

The fundamental thesis for why I wrote this paper and wanted to work on it is that politics change drastically during a crisis. And so while, yes, I agree with you, Justin, it has been frustrating for me personally how slowly we've moved on tech policy issues. I have written many bills for members that went nowhere. That said, during a crisis, we have the opportunity to do things that we've needed to do for a long time. And so my hope, my desire, is that we take advantage of that opportunity to do the things that are in the public interest, not just what industry says is the right thing to do.

Justin Hendrix:

And do you have some interest from Capitol Hill? Anybody paying attention to this report?

Asad Ramzanali:

We have received a lot of interest from Capitol Hill. I've been taking a lot of calls from staffers in the House and Senate, and just a little over a week from when this airs, we're going to be hosting Senator Elizabeth Warren for a discussion about this exact topic of a looming AI crisis. Look, I don't make a prediction about when this will happen, but I do say that we need to be ready for the things that are happening in these markets, and policymakers need to start debating these ideas now. So we're pleased that we'll be hosting Senator Warren to have that discussion; it'll be in DC, but hosted by Vanderbilt.

Justin Hendrix:

Asad, thank you so much for speaking to me about this report. I look forward to seeing what happens to these ideas. Can't say I'm hoping for a crisis, but also I certainly recognize what you say, which is that it would be a terrible thing to waste should it come about.

Asad Ramzanali:

I also am not hoping for one, but we should be ready if it happens. Thank you for having me and thanks for taking the time.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist.
