Adam Becker Takes Aim at Silicon Valley Nonsense
Justin Hendrix / Apr 27, 2025
Audio of this conversation is available via your favorite podcast service.
From visions of AI paradise to the project to defeat death, many dangerous and unscientific ideas are driving Silicon Valley leaders. For this week's podcast, I spoke to Adam Becker, a science journalist and author of MORE EVERYTHING FOREVER: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity, just out from Basic Books.
What follows is a lightly edited transcript of the discussion.

More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity, by Adam Becker. Basic Books, April 2025.
Justin Hendrix:
Adam, I'm excited to speak to you about this book. I feel like your resume is like maybe what my resume would be if I had applied myself.
Adam Becker:
Oh, God.
Justin Hendrix:
You have a lot of interests that I share, but I don't think I've pursued them with quite the same rigor—PhD in cosmology, career as a science journalist. I want to step back just for a second on the PhD in cosmology because that stuff comes into this book quite a lot. It's actually important here.
Adam Becker:
Yes. So I pursued cosmology because I've always been interested in it. I've always been interested in physics and in space since I was a little kid, as I talk about a little bit in the book. I think a lot of kids are like that: I went through a dinosaur phase, then it was space, and I just kept being interested in space and physics and math and stuck with it because it was something that I cared about and something that I wanted to see through. I knew, actually, going into my PhD program that I was not likely to stay in academia, and I wanted to do the PhD anyway because I wasn't ready to be done with physics.
I'd always found myself drawn to talking about science with people who aren't scientists, communicating science to the public, and helping people understand what science is actually like even when they're not scientists. I see what I do now as being part of the scientific enterprise, or part of the project of science, broadly speaking. But I also think that science is part of the broader human enterprise. Science is a thing that humans do. It's ultimately a cultural pursuit of knowledge, and I don't think you can cleanly separate it from everything else. Like any other human activity, it has fuzzy boundaries. But I wasn't convinced that academia was for me, and given the way that the academic job market is structured and the hoops that you have to jump through to stay in academia past a PhD, if you're not sure that it's for you, you probably shouldn't stay. So I didn't, and instead decided to try my hand at this, and found that it was a good home for me and that it suited my personality and ability as well.
Justin Hendrix:
Well, I'm grateful to journalists like you who explain very complicated science and concepts to people like me who don't have the math or the physics to necessarily read the underlying papers or understand the science in such depth. I don't know, New Scientist comes to my home, Scientific American comes to my home. But picking up on this connection to trying to understand how the universe works, trying to understand how the world works, you talk a little bit about how one of the reasons you became interested in these kinds of political questions, philosophical questions, was your dissatisfaction with how the adults seem to be adulting, addressing some of the complicated things that are happening on the planet at the moment. I guess these technologists, the folks that you're talking about in this book, they seem to want to get their arms around it, and they have a vision of where they're going to take things.
But I think it's fair to say this is a tech-critical book. This is a book about poking holes in the ideas about the future that are presented to us largely from Silicon Valley. You talk early on about this idea that setting the terms of conversations about the future carries power in the present. If we don't want tech billionaires setting those terms, we need to understand their ideas about the future, their curious origins, their horrifying consequences, and their panoply of ethical gaps and scientific flaws. That's the project of this book as I understand it. Is that right?
Adam Becker:
Yes, absolutely. Yeah, that is the project of this book. And as you said, these tech billionaires claim that they're trying to make the world a better place and solve the biggest problems that face humanity. But a lot of the problems of the world stem from the fact that the best interests of humanity as a whole, and of civilization, often don't line up with the short-term best interests of individual people, especially powerful people, and these powerful people are no exception to that. It's easy to try to change the world and try to make the world a better place when you believe that doing that lines up well with your own personal financial and business incentives. And that's exactly what's going on here.
They have found a philosophy, an ideology about how the world works, almost a religious faith about how the world works, that is based on very little actual information about how the world works. There's no science to support it, and a great deal of science and other things cut against it. But they have found this way of looking at the world that convinces them that there's a happy alignment between their own interests and the interests of humanity, and it's just not the case. So they run around saying that they're saving the world, like Elon Musk runs around saying that he's trying to save the world, and what he is actually doing is lining his own pockets.
Justin Hendrix:
You take on a lot of different ideas in this book that we've talked about on this podcast before. We've gone on about ideas like the singularity and transhumanism, the problem of unaligned AI, AGI, superintelligence. You get onto the fundamental problems of LLMs; you more or less ride against a lot of these ideas that we've seen emanate from Silicon Valley. About 130 pages in, you're talking about eugenics, racism, sexism, other kinds of isms that are rife in the subcommunities of rationalists and effective altruists. This is where certain characters come in: Curtis Yarvin, Peter Thiel, the idea of a dark enlightenment. When did you turn this book in to the publisher?
Adam Becker:
I turned this book in about two or three days before the election. That's when I turned in the final edits.
Justin Hendrix:
In a weird way, I feel like we're living in a world that's perhaps more in thrall to the ideas of these individuals than we might've anticipated at the moment.
Adam Becker:
Yeah, I never wanted the book to be as timely as it's turned out to be. I mean, I was able to make a couple of very small tweaks after I turned in the final round of edits. Nothing major, just one or two words changed here or there, and one of those very last changes was to say, "Hey, on page 100-whatever, where I mentioned that JD Vance is Trump's running mate in 2024, you're going to have to change that to say that he was elected vice president." But he shows up in the book in the first place because he's a disciple of Curtis Yarvin and has spoken very highly of these ideas that come out of this dark enlightenment, which is just a nicer way of saying they want kings, they want monarchy back, they want total autocratic control. And this is not me putting words into their mouths. This is what they themselves have said. And so, we shouldn't be surprised when we see someone who believes all that, like JD Vance, telling Germany that they should let neo-Nazis into their government.
Justin Hendrix:
Well, let's dig into this just a little bit, because I think this is one place I want to pause, and as you say, it feels like to some extent some of these ideas are coming true. I mean, you point to one of Yarvin's suggestions that government employees should be fired and replaced with loyalists to an autocratic leader. I think I literally read that today on Reuters, this story alleging that Elon Musk's DOGE is using some form of artificial intelligence to scour the emails of at least one federal agency looking for signs that federal employees may be anti-Trump. Sounds almost like Yarvin's suggestions being deployed in the federal government.
Adam Becker:
Very much so, yeah.
Justin Hendrix:
What do listeners need to understand about these individuals to help understand this moment?
Adam Becker:
Oh, they're true believers. They think that they have the inside track to understanding what is going to happen in the future. They think that AI is going to inevitably become superintelligent and take us all to an interstellar paradise, and that the rest of us who are against them lack that vision and want to keep humanity in the dark ages. And this is all tied up with a racist, eugenicist project and picture of how the world works and what humanity is and what intelligence is. And this set of ideas gives these people moral absolution. It gives them a sense of meaning about what they're doing, and it gives them a promise of transcending all possible boundaries: they don't have to worry about legal limits or ethical limits. They don't have to worry about biological or physical limits. They don't even have to worry about death once the AI shows up.
So they're not worried about what's going to happen when you take on the federal government and dismantle it and replace most of its functions with AI, because they think that AI can do all of those things. And there's a political project baked into all of this as well, one that's deeply libertarian in that it wants to dismantle government, but also authoritarian in that it wants to replace the democratic process with a single leader, which will ultimately be AI. But in the meantime, it's going to be the business leaders of the tech industry, because they're the only people who really understand the world, according to this ideology.
And all of that is just nonsense. And obviously, politically, socially, it's nonsense, but also there's no science to support the claims about AI that they're making. There's no science to support the claims that they're making about space settlements being a good idea. And a lot of it's just based on this racist, authoritarian far-right logic that goes back to eugenicists and eugenic philosophies and Christian millenarian philosophies from the late 19th and early 20th century filtered through bad readings of science fiction.
Justin Hendrix:
I want to talk a little bit about maybe the most zany ideas about where humanity might be headed in the distant future. And then, I want to ask you a little bit about the way that some of these Silicon Valley figures think about climate change as a phenomenon. This is something else you get onto in the book, but let's talk a little bit about the far future ideas. I mean, you get onto travel between galaxies, Dyson swarms, aliens, heat death of the universe.
Adam Becker:
Yes. There's this will to quantify that's deep within this way of seeing the world: the idea that quantification is the key to understanding the world, and that if something can't be quantified, it doesn't matter, maybe isn't even real. And there's also this other thing that goes along with that, this desire to extract value from the world, extract resources, energy, all to build up this quantified notion of value. It's a very utilitarian way of looking at the world, in a very reductive way. And it's not super surprising to see a branch of utilitarian ethics used to justify this.
And so, the upshot is, if you look at the world that way, and we can apply various political labels to that, we can call it Taylorist, we can call it neoliberal, and those are all accurate, but if you really look at the entire world this way, the entire universe this way, then you start worrying about, wait, where are we going to get all of the energy that we need if we want to keep growth going? Because if you have this mindset, then you believe that economic growth is vitally important, and economic growth is generally tied to growth in energy usage. So you need to keep growing your energy usage at a fixed percentage, which means you need it to grow exponentially. That's what growth at a fixed percentage is.
And you also need to worry about entropy, because it's not enough to get lots of energy. It needs to be low-entropy energy, otherwise you can't use it. This is literally how the biological processes of life work. And so you end up looking at the universe through that lens, as a giant well of resources that you need to allocate as efficiently as possible, otherwise you've lost value. That feels really alien to me, and it also misses, I think, a lot of what it is that makes life meaningful and good and important.
But if you do look at the universe that way, then you start worrying about things like the heat death of the universe, which is this point in the far future where everything is in a state of maximal entropy and you can't do anything anymore. That's trillions upon trillions of years off. It's very, very far away, much longer than the current age of the universe away. But if your project is to maximize value at all costs, you have to worry about that. And if you want perpetual growth, then you're in an even bigger pickle because you're going to just keep grabbing as much energy as possible.
And if you continue, if you want, say, 3% annual growth in energy usage, which is historically what's been going on for the last few centuries, I believe, then in a couple hundred years, you're using all of the energy on Earth. About 1,000 years after that, you're using all the energy output by the sun. A few thousand years after that, you're using all of the energy in the observable universe. So growth has to end. You can't just keep going like that. But this is a point that a lot of these people just miss.
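A quick back-of-the-envelope check of that timeline, as a minimal Python sketch. Every figure below is a round number assumed for illustration, not taken from the book or the conversation: roughly 6×10^20 joules of world energy use per year, about 1.7×10^17 watts of sunlight intercepted by Earth, about 3.8×10^26 watts of total solar output, and a very rough 4×10^47 watts for all starlight in the observable universe.

```python
import math

# Back-of-the-envelope check of the exponential-growth argument above.
# Every figure here is a rough, assumed round number, not a sourced value.
SECONDS_PER_YEAR = 3.15e7
GROWTH_RATE = 0.03                       # 3% annual growth in energy use
current_use_w = 6e20 / SECONDS_PER_YEAR  # ~6e20 J/yr of world use, ~1.9e13 W

milestones = {
    "all sunlight hitting Earth": 1.7e17,              # watts (assumed)
    "total output of the Sun": 3.8e26,                 # watts (assumed)
    "all starlight in the observable universe": 4e47,  # watts (very rough)
}

for name, power_w in milestones.items():
    # Solve current_use_w * (1 + r)^t = power_w for t:
    # t = ln(power_w / current_use_w) / ln(1 + r)
    years = math.log(power_w / current_use_w) / math.log(1 + GROWTH_RATE)
    print(f"{name}: ~{years:,.0f} years")
```

With those assumed inputs, the script prints roughly 300, 1,000, and 2,700 years, consistent with the couple-hundred / thousand-more / few-thousand-more timeline described above.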
Justin Hendrix:
So you're walking right into where my next question wanted to go, which is this thought about climate change and the way that folks in Silicon Valley are thinking about it. A lot of this ends up being about energy. A lot of the geopolitics of AI at the moment appears to be about energy. And I guess, let's step back from the heat death of the universe, and maybe to our own heat death if we're not careful. I don't know. What does the listener need to understand about the way that some of these more extreme characters in Silicon Valley, the prophets of artificial intelligence, think about climate change, how we're going to address it, and the primacy of advancing AI, gobbling up all the energy towards that end as we go?
Adam Becker:
Yeah, I mean, this is, to me, one of the most alarming things about all of what's going on in the tech industry. There is this widespread, unquestioned faith that AI will soon lead to AGI, this thing that can do what humans do, and that that will then lead to super-intelligent AGI, something that can do far more than humans or even humanity can do. And those are, of course, all extraordinarily ill-defined. That doesn't stop anybody. But worse than that, there is this faith that those systems are going to be able to solve climate change really, really quickly, with no real justification for why that would be true. And yet, we've been hearing this over and over again.
Sam Altman has said that a good way to address climate change is to pursue AGI, build this super-intelligent AGI, and then ask it essentially for three wishes. He actually says three things, and he said, "We ask it for these three things, and that solves climate change." Yeah, great. Build a machine that's ill-defined, that nobody knows how to build, ask it for three wishes. That seems like a great approach to solving the biggest challenge of our time, especially because the path that you want to pursue to get there involves using even more energy than we're already using, much of it from high-carbon-footprint sources like coal. And Eric Schmidt, former CEO of Google, venture capitalist, said basically the same thing just a few months ago. He said, "We're never going to meet our climate goals, so the best thing that we should do is just use even more energy to get to AI, to get to AGI as quickly as possible, and then that'll solve climate change."
Justin Hendrix:
This is kind of almost like a suicide pact or something. I mean, it feels like we have to make a suicide pact with the machines. Almost sounds like a Douglas Adams story in a way.
Adam Becker:
I mean, there's a lot of bad readings of science fiction going on here. There was that great piece by Jill Lepore in the New Yorker a while back where she pointed out that Elon Musk clearly does not understand The Hitchhiker's Guide to the Galaxy. It all would be hilarious if it wasn't so serious.
Justin Hendrix:
You say something towards the end of the book, you write, "Most of the greatest problems facing humanity right now—global warming, massive inequality, the lurking potential for nuclear war—are not driven by resource scarcity or a lack of technology. They're social problems requiring social solutions. Increased energy usage, increased technological prowess, or even an increase in the amount of intelligence brought to bear on these problems, whatever that might mean, isn't likely to solve them. These are political problems, problems of persuasion and justice and fairness."
One of the things I found myself thinking about reading your book is something I try to talk my students out of, the students I teach at NYU and at Cornell Tech: the engineering mindset, the idea that we're going to necessarily build our way out of various social problems, social ills. Is that what we're witnessing here? Is it just this weird thought that that's the only way to do this, that the only way out of these things is through the hard power of engineering and technological progress, because we're just too afraid of the types of answers that might come if we decided to look at it through a social or political lens?
Adam Becker:
Yeah, I definitely think that's a lot of what's going on here. One of the things that's so ludicrous about Altman and Schmidt and the others who say that AI is going to solve climate change is this: say that they're right about AGI coming, which they're not, but say that they're right, that it's going to be here. Altman said just earlier this year that he expects AGI to arrive sometime in the next four years.
Justin Hendrix:
Well, they'll say it's here.
Adam Becker:
They will definitely say that it's here, but it's not going to be, not in a meaningful way. And again, it's an ill-defined term, but say that some godlike superintelligence, well, godlike is a bad word to use here, but say that some superintelligence of some kind showed up in the next four years, say Altman was right about that. There is no indication that that would solve climate change, because it's not like we don't have the technology to deal with the climate crisis. I mean, there's some technology that we're missing, like technology to pull carbon out of the air, and it'd be nice to have more clean energy sources, but fundamentally, this is not a problem about technology. It's a problem of political persuasion.
The only way that a superintelligence could solve global warming would be if it took over the world, and we don't want that. So this is very much the engineering mindset at work, I think, and I feel comfortable saying that in part because I'm a physicist. I come out of that same culture, this idea that everything important reduces to science and technology, and it's just not the case. That's not how the world works. That's not how humans work. That's not how life works.
Justin Hendrix:
So at the end of this book, you show your cards a bit. You write sentences like "eliminating billionaires would also be an investment in the political stability that makes prosperity possible." You talk about permanent plutocracy, a tyranny of the lucky in their reliance on machines. This is the language of political and economic revolution. I don't know. How do we see ourselves out of this situation? Right now, it seems like we're seeing the merger of state and tech power, and the ideas of these folks that you document in this book essentially becoming the operating system, whether he knows it or not, for Donald Trump's administration. How do we think through this moment?
Adam Becker:
Well, I think one of the interesting things that's happening right now is that there's a waking up happening among some of the billionaires that supported Trump. Because one of the things that's so odd to me sometimes about all of this is that not only is the agenda that these people want to pursue, Trump and Vance and the tech billionaires, not possible and not good for most people, not good for people who aren't billionaires, it's ultimately not good for the billionaires either. Ultimately, what does that lead to? It leads to a world devastated by climate change with no economic base, which is how they think of the billions of people in the world who are not billionaires, no economic base to support the highly technologically advanced industrialized civilization that we currently have. Basically, they'd be starving in Galt's Gulch if they got what they wanted. Ultimately, they would just make a bunch of money briefly, and then the lights would go out.
And there does seem to be a recognition among them right now that something's gone horribly wrong, because of these tariffs. There's pretty compelling evidence that the tariff policy was developed by AI, which is, yeah, Trump wanted tariffs, but when they asked, okay, what should the tariff rates be? It looks like they asked ChatGPT. Because clearly the AI is smarter than all of us, or at least maybe the AI is smarter than anyone in the administration. But I think that there's some recognition finally, maybe, hopefully, that this is not really even in the best interests of the wealthy.
And history also tells us that if the wealthy do not find a way to live with the rest of us, they are not going to remain wealthy. There is going to be political instability, and that will lead to revolution of some kind. And there is very little guarantee that their wealth and power will survive that revolution. Even if it's a right-wing revolution that ends up succeeding and bringing in an even harsher regime that I would hate, there's still no guarantee that that would leave the billionaires safe. There's plenty of history to support that revolutions of all stripes are bad for business and bad for the wealthy and entrenched powers.
So I think there's some recognition that this is not sustainable. I would like to think that there's also recognition on the left that the Democratic Party's friendliness with big business left it open to the attack that Trump and the modern Republican Party have engaged in: claiming that the Democrats are the party of the establishment and that the GOP is anti-establishment. Which is, on the one hand, a sick joke. But on the other hand, the Democrats are more comfortable with billionaires and business than they should be. If they want to actually reflect the will of the people and the best interests of the American people and the world, then they have to confront the fact that entrenched economic powers are not always going, in fact, they usually are not going, to get you the best answers to the biggest questions of the day.
We know that giant multinational corporations have tried to slow and halt and reverse progress on addressing climate change. We know that. We have massive amounts of documentary evidence that that's true. So if you want to actually fix that problem, which is an existential threat to human civilization, then you've got to make a choice about whether or not you're going to work with the businesses that have tried to basically get us all killed or not.
And I think that that kind of moral clarity is the beginning of how we make this change. I think there's a tendency to not see the violence inherent in supporting policies that will get millions or billions of people killed by climate change, or that will leave millions or billions of people neglected while billionaires try and fail to set up giant colonies in space, or that will let the rest of us burn while they fuel AI data centers that will not lead to them summoning a god. And I think that there's also a great deal of public impatience with, and disgust with, AI. I think a lot of people don't like it, and I think that's good. They shouldn't.
Justin Hendrix:
One thing that I'm paying close attention to now, getting to some of these issues around energy and the way that AI infrastructure is being foisted on communities on some level, like the data centers and the energy demands and the water demands, is OpenAI's blueprint for industrial development in the United States. They call for essentially removing any barriers to building high-energy AI infrastructure projects. They call for all manner of public investment and public support for standing up the type of infrastructure necessary to do the types of things you're talking about, to get to that point as fast as possible. So I guess, I don't know. That's the other thing that I'm just fascinated by at the moment: the extent to which these guys aren't content with all of the capital they're able to accrue from the valley, from sovereign wealth funds, et cetera. They want all of the American government's data, and they want literally no limits. And they also want the government to pay for the road to be paved to AGI.
Adam Becker:
Yeah, a goal that they're never going to reach, although they'll say they will. I mean, look, there's one of these things that sounds like a vaguely sourced anecdote but is actually very well sourced, and I almost put it in my book. It was a conversation that Kurt Vonnegut had with Joseph Heller, the guy who wrote Catch-22. The two of them were friends, and Vonnegut wrote about this in, I think it was the New Yorker, after Heller died. He said, the two of us were at a party thrown by an enormously wealthy person somewhere on Long Island, and we were there to be the artists on display, to show that these very wealthy people had artist friends. And we were walking around the party and gawking at all the wealth. And then I said to Heller, right, this is Vonnegut saying, I said to Heller, "Wow, look at all this." And Heller said, "Yeah, but we have something that they're never going to have." And Vonnegut said, "What?" And Heller said, "Enough."
These people don't have the concept of enough. If I want to engage in some kind of irresponsible psychoanalyzing of what the hell is going on in their heads: there's some way in which they just don't feel safe. They feel like no amount of control could possibly be enough. They need more everything forever. And the other thing is disregarding regulations, trying to get the government to pave the way to your own economic ascendance, saying that what you're doing is for the good of all humanity, and the government actually playing along and giving you that money. That's what gave us Elon Musk, with the carbon credits for Tesla, and carbon credits are a good thing, and with all of the government contracts for SpaceX. That's how he built his empire. We don't need more Elon Musks. We need at least one fewer Elon Musk. So yeah, Altman wants all this stuff for OpenAI.
Look, there was this interview that Altman did with the New York Times two or three years ago, I think it was three years ago. I talk about this a little bit in my book. It was crazy to me that it didn't get more attention, and I think it's because of how it was framed in the article that the interview appeared in. Altman said that his goal with OpenAI is to accumulate literally all of the wealth in the world, or nearly all of it. This is what they want. They want to build a privately owned god and use that to fuel a privately owned singularity to take over the world. It's like a ridiculously complicated Rube Goldberg Pinky and the Brain plot. And the good news is none of that works that way, right?
The singularity doesn't work. AGI, they're going to claim they have it, but it's not going to be a thing. It's going to suck. The bad news is they're going to destroy the world along the way, and they don't see it because they're blinded by faith. You ask, "What do we need to understand about these people?" We need to understand that they're fundamentally irrational and motivated by something that's both absurd and not very smart. We have a tendency to lionize the wealthy in this country and claim that their business acumen also makes them generally intelligent, like an AGI. I don't think that that's true. I don't think general intelligence is a thing. The concept of general intelligence has a long and racist history. And I think for most of these people, the primary thing that allowed them to accumulate this wealth was two things: a lack of government guardrails against them exploiting the systems in the way that they did, and luck. And nowhere in that is any sort of genius. These people's ideas about the future are unoriginal, unworkable, and deeply destructive, and they really, really believe in them.
Justin Hendrix:
Maybe the only other salient vision of the future that appears to be on sale on the planet at the moment is China's version of the future, which is invoked often by these individuals. OpenAI says the reason it needs all the money and the resources is because we need, essentially, an AI that's built on democratic principles. I would be interested to unpack at some point what democratic principles they think they're working on there. China doesn't appear too much in the book, but how do you think about this alternative version? On some level, I feel like the Chinese vision of the future is still kind of a Silicon Valley version of the future. It's still AI and 5G and technological progress and wealth redistribution driven by abundance, brought by technology, et cetera. But at least it's to some extent thought of differently: the government's still in control rather than private corporations. I don't know, how do you think the types of characters that you're addressing in the book think of China?
Adam Becker:
No, I mean, to them, China is the ultimate nightmare. It's complete government control, but it's also a convenient boogeyman. I do not know if the people working on AI in China believe these narratives about AGI and the singularity and AI alignment and all that stuff that the people here do. My guess is that those ideas have at least some currency there. What I've heard is that they have some currency there. I don't know enough to say more than that about what's actually going on in China. But what I do know is that competition with China is the kind of thing that these people will use to say, well, look, if we don't do it, China will, and then China will be in control of everything and China will be running the singularity. Yeah, I don't care about that, because the singularity's not coming. I don't care about that, because AGI is not coming in the way that they think it is.
Justin Hendrix:
What leaves you with some optimism?
Adam Becker:
Yeah, what leaves me with some optimism? I have seen a lot of very smart, dedicated, committed people working hard to push back against these ideas about the future and against the Trump administration and against the tech oligarchs. I have seen increasing activity among tech unionization efforts, because while these ideas are very popular among tech CEOs, they're a lot less popular among the people who actually work at the tech companies. And a lot of this stuff, a lot of the push for AI, I think, like I said before, comes from a kind of fear. And one of those fears is the fear of losing control of the companies, because ultimately the company is dependent on the workers. And if you can somehow replace all of them with AI, then you don't need anybody else there, and you can have your dream of perfect control from the boardroom.
But that's not going to work out. So the fact that tech workers haven't bought into this in general, and that there are increasing unionization efforts, that makes me happy, that gives me hope. The fact that the Democratic Party seems to have finally discovered something vaguely resembling a backbone and is starting to fight back against Trump in a meaningful way, that's nice to see. The fact that, in general, this seems to be handled better in Europe, and that people like Peter Thiel are saying that they're afraid of Europe and the European Union, that gives me hope. And also the fact that, for all that these tech billionaires don't want this to happen, we live in a world, or we live in a country, where I can still write this book and have it be published and nobody's going to come arrest me for it. That's pretty good. And that gives me hope.
Justin Hendrix:
I hope folks will go out and check out this book, More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity by Adam Becker. Adam, thanks so much.
Adam Becker:
Thanks for having me, Justin.