Perspective

The Myth of AGI

Alex Hanna, Emily M. Bender / Jun 3, 2025

This piece is part of “Ideologies of Control: A Series on Tech Power and Democratic Crisis,” in collaboration with Data & Society. Read more about the series here.

WASHINGTON, DC - JANUARY 21, 2025: OpenAI CEO Sam Altman (center), US President Donald Trump (left), Oracle Chairman Larry Ellison (first right), and SoftBank CEO Masayoshi Son (second right) speak during a news conference announcing an investment in AI infrastructure. (Photo by Andrew Harnik/Getty Images)

"AGI is coming very, very soon. And then after that, that's not the goal. After that, artificial superintelligence. We'll come to solve the issues that mankind would never ever have thought that we could solve. Well, this is the beginning of our golden age."

Masayoshi Son, January 21, 2025, speaking at Donald Trump’s press conference announcing the Stargate initiative

Tech CEOs, futurists, and venture capitalists describe artificial general intelligence (AGI) as if it were the inevitable and ultimate goal of technology development. In reality, the term is a vague signifier for a technology that will somehow lead to endless abundance for humankind — and conveniently also a means to avoid accountability as tech moguls make off with billions in capital investment and, more alarmingly, public spending.

AGI is a term that famously lacks a precise meaning, and it certainly does not refer to any particular imminent technology. Definitions range broadly, in ways that primarily suit the economic arrangements of the individuals and organizations ostensibly trying to create it, or the cultural mystique of adherents to a set of fringe ideologies. OpenAI’s charter defines AGI as “highly autonomous systems that outperform humans at most economically valuable work.” Mark Zuckerberg has said that he does not have a “one-sentence, pithy definition” of the concept. Embodying the mysticism around the term, Ilya Sutskever, the former chief scientist at OpenAI, would lead chants of “Feel the AGI!” around the office. In a leaked agreement, Microsoft and OpenAI crafted a far more concrete metric: whether such a system could generate $100 billion in profit.

In all these cases, the term is meant to evoke something with awesome power, much like the term “AI” once did, before it became overexposed in marketing. The bid for awe in the use of “AGI” echoes the discourse from the field’s origins. In an influential 1956 report, computer scientist Marvin Minsky, considered one of the founders of the academic discipline of artificial intelligence, remarked that “[h]uman beings are instances of certain kinds of very complicated machines,” and that if we could somehow replicate key parts of the human brain, we would be able to achieve “AI.” This approach to “thinking machines” resonates with more modern deployments of “AGI.”

When we give credence to the idea of AGI, it does several things in the real world. First, it signals that a computer program proficient at one thing — like predicting words from other words, which is what ChatGPT and other chatbots are doing — can do important social and economic work, such as addressing gaps in major social services, doing science autonomously, and “solving” climate change. These are real proposals: California Governor Gavin Newsom has suggested that “AI” can solve the state’s traffic and homelessness problems, while Google DeepMind CEO Demis Hassabis has suggested that autonomous AI scientists will cure cancer and eliminate all disease within five to ten years. Former Google CEO and board chairman Eric Schmidt has said that we shouldn’t worry about the climate emissions of AI systems because “AGI” will solve climate change for us.

The second issue is closely related to the first: claims of “AGI” are a cover for abandoning the current social contract. Instead of attending to the here and now, many adherents to the myth of AGI think we ought to abandon all other scientific and socially beneficial pursuits and focus entirely on developing (and protecting against) AGI. They believe that the best and only thing humans can do right now is work on a superintelligence, which will usher in a new age of abundance. Venture capitalist Marc Andreessen has said that “AI” will “crash wages” and deliver us into a “consumer cornucopia” in which the marginal cost of consumer goods approaches zero. OpenAI CEO Sam Altman believes that, once AGI is built, everyone will “own” a small bit of access to it. He envisions this being apportioned as “universal basic compute,” a riff on universal basic income, as if a chance to direct the actions of the big supercomputer were all that is required to meet one’s needs.

If you think this sounds weird, mystical, and god-like, you’d be correct. The last bizarre direction of discourse about AGI is that it plays into the idea of a big, possibly benevolent robot god who will rescue humans from ourselves — that is, if we happen to imbue it with the right values. Believers subscribe to one of two versions of a technological future: either an AGI trained with the proper values will lead to a world of limitless abundance, where we live in post-human forms, or a big robot superintelligence will wipe us out. People like Ray Kurzweil, a futurist and Google fellow, believe in the former, specifically in an ideology known as the technological singularity. Others, like Eliezer Yudkowsky, a blogger and internet personality, fear that we will develop a machine superintelligence too quickly to control it, that it will realize it does not need to rely on humans, and that we will have to find a way to prevent it from going rogue. For Yudkowsky, no plan appears to be off the table, up to and including bombing a datacenter that has been taken over by this being.

All of these ideas would be relegated to the fringe if they did not hold massive appeal for, and exert influence through, major executives in industry and government. Even though “AGI” is a poorly defined, fuzzy concept, it has an outsized impact in policy circles: not only do promises of AGI motivate the Stargate initiative, but they also help justify limits on AI regulation, such as the 10-year moratorium on state-level regulation of AI passed by the House in its funding bill. The next time you see someone talk about the promise or threat of AGI, ask: What social or political problems are they interested in papering over, and how are they implicated in creating them or making them worse?

Four pieces the authors recommend: their book The AI Con; “The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence” by Timnit Gebru and Émile Torres; More Everything Forever by Adam Becker; and “The Great White Robot God” by David Golumbia.

Authors

Alex Hanna
Alex Hanna is Director of Research at the Distributed AI Research (DAIR) Institute. She focuses on the labor needed to build the data underlying artificial intelligence systems, and how these data exacerbate existing racial, gender, and class inequality.
Emily M. Bender
Emily M. Bender is a professor of linguistics at the University of Washington, where she is also adjunct faculty in the School of Computer Science and Engineering and the Information School. She specializes in computational linguistics and the societal impact of language technology.
