Perspective

What Happens When 'Superintelligence' Doesn't Appear in a Few Months?

Cole Donovan / Dec 2, 2025

AI is Everywhere by Yutong Liu & The Bigger Picture / Better Images of AI / CC BY 4.0

As a longtime proponent of fusion energy, I have a special appreciation for the folks running around Washington DC and Silicon Valley acting as heralds of the coming superintelligence. All of the elements look familiar: promises of a technological revolution, claims that we stand on the brink of transformation, and assurances that utopia is just one round of investment away.

It’s also a source of constant frustration for individuals closer to the scientific community. Rational arguments about what needs to be done in order to develop the technology are often overtaken by easy, hype-driven promises. These promises often assert the most difficult problems don’t need to be solved in order to advance to the next stage of development. Motivated reasoning steals resources away from actual engineering challenges and diverts them to quick investments that frequently fall victim to typical economic boom and bust cycles.

At worst, these promises and resource misallocations distract society from being able to advance the technologies in question. The “hard stuff” gets starved of resources and knowledge capital in favor of quick wins. This is particularly apparent in the space domain, where we are much farther from being able to land on the moon today than we were in 1969.

What’s maddening about the drive for superintelligence is that we really don’t know where we’re aiming. Unlike developments in other emerging technologies like fusion or cryptographically-relevant quantum computers, it is not clear how we’d even measure a superintelligence, nor is it certain that GPUs and the models that run on them are the ideal technologies for replicating the function of a brain. This has not stopped massive investment in fancy predictive text models that are bad at math but give us the illusion of engaging in conversation with another human, as opposed to focusing on specialized tools that have already yielded significant advantages in everything from science to photography. Yet superintelligence is treated as almost imminent, possibly even mere months away, staking the AI industry’s promises on delivering the world’s hardest problem.

With other technologies, people might say “well, if the superintelligence doesn’t manifest in 18 months, the proponents will be proven wrong and we can all move on with our lives.” The difference is that many superintelligence proponents will argue (with a straight face) that society isn't organized to solve other problems, such as climate change, so we should focus our bets on unconstrained artificial intelligence investment, which can then solve those problems for us.

Why use resources today if the rest of our problems are solved tomorrow? Ignore that a superintelligence will still need perfect knowledge of the world around it in order to solve those and other problems. There is just the assumption that we’ll be able to derive the universe from a piece of fairy cake. If you’re familiar with the science underpinning the peripheries of knowledge, then you’re also aware that our observations, supported by theory, can lead to sources of tension, requiring new, additional, and continual observations to resolve. Cutting investment can result in the elimination of the tools and diversion of talent that are necessary to advance our understanding of the world around us.

Things may even start to get theological, as some use language that makes superintelligence sound a lot like an ‘entelecheia’ — the full realization of the potential of intelligence — offered by Aristotle and beloved by Aquinas, bringing humanity closer to true contemplation. This is obviously bollocks, but offers possible insight into the minds of folks like the investor Peter Thiel, who claims that efforts to control AI are the work of the Antichrist (which makes sense if you happen to believe that denying access to superintelligence is tantamount to restricting access to God).

If the technology fails to achieve its core objectives in the timeframe suggested, the reshaping of society taking place (ostensibly in order to maximize the resource flow to pursue that objective) will have already happened. We’ll have replaced our investments in clean energy with massive resource dumps into a dirty grid, taken money away from education in support of LLM-mediated learning, dismantled the global infrastructure for monitoring climate change, undermined our social safety nets, and concentrated even more wealth in the hands of the few. Ordinarily these ideas would be debated on their merits, but the imperative to create the superintelligence, often framed in the context of national security, helps blow past the usual objections.

For those of us who are not so fortunate, misdirected AI investments make us less healthy, leave us with fewer jobs, and may result in significant developmental deficits in our critical thinking and creativity. In the defense and criminal justice sectors, it means the possibility of offloading responsibility for life-or-death decisions to a machine whose reasoning we cannot explain. These may be acceptable consequences if you believe that freedom and democracy are incompatible, or that America would be better off with fewer highly educated people, but it’s generally bad news if you’re of the view that all people deserve equal protection under the law or that the entrepreneurial spirit has a tendency to manifest in unexpected places.

This looks like a very different political project. It is something distinct from the hype and bust cycles tripping up meaningful advancement in fusion, nanotechnology, and quantum computing. It is a project that should immediately lead people to question the motives of the techno-optimists whose political goals also happen to align with that particular set of objectives. Their political project has clear goals; their emerging technology does not.

When collateral damage from our bad investments starts to look like upside to those who harbor a particular set of beliefs, it’s worth investigating what’s actually going on and figuring out who’s getting screwed in the process.

Authors

Cole Donovan
Cole Donovan is the Associate Director for Science and Technology Ecosystem Development at the Federation of American Scientists (FAS). He previously served in the White House Office of Science and Technology Policy (OSTP) as Assistant Director for International Science and Technology and Assistant ...
