The Silicon Illusion: Why AI Cannot Substitute for Scientific Understanding
William Burns / Aug 18, 2025
William Burns is a fellow at Tech Policy Press.

Hanna Barakat & Archival Images of AI + AIxDESIGN / Rare Metals / CC-BY 4.0.
The narrative that artificial intelligence is revolutionizing science is now nearly inescapable. From Nobel Prizes to billion-dollar biotech investments, the story goes that AI is not just the future—it is already remaking scientific discovery. However, beneath the spectacle lies a more troubling reality: AI, as currently deployed in science, obscures more than it reveals, exacerbating the very problems it claims to solve.
In "The AI Con," Emily M. Bender and Alex Hanna critique these illusions while arguing that AI is already causing real-world harm. Yet even favorable reviews of the book, such as the one published in Science, hedge their praise with claims that AI has “promising capabilities” in scientific research that would justify its development. Such arguments, which acknowledge AI’s dangers while promoting its promise, appear to cast scientists as latter-day sorcerer’s apprentices pressing ahead, even while cognizant of risk.
This analogy, in some ways, recalls the anguish of Cold War figures like J. Robert Oppenheimer, parodied as “Dr. Strangelove” by Stanley Kubrick and Peter Sellers in the 1964 film and later dramatized in the 2023 movie, “Oppenheimer.” But perhaps the analogy gives too much credit to our current reality. Recent historical research suggests we might not be seeing the full picture: it was not always technology out of control, but technology that simply did not work.
A 2021 study of declassified archives argued that a Vietnam War-era project codenamed Igloo White – electronic sensors hidden in the jungle, linked to computers intended to target US bombing – was in reality “an enormous bureaucratic mire” that North Vietnamese opponents readily spoofed. The project became, in effect, a digital facade with little behind it.
Public discussions today about AI are equally difficult to interpret because firms often present us with a dramatic spectacle. While spectacle is far from new in the history of knowledge, it has also been a tempting opening for grifters, against which those with more sincere intentions gradually developed safeguards that are now being forgotten. Critical scholars Lisa Messeri and M.J. Crockett argued last year in Nature that “proliferation of AI tools in science risks introducing a phase of scientific inquiry in which we produce more, but understand less.” It could, as such, undermine the ability to produce reliable knowledge on which the entire scientific edifice depends: an intellectual hazard, as it were, atop the moral hazard that is already playing out.
Science in crisis
AI’s recent ascendancy in the world of science has not occurred in a vacuum. Over the last two decades, science itself has been in a sort of intellectual crisis. As a 2021 analysis of stagnation in pharmaceutical discovery argued, “no consensus exists yet about the true scope of the crisis, its significance, and its underlying reasons.”
Take the biological sciences, for example. The Human Genome Project, orchestrated by the US Department of Energy and the National Institutes of Health in the 1990s, was once expected to lead to a flowering of new medicines, but it failed to deliver on its most optimistic promises. The industry itself, however, was unable to find an adequate explanation. A 2008 article by a scientist at a major pharmaceutical company found that “Nothing that companies have done to increase NME [new medicines] output has worked, including mergers, acquisitions, reorganizations, and process improvement.”
The reasons for the current crisis extend beyond bad luck or poor management. In the 2000s, unlike now, scientists could still openly question the validity of information technology. In silico (computer) data was regularly dismissed as imaginary; in vitro data (from a test tube) was better; but the ultimate proof, of course, was in vivo (in living things).
The late Carl Woese, an iconoclastic opponent of “engineering” biology, had been a beneficiary of the US military-industrial complex that built contemporary science. But he did not agree with what he saw as the imposition of its methods, which he linked to the innovation crisis itself. He wrote in 2004:
“A society that permits biology to become an engineering discipline, that allows science to slip into the role of changing the living world without trying to understand it, is a danger to itself.”
He went on to claim the nineteenth century as the “defining” moment – the twentieth century had been a dead end, due to an excessively mechanistic view that corrupted science, “like a Trojan horse.” What was needed, he wrote, was a new holistic turn to restore science to its former scope, where it would seek to “understand the world, not primarily to change it.”
In 2011, the economist Philip Mirowski published Science-Mart, a coherent argument that neoliberalism, with its already fantastical worldview, had hollowed out scientific rigor and killed creativity. Even if Western science was obsessed with technology, as Woese had written, its efforts there were now mostly failing to produce results useful to society. These explanations would have been unpopular in the dominant commercial world, even though it was itself stumped. But, as Mirowski argued, the rot also spread to the backwaters of basic science in the universities.
Proteins are not silicon chips
It is from this crisis – and into it – that the latest wave of AI broke. As a report on AI in science from the OECD, published in 2023, argued, a justification for AI is that it “could help” because science may be becoming “harder.”
The AlphaFold 2 system, for which Demis Hassabis and John Jumper won the Nobel Prize in Chemistry in 2024, is often held up as proof of that promise. It predicted the structures of 200 million proteins, yet it would be experimentally impossible to verify even a fraction. To do so, proteins would typically have to be isolated from cells in notable quantities – a capricious process – and then subjected to techniques such as X-ray diffraction and nuclear magnetic resonance.
These steps can consume years, even for a single protein. Nevertheless, the philosopher Daria Zakharova claimed in an unreviewed academic paper that AlphaFold qualifies as “scientific knowledge” because its “predictions are taken to be trustworthy and used by scientists.” But, of course, no one can say if this putative “scientific knowledge” is reliable.
In strictly material terms, AlphaFold is not a representation of the behavior of proteins but, rather, of the behavior of the silicon chips that carry out the computation. The physical nature of information processing, as computer pioneers like Rolf Landauer emphasized in the last century, tends to be overlooked, as Landauer himself pointed out. In this sense, AlphaFold’s inventors advanced a claim that silicon chips could mimic proteins. That raises the question of how materials that are chemically, spatially, and temporally unrelated could mimic one another. At the very least, substantial evidence would be required to prove it.
Yet, in instances where efforts have been made to verify, the results seem mixed. A recent study by Garrido-Rodríguez and colleagues argued, for example, that the AlphaFold computation did not “correspond to the experimentally determined models,” referring to a class of ubiquitous and biologically vital proteins called serpins. Evidently, more intensive research might be needed on the reliability of AI as a predictive tool. However, it would also be worth stepping back to ask why we are using our already limited scientific resources to study the possibility of such links between unrelated materials, without a solid train of evidence to support why such links might exist, why they would matter, and to whom they would matter.
As most biologists realize, life does not reveal its secrets easily, even when we are probing the apparently “simplest” among our fellow inhabitants of Earth, like bacteria. Cells are dynamic. Protein folding is a complex business. The concept of proteins as essentially static locks and keys that fit together originates from high school science.
Yet that picture leads to a way of thinking that is hard, even for experts, to shake off. While modeling protein structures has been a long-standing intellectual strand in biochemistry, dating back to the first model made from plasticine and wood by J.C. Kendrew in 1957, it makes particular assumptions about proteins that are obviously different from “how they really are.” In those earlier cases, of course, the models were used to interpret observational data from X-ray diffraction, not a stand-in created by a computer, as with AlphaFold. The question is always, therefore, how scientists can overcome faulty preconceptions and identify the problems that will count.
The really crucial techniques in protein chemistry are not ones that think tanks and policy analysts have heard of, because these techniques don’t have AI’s PR budget. Instead, the laboratory relies on such methods as the “Western blot” (used to detect proteins), numerically controlled liquid handling rigs to purify protein mixtures, and “molecular biology” to synthesize proteins in bacteria and yeasts. While IT plays an important part in some of these devices, they are above all products of the petrochemical and mechanical engineering industries: pumps, chemical reagents, and plastic tubes.
Mansions of straw
The cancer researcher and winner of the 2019 Nobel Prize in Physiology or Medicine, William G. Kaelin Jr., warned that his profession must build “houses of brick” rather than “mansions of straw” when it came to evidence for scientific claims. “I worry about sloppiness in biomedical research,” he wrote in Nature, adding that “the causes are diverse, but what I see as the biggest culprit is hardly discussed…the goal of a [scientific] paper seems to have shifted from validating specific conclusions to making the broadest possible assertions.”
Kaelin, Jr. was not expressly referring to AI, but he could have been. Consider the UK, which has become a testbed for AI-solutionism in medicine. The UK Biobank, a government company that holds genetic data on a subset of the UK population, was reported this March to have partnered with pharmaceutical companies and the Alphabet subsidiary, Calico, which will gain access to the data for AI studies. The project was described by the Financial Times as “a flagship example of how advanced computers and artificial intelligence models can harness big biological data sets to look deeper into how the human body works and can malfunction.”
One question to ask, however, is whether mining of these datasets is likely to produce reliable knowledge, even in theory. The data was described in 2017 as “not representative of the general population…UK Biobank participants generally live in less socioeconomically deprived areas; are less likely to be obese, to smoke, and to drink alcohol on a daily basis; and have fewer self-reported health conditions.” Misgivings over the structure of these and similar health datasets are evident, and they are certainly no secret.
Even optimistic observers must ask: if the data is inadequate and the AI opaque, what is the actual epistemic value of these projects? Nobel Laureate Kaelin, Jr.’s advice: “the question…should be whether…conclusions are likely to be correct, not whether it would be important if…[they] were true.”
If science is to be rescued from its current malaise, the solutions are already visible. Proposals like Isabelle Stengers’ “slow science” seem worth a try, given that they could open the burden of proof behind scientific claims to scrutiny and encourage a public service mindset among scientists. Yet whatever epistemic renovation science has seen so far has been extremely timid and has not produced the hoped-for effects. In the mind of an investor, the idea that AI could break the deadlock and bring forth all manner of profitable inventions appeals doubly. AI is a method that requires only capital to implement and can be deployed at scale, as opposed to fiddly, human-centered, empirical research, where ingenuity and luck (which cannot be so easily bought) appear more prevalent.
But investing in AI is also a potent way of maintaining the status quo, while appearing to shake it up, because it posits a technological future without the systemic change that other reforms imply. In that light, we must resist the spectacle. AI sells a vision of progress where algorithms can unlock secrets faster, better, cheaper; yet, the secrets of nature are not so easily revealed, and knowledge without understanding is no knowledge at all.