Perspective

Should the AI Race Be About Bigger Models, or the Search for Meaning?

Vinay K. Chaudhri / Oct 13, 2025

Image: Teresa Berndtsson / Letter Word Text Taxonomy / CC-BY 4.0

Top executives at American artificial intelligence giants OpenAI, Microsoft, and Advanced Micro Devices recently told a US Senate hearing that while the US is ahead in the race, Washington needs to boost infrastructure and champion AI chip exports to keep its lead over Beijing.

The fight for AI supremacy relies on the scaling hypothesis, which holds that by making current models larger, training them on ever-increasing amounts of data, and utilizing greater compute (the hardware resources required for training), we are sure to achieve Artificial General Intelligence (AGI).

If the scaling hypothesis proves to be true, it will indeed be an exciting development for humanity, as the upside is tremendous.

But what if the scaling hypothesis turns out to be false? Would that mean we have lost the AI race? What would be our Plan B?

There is much to be gained by reframing the AI race to be not about better hardware, more data, and bigger models, but about computational understanding of meaning.

Simply put, the ultimate race for AI is not between nations, but between humanity and nature: how do we humans acquire meaning from the world around us, and how might we program something similar into an AI system? This reframing naturally leads us to a Plan B if the scaling hypothesis turns out to be false.

For thousands of years, human learning has relied on social education. We learn concepts through traditional schooling and perceive the world accordingly; culture and education guide how we come to understand meaning.

Children begin learning by observing examples, such as recognizing everyday objects like cars or cats. Their learning accelerates when they enter formal education, where they are taught not only through examples but also through explicit concepts across a range of topics. Scaling-based AI, by contrast, learns almost entirely from examples. It is therefore safe to conclude that the approach to AGI advocated by the scaling hypothesis is quite unlike human learning.

To win the AI race, we must continue to explore a different hypothesis alongside the scaling hypothesis — let us call it the curated knowledge hypothesis.

My research, funded by a National Science Foundation award on Knowledge Axiomatization, seeks to formalize this alternative approach to AI development. Curated knowledge was also the focus of an NSF-funded workshop held as part of the 2025 Annual Conference of the Association for the Advancement of Artificial Intelligence, where I joined over 50 researchers in a brainstorming session.

One way the curated knowledge hypothesis could be operationalized is by observing that for each level of instruction in our traditional education system, there exists a finite set of concepts that can be defined and represented in computer code. Once those concepts are captured, the program can be said to have acquired the knowledge necessary for AGI at that grade level.

Of course, the concepts can be expressed in an infinite number of ways using text, images, audio, and video. The hypothesis holds that it is possible to articulate and capture the concepts at a level that is independent of any specific modality of expression. For example, consider the number five: it can be shown as an Arabic numeral, as a picture that contains five cats, spoken aloud, or depicted in a video where a ball bounces five times. Each modality may involve a high degree of complexity and variation, yet the underlying properties of the concept of the number five remain the same across all of them.
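
To make this concrete, here is a minimal sketch in Python. The names FIVE, expressions, and grounds_to_five are my own illustrative assumptions, not a formalism from my research: the concept's properties are stated once, and each modality supplies only a grounding procedure that connects its surface form back to the same concept.

    # A modality-independent concept: its properties are stated once.
    FIVE = {
        "name": "five",
        "successor_of": "four",
        "cardinality": 5,  # the property every expression shares
    }

    # Different modalities express the same concept in different forms.
    expressions = {
        "numeral": "5",
        "words": "five",
        "image": ["cat"] * 5,      # a picture containing five cats
        "events": ["bounce"] * 5,  # a ball bouncing five times
    }

    def grounds_to_five(modality, value):
        """Check whether an expression grounds to the concept FIVE."""
        if modality == "numeral":
            return int(value) == FIVE["cardinality"]
        if modality == "words":
            return value == FIVE["name"]
        # Perceptual modalities ground through counting.
        return len(value) == FIVE["cardinality"]

    for modality, value in expressions.items():
        assert grounds_to_five(modality, value)
    print("All four modalities ground to the same concept of five.")

However much the modalities vary, every check bottoms out in the one shared property: the cardinality five.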

To test the curated knowledge hypothesis, we need a three-part strategy: curation, testing, and application.

First, we must curate the concepts that a child is expected to know at each grade level in our formal education system. Research in the learning sciences already provides insights into what these concepts are. The task is to encode them into an AI program in a way that allows it to draw the expected conclusions.
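
As a hedged sketch of what such encoding might look like (the toy fact-and-rule format below is my own illustration, not the representation used in the NSF project), consider a few curated first-grade facts and a single transitivity rule; the program can then draw a conclusion it was never explicitly given:

    # Curated first-grade facts, written down by hand.
    facts = {
        ("is_a", "cat", "mammal"),
        ("is_a", "mammal", "animal"),
        ("is_a", "car", "vehicle"),
    }

    def forward_chain(facts):
        """Apply the transitivity of is_a until no new facts appear."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for (_, a, b) in list(derived):
                for (_, c, d) in list(derived):
                    if b == c and ("is_a", a, d) not in derived:
                        derived.add(("is_a", a, d))
                        changed = True
        return derived

    knowledge = forward_chain(facts)
    # The expected conclusion was never stated directly:
    assert ("is_a", "cat", "animal") in knowledge
    print("Derived: a cat is an animal.")

Real curation would involve far richer representations, but the principle is the same: once the concepts and rules are written down, the expected conclusions follow mechanically.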

Second, we must develop tests that measure the adequacy of the curated concepts in answering questions. These tests should rely on expert, extended interaction between humans and the AI program, rather than on the currently popular benchmarks geared toward automatic scoring of AI outputs.

Third, we must demonstrate the use of the curated knowledge in practical contexts. For instance, if we have curated the knowledge at the level of the first grade, can it be put to any practical use? Could it be applied to building teaching tools for first-grade students? Ongoing application is necessary to ensure that the encoded knowledge is relevant and of practical use.

The approach advocated here is unlikely to deliver results in the short term. Demonstrating payoffs along the way is therefore essential to motivate ongoing investment of resources.

The tension between the scaling hypothesis and the curated knowledge hypothesis echoes the long-standing debate between empiricism and rationalism over how we acquire meaning. René Descartes and David Hume argued opposite sides of this question, and most modern philosophers agree that both empiricism and rationalism are needed for us to understand meaning in this world.

If we are serious about winning the AI race, we must continue to advance what is achievable using AI grounded in curated knowledge. While much progress is bound to emerge from efforts to test the scaling hypothesis, humanity’s quest to understand how we acquire meaning is unlikely to be resolved just by better machines, more data, and bigger models.

Author

Vinay K. Chaudhri
Vinay K. Chaudhri currently supports a National Science Foundation initiative on knowledge axiomatization at Wright State University. Previously, he led AI research at SRI International and taught knowledge graphs and logic programming at Stanford University.
