Perspective

America Needs Better AI Ambitions

Arati Prabhakar, Asad Ramzanali / Dec 11, 2025

US President Donald Trump displays a signed executive order during the "Winning the AI Race" summit hosted by All‑In Podcast and Hill & Valley Forum at the Andrew W. Mellon Auditorium on July 23, 2025 in Washington, DC. (Photo by Chip Somodevilla/Getty Images)

AI will revolutionize knowledge, unlock new horizons of creativity, and deliver unimaginable prosperity, they tell us. So far, though, AI is automating emails, summarizing meetings, and flooding the internet with low-quality content. But if you peer into today’s research, you can glimpse tomorrow’s possibilities. AI could help design new medicines for seemingly intractable diseases. The United States could finally close educational gaps among our children with AI for personalized learning. More accurate AI-powered weather forecasts could save lives.

We won’t realize these futures just by investing billions in data centers for more text- and media-generating apps and letting the tech industry run free from regulations, as President Donald Trump's expected executive order aims to do. What we need is clear-eyed action by the government, both to mitigate AI risks and to develop applications that can change people’s lives.

The US has an unparalleled history of both driving and wrangling new technologies to create transformative positive change. In every case, that took both public and private action. Industrialization made goods cheap and plentiful, and government research uncovered the links between industrial pollution and disease so that public policies and eventually company practices could protect our health. Government research created the internet and made possible the chips, touch displays, voice interfaces, and GPS that dozens of companies – entire new industries – turned into our smartphones. Government-funded research is the foundation beneath virtually every new medicine that pharma companies bring to market, after they earn government approval. These are just a few examples of how the tug and tussle, the cooperation and the friction between public and private sectors can ultimately extract astonishing advances from new technologies while managing their downsides.

Today’s surge in generative AI also grew out of a foundation of government-funded research. From the 1950s onward, the Defense Advanced Research Projects Agency (DARPA) and other agencies invested in R&D that turned into the breakthroughs that undergird today’s AI – machine learning, neural networks, backpropagation, computer vision, and natural-language processing. The current AI discussion, though, largely assumes that the government can’t and won’t do much at all – or worse, that its only function is to clear the way for companies. Today, we are caught between flamboyant, fantastical claims and government leaders deferring to industry without the imagination to drive toward societal progress.

There is a better way forward.

***

It starts by managing AI’s harms and risks. Like every powerful technology in history, AI has a bright side and a dark side. As companies use AI to speed up all kinds of business functions and people use AI tools in daily life, we are already seeing the dark side: emotional dependence on chatbots leading to bot-encouraged suicides, nonconsensual deepfake intimate imagery, worker surveillance and job replacement, new forms of fraud, and more.

The problems are not hypothetical distractions along the way to a glowing future – they are happening, and they harm real people. But rather than dealing with them, companies are racing ahead, while policymakers have been too slow to counter the blitzkrieg of tech accelerationism.

But even if we address these risks – which is the work of researchers, government, and the public, here as with every other technology – AI left on its current trajectory, driven by corporate financial metrics, will not realize its promise. Why not expect more?

Just doing the same things more efficiently is too modest a contribution to our lives for this powerful technology. For example, one of the most widely cited early business gains from AI is a 14% increase in customer service productivity when an AI system offers customer service reps suggestions and links for answering customer questions. That’s meaningful for any chief financial officer. But is your experience with any company’s customer service noticeably better? Probably not. Do we think customer service reps are getting paid more because of this efficiency? Probably not.

After all the hype and trillions in AI investment, if all we get is some productivity gains, chatbots replacing search engines, and slop on social media, that’s not a triumph of progress or societal transformation. That’s just mid.

***

If today’s AI tools aren't enough, what can we demand from AI? The two of us are part of America’s phenomenal research enterprise, and one of its great joys is the window it provides into possible futures. With R&D colleagues across government, we ran a White House project called "AI Aspirations" in the last year of the Biden-Harris Administration. Here are three great AI possibilities worth pounding the table for.

One is AI for drug development. Tens of millions of Americans suffer from one of the thousands of rare diseases that have no treatment. Well over half our population will contend with dementia or other chronic diseases, with treatments that are only partially effective at best. Every year, the Food and Drug Administration (FDA) approves about 50 new drugs. Do the math. We’re not on a path to treat disease comprehensively within the lifetimes of your grandkids’ grandkids.
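To make that math concrete – a back-of-the-envelope sketch, assuming the roughly 7,000 rare diseases catalogued by the National Institutes of Health, the vast majority of which lack an approved treatment – even if every one of those ~50 annual approvals targeted a distinct untreated rare disease:

\[
\frac{\sim 7{,}000 \text{ untreated rare diseases}}{\sim 50 \text{ new drugs per year}} \approx 140 \text{ years}
\]

And in practice, most new approvals don’t target rare diseases at all, so the real horizon is far longer.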

Drug design and approval is a long and convoluted process that typically takes years or even decades and costs $1 billion or more for a single new pharmaceutical. It begins with scientists coming up with a new molecule that might be effective against a biological target – like a protein or a gene – that they suspect plays a role in the disease. The challenges ahead are enormous: Will this molecule actually stop or ameliorate the disease? What else will it do, causing safety problems or side effects? The proof comes in the FDA’s rigorous approval process, whose entire purpose is to test and validate safety and efficacy with the clinical trials that took us from the days of charlatans hawking snake oil to the proven treatments of today.

The AI technology that could start a sea change here isn’t large language models or image generators. Instead, it’s AI bio-design tools – AI models trained on biological data. In 2020, Google’s DeepMind trained a model called AlphaFold on protein data that had been painstakingly researched, collected, and curated with long-term government support. AlphaFold accurately predicts the three-dimensional structure of a protein from its linear sequence of amino acids.

Researchers at the University of Washington – long funded by federal research agencies – took the next step by training an AI model to predict a protein structure for a desired function. These are biology holy grails so stupendous that Demis Hassabis and John Jumper of DeepMind and David Baker of the University of Washington have already shared the 2024 Nobel Prize in Chemistry for them. And this work has fueled talk of eliminating all cancers and curing all diseases in a few short years.

For all of AI’s potential for biology, though, major barriers stand in the way of delivering lifesaving medicines. The most fundamental is this: an AI bio-design tool can generate more promising molecules, but a molecule isn’t a medicine until it has passed through rigorous clinical trials and won the FDA’s approval.

What would it take to develop an AI model so good and so reliable at predicting safety and efficacy that the FDA could rely on it to both improve and speed up its approval process? AI models that can be trained on a critical mass of lab and clinical data – data about how prior molecules affected biological systems, and ultimately, humans – will be key. One of the clear lessons from LLMs is that great capabilities arise from today’s AI models only after they train on enough useful data. But it is pharmaceutical companies that own vast troves of valuable clinical data about new drugs. This is data they collected in prior drug trials, including trials that failed, and they are typically loath to share it. What comes next is R&D that requires many researchers, many years, and huge compute resources to develop models and – crucially – to fully validate them.

These are huge barriers that won’t be overcome by individual companies alone, valuable as their work is. What’s needed is a host of investments and actions by government: funding to mobilize a broad and deep research community, including in universities; new mechanisms that enable pharma companies to share their valuable clinical data; extensive testing and evaluation to meet rigorous FDA standards; and ultimately – in stages, and only after rock-solid evidence shows that the AI tools are a real improvement over long-trusted methods – the FDA itself making changes to its processes.

One more government responsibility comes with biological work. With these powerful advances in AI bio-design come new risks. The biological tools that scientists use to crack open cancer cells are also the devil's plaything for a bioterrorist. That means research, policies, and practices to manage these accelerating risks are also necessary.

For a very different example, take educational technologies. Edtech innovators have promised for decades to close the learning gaps among K-12 students. But US reading and math scores on the National Assessment of Educational Progress (NAEP) stagnated – or declined slightly – in the decade before the pandemic, which then set students back sharply. Some edtech tools have helped some students, but the country’s goal of enabling every student to achieve their full learning potential seems as elusive as ever. Meanwhile, chatbots are sweeping into students’ lives, bringing new ways to explore and new safety hazards, new ways of fudging homework and new anxieties.

Amidst this confusion, AI-empowered learning tools offer tantalizing prospects – not as a substitute for the human elements of learning but as a way to augment them. Decades of research show that personalized tutoring can dramatically improve student learning – perhaps the single most effective intervention for K-12 education, but an extraordinarily expensive one. AI, in this case based on LLMs, offers the opportunity to provide that kind of deep personalization across a wide range of subjects at a fraction of the development cost. Many edtech companies are developing AI-powered tutoring assistants to implement this idea.

It’s easy to see how this can help the same students who were already benefiting from earlier edtech tools. But it’s harder to see how AI per se does anything about the many barriers to reaching every student. First, education budgets vary widely, leaving many schools unable to afford new technologies. Second, even schools with edtech funding struggle to judge whether tools are safe, protect students’ private information, and actually improve learning. Third, when schools do adopt technology, too many teachers simply find it an additional burden – more for the to-do list – because they lack the training and the ways to weave new tools into their classrooms.

Here too, government must act to realize the full potential of AI. For a superintendent grappling with the particular challenges of their students on the one hand and the marketing pitches of edtech vendors on the other, the federal government could make a huge contribution to getting it right. That could include a “Good Housekeeping”-style seal of approval for edtech tools that meet stringent safety, privacy, and quality requirements, as well as carefully vetted analyses of which edtech tools work well under which circumstances. Federal funding to augment local budgets for AI-empowered edtech tools is also essential. These measures are how we can move past simply reacting to AI coming into schools, and instead seize it to positively change the lives of generations of children.

For a final example, consider weather forecasting. Extreme weather events are becoming more common, more damaging, and more costly. Each year in the 1980s brought an average of 3.3 weather disasters with at least $1 billion in losses (adjusted for inflation). In 2024, the US experienced 27 of these billion-dollar extreme weather events – an increase of more than eight-fold. Cumulatively, those 2024 disasters cost our economy $183 billion and took the lives of 568 people.

For decades, weather forecasting has relied on physics-based models of the Earth, which predict how temperature, pressure, winds, and moisture change over time. Investments in improving these models have yielded meaningful results: today’s four-day forecast is as good as a one-day forecast was thirty years ago.

And now, AI models trained on observational data – from sensors on weather stations, ships and aircraft, satellites, ocean buoys, and weather balloons – are showing promise to outperform the traditional models. Where a physics-based model can take an hour or two to generate a 24-hour forecast, an AI weather model can do it in a second or two, with very good results. That’s tantalizing – and yet, as with any statistical model, “very good” is not “perfect.” When an AI weather model misses, lives can be at stake.

Turning these research results into daily forecasts that reach and are trusted by millions of people requires government action. The data for training AI weather models, and the input data for generating each new forecast, come in large part from the National Oceanic and Atmospheric Administration (NOAA). And as with every high-consequence AI application, deep and rigorous testing and evaluation across an extraordinarily wide range of weather conditions will be needed before asking millions of people to rely on these forecasts.

***

These are domains where progress is far from guaranteed. AI bio-design tools may fall short and provide only incremental advantages. AI tutoring tools could meet the same fate as prior generations of edtech. And a few big incorrect predictions could tank trust in AI weather models.

But as the saying goes, “No guts, no glory.”

Elsewhere, we have been critical of the second Trump administration’s approach to AI for, among other things, its dangerous weakening of public data, its cuts to federal funding for research, and its opposition to laws that protect people from the risks and harms of AI.

More generally, this administration is actively undermining the government functions and capacities that will be needed for any of these big ideas to improve lives.

The Department of Health and Human Services (HHS), home of the FDA and the health research agencies, is in complete disarray, having already shed 20,000 employees and halted critical research. Worse, facts have become purely optional. The tens of millions of Americans with untreatable diseases won’t get help unless a functioning HHS funds deep research and rigorous testing, and uses facts to make decisions.

The Trump administration is also gutting the Department of Education and NOAA, as promised in the Project 2025 plan. The Department of Education has already shed half of its staff, and NOAA has lost as many as 1,300 people. The immediate consequences are weaker schools and greater peril for Americans in disaster-prone areas. In the days to come, the cuts will look like a teacher with no federal support for picking a safe and effective AI tool, and an AI weather model missing data from a weather balloon that was never launched. These cuts – as much as the federal policy failures around AI – sacrifice the future that AI could help us achieve.

***

Medicine, education, and weather forecasting are just a few of the areas where we can wield AI for massive transformations that change lives for the better. AI can also open the door to far better infrastructure for safer and smoother transportation, fast and seamless government services after a disaster, new materials that enable advanced semiconductors while curtailing environmental risks, a next-generation electrical grid that delivers both energy security and decarbonization, and more effective disease prevention and population health.

For-profit companies don’t have the incentive to invest in foundational research, and it is the public sector that bears responsibility for the safety regulations and vital data that serve all Americans. Waiting for industry alone to solve these problems simply won’t work. As in prior technology revolutions, we’ll need an active government with nimble public policymaking and robust research investment. That means stopping this administration's destruction – and also moving past the incremental thinking of current policymaking and the state of enthrallment in which the industry holds so many. We need major public investments in data and compute, standards and testing, and research and development to build and validate new AI capabilities, in addition to active government roles in shaping fair markets, protecting Americans’ rights and safety, and building trust.

Other countries, especially China, are racing to build AI futures that reflect their values and national goals. That means doing this work here isn’t just feasible – it’s necessary. The US has invented, deployed, and governed the technologies that changed the world, inspired generations, and made us the world’s leader for nearly a century. We didn’t get to this position by gutting science and technology and limiting government to timid incrementalism. It’s time for the US to lead again – with the ambition and the confidence that has defined it.

The authors thank Anima Anandkumar, Robert Huang, Deirdre Mulligan, Arvind Narayanan, Allison Preiss, Ellen Qualls, Wade Shen, Ganesh Sitaraman, and Bina Venkataraman for helpful comments and conversations that informed and improved this essay.

Authors

Arati Prabhakar
Arati Prabhakar served in President Biden’s cabinet as his science and technology advisor and as the Director of the White House Office of Science and Technology Policy (OSTP). She is currently an Executive Fellow in Advanced Technology Policy at the University of California, Berkeley.
Asad Ramzanali
Asad Ramzanali is the Director of AI & Technology Policy at the Vanderbilt Policy Accelerator, Vanderbilt University. He previously served as the Chief of Staff and Deputy Director for Strategy at the Biden-Harris White House Office of Science and Technology Policy (OSTP).
