Transcript: House Hearing on DeepSeek
Justin Hendrix, Ben Lennett / Apr 9, 2025On Tuesday, April 8, 2025, the US House of Representatives Subcommittee on Research and Technology hosted a hearing: DeepSeek: A Deep Dive.
Witnesses included:
- Adam Thierer, Resident Senior Fellow, Technology and Innovation, R Street Institute
- Gregory Allen, Director, Wadhwani Center for AI & Advanced Technologies, Center for Strategic and International Studies
- Julia Stoyanovich, Institute Associate Professor of Computer Science & Engineering, Tandon School of Engineering; Associate Professor of Data Science, Center for Data Science; Director, Center for Responsible AI, New York University
- Tim Fist, Director of Emerging Technology Policy, Institute for Progress
What follows is a lightly edited transcript of the discussion. Please consult the official video for the hearing when quoting.
Rep. Jay Obernolte (R-CA):
The Committee will come to order without objection. The chair is authorized to declare a recess at any time. Welcome to today's hearing entitled “Deep Seek: A Deep Dive.” So we'll start out by opening with a five-minute statement from myself. I'd like to welcome everybody to our first research and technology subcommittee hearing. I look forward to engaging with our distinguished panel witnesses on what to me is a critically important topic. It's clear that artificial intelligence will have a profound transformative effect on our country. My experience leading the bipartisan AI task force last year strengthened my belief that maintaining American leadership and AI development and deployment is not only an economic imperative, but also a national security requirement that affects every sector of our economy and our society. As we examine the implication of DeepSeek’s recent AI models, our nation is at a critical juncture in the global artificial intelligence landscape.
The introduction of DeepSeek represents a concerning milestone. It's the first non-American reasoning AI model. This capability, pioneered by American companies, is now being replicated by a company directly influenced by the Chinese Communist Party. This development should raise concerns for all of us. We must consider what's at risk. Americans and people worldwide are increasingly sharing their private and personal data. With AI systems. The deployment of DeepSeek provides the CCP with a backdoor to this sensitive information. This risk will only grow as we enter the era of agentic AI, where AI systems will actively book our travel, manage our finances, analyze our health records, and handle other sensitive personal affairs on our behalf. We can't allow DeepSeek and other CCP-controlled entities access to this information. However, there's also a silver lining in this situation. DeepSeek reportedly distilled their models from open AI systems, demonstrating that Chinese AI development remains reliant on our innovations.
Furthermore, despite the claim that DeepSeek R-1 achieves similar results to American models at a lower cost, Google recently announced its open-weight Gemini three model, which reportedly achieves 98% of DeepSeek R-1's performance for just 3% of the cost. American ingenuity continues to lead the way, but we cannot take our continued leadership for granted. Open-weight models underpin much of the AI and technology infrastructure worldwide, including in the United States. If we allow China to surpass us in open weight models, we risk seeding leadership in global AI infrastructure to the CCP. It's crucial that we understand the capabilities of these models, the CCP goals they could propagate, and their potential vulnerabilities in order to encourage the adoption of American models over those developed in China. This is precisely why the federal government and American industry must collaborate to ensure continued American leadership in the development of AI standards.
If the United States does not set these standards, then China will. China's approach to AI development has also raised serious ethical and security concerns, especially relating to the prevention of harmful applications of AI. For example, according to an evaluation, philanthropic DeepSeeks model was found to be the least effective at blocking information about bio weapons amongst all the models that they tested. While Chinese AI has so-called safeguards against providing information about Tiananmen Square and the Uyghurs, it lacks safeguards against actual malicious uses of AI. We must ensure that Chinese AI, which operates under these flawed standards, does not come to dominate the global market. The United States must take the lead in developing the most advanced AI systems while also fostering a light-touch governance model that safeguards against malicious use while simultaneously encouraging innovation. We cannot afford to stifle our innovators with burdensome regulations when competitors like China are racing ahead with fewer constraints rates.
Promoting innovation and AI development is the key to maintaining American leadership in this field. To support this, I, along with some of the people on our committee here, have introduced the Create AI Act again in this congress, which establishes the national artificial intelligence research resource. This will provide researchers and developers across the computational and data resources they need to create competitive American AI systems that embody our values rather than those of the CCP. I'd like to thank our witnesses for being here today. I look forward to your testimony and to having a productive discussion on this critically important topic. So all, you'll back the balance of my time. I now recognize the Ranking Member, the representative from Michigan, for her opening statement.
Rep. Haley Stevens (D-MI):
Thank you, Chairman Obernolte, someone I like to recognize as a friend, actually, and someone who I've gotten to know over my years in Congress, over his years in Congress, and particularly in his leadership role last term in the Congress, running the artificial intelligence task force. And now here he sits in a chair that I was once privileged to sit in as the Chairman of the Research and Technology subcommittee hosting and bringing together our first subcommittee hearing of the hundred and 19th session of Congress, a deep dive into what we call DeepSeek, which is a brand of artificial intelligence that has been developed by the Chinese Communist Party, CCP. And I'm explaining that because we have an incredible group of teenagers in the audience here who hail from the great state, arguably the best state in the Union, Michigan. We have reverends from Detroit, Rabbi from West Bloomfield, and students who represent the incredible black Jewish student alliance that the Anti-Defamation League of America has brought together.
So it is certainly very significant to have them in this hearing and paying witness during this important moment in time as we seek to answer the questions of what is our vision for this nation where there is no vision. The people perish. And I'll give you this, that we are certainly seeing a vision right now from the current administration, and it doesn't appear to me as though it is the most successful approach to winning the future as it pertains to the China competition. And I say that with respect, but it is very clear now that the United States has a real competitor in artificial intelligence. And so the fact that we're having this hearing to deep dive into what the Chinese Communist Party is doing with artificial intelligence is very important. I sit on the China Competition Committee, that's another committee that I sit on and we are grappling with this and my friends.
We recognize that here we are in the year 2025, the quarter century mark that we have now hit as a civilization, as a society, and we are looking to the year 2050 very squarely. And to say that to the teenagers in the room, you will be well into your career when we are at the mid-century mark. And our goal must be that the rules-based order of the world needs to continue to be led by open, free-market capitalist democracies. And so, as we look to strengthen the breadth of the democracy, this beautiful country that Mr. Ulti and I both love, we must do so through the pursuit of science, technology, research, and development. It is not to make anyone who I respect in this room and on this committee, and the leadership that is brought by the Chairman, we have a problem on our hands when we are firing scientists.
We have a problem on our hands when we are squeezing out our scientific research agencies that pay for themselves over and over and over again. What do I mean by that? Yes, we take what we call taxpayer money. The money that our hardworking parents give to the treasury of the United States and we as members of Congress allocate to go into something like the National Science Foundation, something like the National Institute of Standards and Technologies that has a sub-agency focused on artificial intelligence and how we as the United States of America can set the rules for the road. So as the Chairman just articulated, as we are looking at artificial intelligence to potentially start dictating more of our life and that might be helpful because there are a lot of things that we have to deal with. Booking tickets, you were saying, sir, making appointments, managing, I don't know all the likes that you get or groceries you need to buy.
There's a lot. We are producing more data by the second than at any point in civilization, and Mr. Fist is nodding because he knows what I'm talking about. I was working in an IOT research lab before I got to Congress, my friends, the industrial Internet of things, and that was a decade ago. So our teenagers weren't even teenagers then. And the point is that this AI could really help us. It could help advance society, it could help us advance healthcare outcomes to students in the room. It can make you better and bolder and more productive and more successful in the future. We've spent a lot of time in this 21st century ushering in fantastic technologies, smartphone and cars that you can call from a device on your hand, and all of that. We don't want to be afraid of technology, but we want to be careful that it is our rules, our development of innovation, as the United States of America, that is pushing forward how we embrace this technology. And I say that with respect to anyone in the world working on this, but one thing we know with the Chinese Communist Party is they're oppressive, not the people are great, the people are fine, but the party is squishing their own people, and that's a challenge. So I think we'll hear a little bit about this today. Chairman's got his gavel. I'm over five minutes, and with that, I'll yield back. Thank you.
Rep. Jay Obernolte (R-CA):
I thank our Ranking Member. I'll now recognize the Chairman of the full Committee, the Representative from Texas, for his opening statement.
Rep. Brian Babin (R-TX):
Thank you, Chairman Obernolte, for convening today's hearing. I also want to thank our expert panel of witnesses. Thank you all for being here. Looking forward to your testimony and your participation. Today's discussion presents us with a critical opportunity to examine the impact of DeepSeek's artificial intelligence models on America's technological leadership innovation ecosystem and our national security. DeepSeek serves as a significant wake-up call. And on the day of President Trump's second inauguration, this Chinese-owned company released its R-1 model, which quickly surpassed ChatGPT to become the top free application in US app stores. The Chinese Communist Party presents a formidable and growing strategic challenge to our technological leadership. We know the CCP is aggressively pursuing plans to dominate next-generation technology, including through the theft of our research innovations and our sensitive data. Supporting DeepSeek is part of this plan.
In fact, several US government agencies, such as NASA, the US Navy, have banned DeepSeek on federal devices because of its serious data privacy concerns. The emergence of DeepSeek is particularly troubling given reports that it developed these models using American advanced semiconductor chips, including ones that the Department of Commerce had banned from being sold to Chinese entities. However, what's more alarming is that without US leadership and open weight models, DeepSeek could become the foundation for global AI applications that promote CCP values and potentially contain hidden vulnerabilities. And if the United States sees leadership in AI to the CCP, we risk not only our economic future, but also our national security. The development and deployment of critical technology such as AI, quantum, and advanced semiconductors could be compromised by the values of the CCP, resulting in a failure to uphold American ideals of fairness and transparency.
But let's be clear, we cannot surpass China by imitating its approach. We must build systems that promote coordination and cooperation across the entire US science and technology enterprise, both public and private. While leveraging the benefits of the free market, we should continue to follow the recipe for success that has led to American leadership and other emerging technologies such as information technology, quantum biotech, space, and energy. This includes tearing down government barriers to private sector innovation, lowering taxes, protecting private property, reducing energy costs, and leveraging standards and best practices rather than aggressive regulations in market-corrupting subsidies. America's strength has always been our innovative spirit, our entrepreneurial drive, and our commitment to individual liberty. The Trump administration acknowledged this fundamental truth in a January executive order, which stated that the policy of the United States is to sustain and enhance America's global AI dominance in order to promote human flourishing, economic competitiveness, and our national security.
This approach underscores the significance of American leadership without unnecessary government intervention. We've already seen progress in this direction with the announcement of the Stargate project and the proliferation of little tech investments by the private sector. These trends build upon the substantial advantage we have as a nation. We have robust capital markets, innovative companies, cutting-edge research institutions, and bountiful natural resources that fuel our emerging technology enterprise. But DeepSeek is a reminder that we must be vigilant and never take our leadership for granted. Evaluating the capabilities of Chinese models is critical to understanding the competitive landscape, maintaining our technological advantage, and assessing the risks they pose to national security and how we respond to challenges like DeepSeek will determine whether the United States continues to lead in the world and AI or whether we allow the CCP to overtake us. I look forward to hearing from you, witnesses, today about DeepSeeks models and how the US public and private sectors can accelerate American AI leadership into the future. And with that, I'll yield back. Mr. Chairman. Thank you.
Rep. Jay Obernolte (R-CA):
Thank you, Chairman Babin. I now recognize the Ranking Member of the full Committee, my colleague from California, for her opening statement.
Rep. Zoe Lofgren (D-CA):
Well, thank you, Mr. Chairman, Ranking Member, for this hearing. I look forward to a productive discussion about our country's progress and artificial intelligence. In the last Congress, the science committee developed several good bipartisan bills seeking to advance the nation in AI. We understood that our leadership was threatened even then, and today the threat is plain as day for all to see. Two months ago the release of the B-3 and R-1 AI models by the Chinese firm DeepSeek cause many Americans to awaken to the realization that we can't take our leadership in emerging technologies for granted. Securing our research and applying targeted export controls are necessary, but they're insufficient steps to ensuring American leadership in AI to win the AI race, we have to realize our potential by enabling and supporting scientific advancement through the infrastructure institutions and most importantly the people who truly drive American innovation.
Put another way, we need both a defensive strategy and an offensive strategy. Now the science committee is responsible for supporting and overseeing non-defense federal AI R&D. One key agency under our purview is the National Science Foundation, which focuses on advancing the fundamental science of AI. As of last year, NSF was on track to spend about $750 million on AI related research to understand how this funding contributes to AI advancements consider the funding of PhD student Andrew Bartow whose work in computational reinforcement learning laid the foundations for modern AI. Earlier this year, Dr. Bartow received the prestigious Turing Award for his contributions. His early work was NSF-funded, as has been so much of the early research, enabling today's AI breakthroughs. The NSF AI Institutes program, a $500 million investment, connects over 500 funded and collaborative institutions across the United States to form the nation's largest AI research ecosystem.
I'm very excited by the work of the AI Institute for Next Generation Food Systems at UC Davis, the people, institutions, and infrastructure supported by NSF, the Department of Energy, NIST, and so many more federal science agencies form the backbone of American scientific leadership. I have to say it's unfortunate that our nation's great scientific minds are now facing an environment of uncertainty, distrust, and reduced funding. President Trump with his Musk Doge hackers, our wreaking havoc on our scientific enterprise, not just endangering our ability to lead AI in so many critical fields, but I think actively sabotaging our leadership. I'll name just a few of these setbacks. The Trump administration has fired thousands of government professionals in critical technology roles. These talented individuals are now being recruited by other countries, including China. They've terminated countless active research and education training grants, and we anticipate further substantial cuts in the FY 2026 budget requests.
They've deported graduate students for exercising First Amendment rights, effectively broadcasting to foreign students and scientists to stay away in the weeks leading up to this, hearing that universities across the country have significantly reduced incoming PhD student admissions because of funding uncertainty. Who among the next aspiring students turned away? Might've been our next Andrew Barton. We'll never know. Just two weeks ago. I think this is quite chilling. A journal found that 75% of PhD students polled were considering leaving the United States. When asked where they'd go, one response captures the grave concern that we should consider today: anywhere that supports science. If we seek to be global leaders in AI, we have to collectively wake up to the reality of the harm that is being done under our watch, the harm being done to the science enterprise in the United States. I hope that the US still has time to reverse course on our worst impulses and win the AI race. I look forward to having this discussion today about paths forward, and I thank you, Mr. Chairman, and I yield back.
Rep. Jay Obernolte (R-CA):
Thank you. Ranking member Lofgren. We'll now go to witness testimony. Our first witness today is Mr. Adam Thierer. Mr. Thierer is a senior fellow for the technology and innovation team at the R Street Institute. He previously spent 12 years as a senior fellow at the Mercatus Center at George Mason University. He also previously served as the president of the Progress and Freedom Foundation. Mr. Thierer, you're recognized for five minutes for your testimony.
Adam Thierer:
Thank you, Chairman Obernolte, Ranking Member Stevens, and members of the subcommittee. Thank you for the invitation to participate in this important hearing. My name is Adam Thierer, and I'm a senior fellow at the R Street Institute, where I focus on emerging technology issues related to this hearing. I recently released a three-part R Street series entitled “Ramifications of China's DeepSeek Moment,” and I have appended those essays to my written testimony. I'll begin with an admission. I am as stunned as just as all as you are about how fast China has caught up with America in artificial intelligence. In fact, in testimony, just 10 months ago before the Joint Economic Committee, I noted how lucky America was that our AI innovators were firmly in the driver's seat and not yet having to worry about China surprising us with a powerful new AI system that might represent a modern Sputnik moment.
And then, in late January of this year, my worst fears came true when that AI Sputnik moment happened with DeepSeek's January 20th launch of its open-source R-1 model. It's sent shockwaves or tech markets and policy circles alike and rightly so, DeepSeek’s model competes favorably with leading American models and in a lower cost. President Trump and many other policy makers have referred to it as a quote, wake up call for our nation. But what happened next was equally stunning. Just days after DeepSeek’s R-1 launch, Alibaba, another Chinese tech giant, announced the most powerful version yet of its Quinn model, which outperformed R-1. DeepSeek responded by announcing it would release an even more capable version of its model. Chinese tech giant Tencent Holdings then launched its T one model that outperformed both of those models and some American models. Finally, Manus AI, a Chinese startup, launched a powerful new general AI agent.
All of this happened in less than two months. Perhaps these developments should not have surprised us after all. The Chinese communist party has made its imperial ambitions clear with its stated goal to become the world's primary AI innovation center by 2030. And the CCP uses aggressive forms of innovation mercantilism in pursuit of that goal. The House Select Committee on Strategic Competition between the US and the Chinese Communist Party has highlighted how the CCP has pursued a multi-decade campaign of economic aggression against the United States and its allies. More problematically. The CCP is engaged in a concerted effort to export digital authoritarianism through its digital Silk Road effort, meant to spread influence through global investment. Experts now speak of a China-led authoritarian tech offensive and the rise of an AI axis of autocratic states looking to advance their control agendas through technology. I believe there are three lessons from the DeepSeek moment.
First, China clearly understands that AI is the most important general-purpose technology in dual-use technology of our era. AI is essential to both widespread economic development and national security goals. Second, China understands that AI is also the most important information technology of modern times and that it has profound potential to influence cultural values and speech policies globally. Third, AI China understands that rapid diffusion of AI both drives these objectives. In many ways, China is flooding the market with nimble but very effective AI systems, just as they have previously used low-cost rapid response strategies to gain greater market share globally and other sectors. These are the reasons why House Energy and Commerce Committee Chairman Brett Guthrie recently argued that China's rapid ascendancy and advanced computation really is an existential threat to our country and the world. If America is going to win the so-called AI Cold War, we need to understand that traditional containment strategies won't work.
We're not going to bottle up all Chinese AI advances with analog air controls, and costly and poorly targeted industrial policy Gimmicks won't help much either. We won't beat China by copying China. Instead, America must race ahead and stay in the cutting edge of the technological frontier to win. We must re-embrace the core advantages in this fight with China that we've always had: freedom, the freedom to innovate, invest, speak, learn, and grow using advanced technological systems. In the 1990s, we did this with a flexible governance approach that help America dominate the digital revolution, and we can do it again in a bipartisan fashion. Here's a very quick checklist of the pro-free technology agenda that we need to advance AI opportunity in America. We must embrace open source AI innovation and let it blossom, ensure diverse energy markets for AI, win the talent war by attracting the world's best and brightest scientists and engineers, ensure balanced copyright and data privacy policies, craft a national framework that preempts or puts a moratorium on the confusing patchwork of now almost 1,000 state and local AI proposals pending today, require federal agencies to review their existing policies to determine, determine how they might hamper AI, ensure agencies have the resources and training needed to address novel AI issues and of course defend the importance of free speech in the algorithmic age. This opportunity-oriented AI agenda is the way to beat China. We must not allow fear-based policies to impede American AI development and innovation or else China wins. Thank you for holding this hearing, and I look forward to hearing your questions.
Rep. Jay Obernolte (R-CA):
Thank you, Mr. Thierer. Our next witness is Mr. Gregory Allen. Mr. Allen is the director of the AI Center at the Center for Strategic and International Studies. He previously served as the director of strategy and policy at the Department of Defense Joint Artificial Intelligence Center. He also previously served as the head of the market analysis and competitive Strategy at Blue Origin. Mr. Allen, you're recognized for five minutes for your testimony.
Gregory Allen:
Mr. Chair, Ranking Member, distinguished members of the committee, thank you so much for the opportunity to be with you here today. Deep seek is an important milestone in China's progress in artificial intelligence, but this progress did not come out of nowhere. For one thing, if you were to go to the leading AI conferences around the world where top researchers meet, it was utterly routine even five years ago for Chinese researchers to have that win the top paper awards to present research results that are impressive to all of their peers in their field. China's progress in AI has been incredibly impressive and incredibly steady for the better part of a decade at this point, and they really do recognize it as the most transformative technology of our age for both economic and military power. In DeepSeek’s case specifically, this is a firm that is descended from a high-frequency trading firm.
It actually comes out of the finance industry. And if you've heard about the Michael Lewis book, Flash Boys, you will know that these types of companies are obsessed with their computing infrastructure. They build their own data centers; they come up with every potential advantage they can because they are chasing nanosecond-level advantages in beating the market in executing these trades. Well, these computational infrastructure advantages are a critical reason why DeepSeek had the technical talent pool and the technical infrastructure needed to achieve what it did with its V three and R-1 models, which are broadly equivalent to the global state of the art circa mid-2024. So in December of 2024, they are achieving what American companies had achieved in mid-2024. But I think an important question here is, where might they be if US policy had been different? And here speaking specifically about the Biden administration's export controls on advanced computer chips, and it is actually the case that the CEO of DeepSeek in a mid-2024 interview said that the number one challenge facing his company was American export controls, not getting high-quality talent, not getting access to more money.
The number one challenge facing his company was American Export Controls. As a previous Chinese government think tank paper put it over a year or several years ago, without AI chips, there's no AI. And this is the situation that we've tried to face the Chinese government by restricting their access to chips. Now, right now, Nvidia, which is the leading global provider of advanced AI chips, their growth as a company is supply-constrained, not demand-constrained. If TSMC, their preferred manufacturer for chips, could make more chips, they would be selling more chips. For every chip that comes off that assembly line, there are five people who would love to buy that chip. And what we said with those export controls is that we're not going to allow those advanced chips to go to China, where we know they will directly advance the party's agenda for economic and military power.
Right now, the most advanced computing cluster in the world is the Colossus Supercomputing cluster run by XI. In Memphis, it has 200,000 H 100 chips. This is an incredibly beefy supercomputer. If we did not have these export controls over the past few years, I think it is entirely plausible that the largest supercomputing cluster for AI in the world would be in China right now. But what are we going to do in the future? I would say export controls depend upon three critical factors: authority, capacity, and will. What we have right now are really good authorities between the IEPA law that's very, very old. And the more recent law that's passed the US federal government and specifically the executive branch, if they want to put in place export controls, they generally have legal means to do it. In terms of will, I think here is where the Biden administration fell short.
To give you just one example, in December 2024, the Biden administration implemented export controls on high-bandwidth memory chips. These are critical input goods to Nvidia AI chips and everybody else's AI chips. In fact, about half of the dollar value of an NVIDIA chip is these HBM input chips. Well, that export control was telegraphed in July and then implemented in December. So, what do you think Huawei did in those interceding six months? They amassed a multi-year stockpile of those chips. And this is why I say the administration fell short in terms of will. And the final issue is about capacity. Are we actually giving the government agencies that are responsible for administering these export controls the staff, the money, and the enabling technology that they need to successfully implement these controls? And I would say no. This agency's budget has gone down in inflation-adjusted terms since Russia's invasion of Ukraine. Since these Chinese export controls, their job has gotten massively harder, and the resources to support them have dwindled. I believe there's no more powerful return on investment opportunity available anywhere in US national security than strengthening our export controls enforcement and administration capacity. And with that, I thank you for your time.
Rep. Jay Obernolte (R-CA):
Thank you, Mr. Allen. Our next witness is Dr. Julia Stoyanovich. She is the Institute Associate Professor of Computer Science and Engineering, Associate Professor of Data Science, and Director of the Center for Responsible AI at New York University. She is a recipient of the Presidential Early Career Award for scientists and engineers, awarded by nomination from the National Science Foundation, and a senior member of the Association for Computing Machinery. Dr. Stoyanovich, you are recognized for five minutes.
Dr. Julia Stoyanovich:
Chairman Babin, Ranking Member Lofgren, Subcommittee Chairman Obernolte, Subcommittee Ranking Member Stevens, and other members of the committee. Thank you for the opportunity to testify on the national security and technological implications of DeepSeek in the few short months. What was once unthinkable has become a cliche, and we have already heard this cliche today, but I'll repeat it once again, but with correct pronunciation, we are in a Sputnik moment. The rapid rise technical openness and competitive performance of DeepSeek have challenged the longstanding assumption that the US would retain its global leadership in AI. By default, DeepSeek’s latest models rival top US systems while requiring fewer resources to build, reshaping global expectations around efficiency and access, and signaling the risk of falling behind for the United States. In this strategically vital domain, this demands bold federal action, sustained investment, coherent policy direction, and a commitment to open high-impact research that serves the public interest.
US leadership in AI is not just about innovation; it's about protecting our economic resilience, national security, and global influence. Let me turn to how DeepSeek compares to American models in terms of accessibility, transparency, and security. Technically, DeepSeek and US-based models share the same developmental structure, pre-training, fine-tuning, and alignment. However, DeepSeek emphasizes efficiency, openness in technical reporting, and specialization in domains like code and math. Its transparency, particularly in publishing the model architecture training methodology as well as evaluation benchmarks, meets or exceeds that of any US-based vendor. That level of transparency matters. It allows researchers to understand how models are built, assess safety claims, and replicate results.
It lowers barriers to entry and fosters inclusive innovation across academia, startups, and public institutions. But transparency is often misunderstood. Releasing model weights alone is not enough. While weights tell us how a model processes its input, they do not reveal why it behaves the way it does, what it has learned, or what knowledge it's missing. That insight comes from data transparency, knowledge of the data sets used in training, which can reveal risks related to accuracy, bias, privacy, and security. Today, neither DeepSeek nor leading US vendors provide adequate data transparency, and this limits meaningful evaluation and raises concerns about accountability and misuse. And this brings me to the core issue, and there is data governance and national security. Deep Seek's privacy policy indicates that user data, including prompts, device information, and IP addresses, is stored on servers in China. There is no public documentation about whether user inputs are retained, reused, or subject to opt-out.
And this creates a serious vulnerability. US citizen and enterprise data could be repurposed under the jurisdiction of a strategic competitor. And this is not just a matter of consumer privacy. As LLMs become embedded in productivity tools, enterprise systems, and education platforms, how user data is collected and managed becomes strategically vital. If widely used models capture data from US engineers or regulators, they could expose sensitive coding practices, priorities, or strategies stored abroad. Such data presents not only a privacy risk, but a loss of data sovereignty. Unlike accidental leaks, the surveillance could be deliberately engineered with no visible trace to the user. And without enforceable protections or transparency, we can't reliably detect or prevent it. To be clear, DeepSeek's opacity is not unique. US companies also vary in their practices, with limited transparency on how data is stored or reused. In short, no major provider offers appropriate transparency today, but none operate under the same legal obligations or pose the same jurisdictional risks as DeepSeek.
It's important to emphasize that data governance and transparency are not in conflict. They are mutually reinforcing. Strong data governance establishes the policies and safeguards that protect sensitive information while transparency ensures that those protections be evaluated, verified, and improved. Without transparency, we can't check whether data is being handled in accordance with legal and ethical standards. Conversely, transparency without governance can expose data to misuse; together, they form the foundation for responsible AI. Transparency enables oversight, and governance ensures accountability. So what should the US do? I have three recommendations. The first is to foster an open research environment to close the strategic gap, and this includes robust funding for fundamental AI science, public datasets, model development, and compute access. The national AI research resource is essential here, providing academic institutions with data, compute, and training that is necessary for us to compete. Federal support for the National Science Foundation and other agencies is vital for sustaining and advancing our research and skilled AI workforce. My second recommendation is to incentivize transparency across the AI lifecycle, and I can elaborate on this during questions. And my third is to establish a strong data protection regime. The US must lead not only in AI performance, but in responsible privacy-preserving AI infrastructure. And there are many examples that we can learn from, but we have to develop our own models here. Thank you, and I look forward to your questions.
Rep. Jay Obernolte (R-CA):
Thank you, Dr. Stoyanovich. Our final witness is Mr. Tim Fist. Mr. Fist is the director of emerging technology at the Institute for Progress. He's also a senior adjunct fellow at the Center for New American Security. Mr. Fist, you're recognized for five minutes.
Tim Fist:
Thank you, Chairman Obernolte, Ranking Member Stevens, and other members of the subcommittee. Thank you and good morning. And today I'm going to be talking about responding to the DeepSeek moment through the lens of both state capacity and r and d. My organization, IFP, focuses on accelerating scientific progress and innovation. So I see this as a suitable lens, and I'll start by noting that many of my fellow witnesses have described the DeepSeek moment as both a Sputnik and a Sputnik moment. And while I think this comparison is powerful, I also think that it is somewhat incomplete, and I think that particular ways that it is incomplete can help inform us about what lessons to take away. So I'm going to cover that three lessons that I see as most important lessons. Number one, to win against China in open source AI, we need to secure every layer of the tech stack.
So if we look at the rocket technologies behind Sputnik, they were closed; we couldn't use them. In contrast, DeepSeeks AI models and applications are freely available to the whole world. So not only can they use them against us, we can use them against us. So what are the actual risks here? Well, the DeepSeek app has already shown a tendency to spread CCP propaganda, but if we were only worried about communist chatbots, we probably wouldn't be here today. The bigger risks that are on the horizon come from this new paradigm in AI known as agents. You can think of agents as AI models designed to operate autonomously, completing tasks by themselves. Research shows that these models can have back doors, so you could have your software engineer agent happily programming away, and then suddenly, when a trigger condition is met, like if it is deployed in the networks of an American organization, suddenly it can start outputting insecure code.
And crucially, these kinds of back doors are currently impossible to detect. And this may not seem like a big deal now, but it will be a big deal when we want to use AI not just to give us cooking recipes, but also to write all our code, manage our health and energy systems, conduct scientific research, and make military decisions. This world is fast approaching and in some fields like software, it's actually already here and the way that we outcompete China in this world where everyone wants to deploy AI agents, but everyone is worried about the vulnerabilities they create is not just to have the best AI, but also the most secure AI and the most reliable AI. For example, investing in AI interpretability research could help us detect back doors but also understand the capabilities of Chinese models and make our own models more reliable.
And we need security not just at the software layer but also at the hardware layer. If the foundation of American science and industry is going to be built on AI, we also need to ensure that the computer chips and that data centers that our models are running on can't easily be sabotaged. Federal RD can and should be used to accelerate the build-out of this secure and reliable AI technology stack. This work could be supported through NIST and the national labs. In my written testimony, I provide specific examples of projects and funding mechanisms. Lesson two: We need a better alarm system. In the Sputnik case, the Soviets blew right past us. Their rockets had 50 times the payload capacity of our own. In AI, DeepSeeks models still lag behind American models. This means that we actually still have time to prepare for more significant breakthroughs from China.
But AI is moving fast, and actually, our initial reaction to DeepSeek was based on their analysis of their own capabilities rather than our own. Next time, it needs to be different. We need a team of technical experts within the federal government to help the government see what's coming and respond proactively with things like export controls. This team should be collecting data from industry as well as the intelligence community and analyzing the capabilities of Chinese models and chips before they come out. NIST could be a great home for this team, and it's good at hiring technical talent, and it already has the trust of industry, but NIST normally works on things like standards and guidelines rather than rapid analysis. This could be solved by giving NIST, ideally the AI Safety Institute at NIST. A clear tasking. Final lesson three. We need a faster engine. Unlike Sputnik, DeepSeek was built using American technologies.
As my colleague Greg has mentioned, this gives us leverage in the form of export controls, and indeed, DeepSeek's founder said it best. The only thing holding them back is access to American chips, but it's not just enough to slow down their r and d engine. We also need to speed up our engine. We need to ensure that American developers are building the world's best open source models, not just for chatbots, but also for AI used in science, robotics, and manufacturing. In my written testimony, I talk about cost-efficient ways we could support open source using prize competitions at NIST, combined with model and dataset hosting on the national AI research resource. A faster engine also means making sure that we have enough electricity to support the next generation of cutting-edge models. The demands of AI development are already straining America's electricity infrastructure. First, we need to reduce timelines for environmental permitting. Then we need to help developers take on the technical risks that are involved in rapidly building new power plants, which should include natural gas, geothermal, and small modular reactors. One idea currently being pursued by the Department of Energy is making federal lands available for building power plants and data centers. The Department of Energy should complement this with research into technologies like rapid construction of modular data centers, carbon capture for natural gas, and mechanisms for quickly scaling up nuclear supply chains. I'll conclude my remarks there, and I look forward to the discussion.
Rep. Jay Obernolte (R-CA):
Well, thank you, Mr. Fist, and thank you to all of our witnesses. We'll move now to questions and answers from members of our committee. I'll lead off by recognizing myself for five minutes. Mr. Thierer, thank you very much for your testimony. You had mentioned you gave a pretty comprehensive list of the ways that we need to act to ensure continued American leadership in AI development, and one of the things that you mentioned that caught my attention was the nearly thousand pending bills in state legislatures across the country on the topic of AI regulation. Could you talk a little bit more about why you think that that's a threat and what you think that we in Congress need to do about that?
Adam Thierer:
Absolutely. So Mr. Chairman, I know this has been a priority of yours and in the House AI Task Force report, this was something that was highlighted specifically both in this section on federal state relations, but also in the section on small business where the House AI task force bipartisan report basically made clear that the growing complexity of all of the differing definitions and regimes for AI regulation could eventually become a formidable barrier to new types of entry and the so-called rise of Little Tech or open source AI systems. I think that's a chronic problem when you have nearly a thousand, as of this morning, 958 state bills or total AI bills, most of them being state are pending. They haven't all passed yet, but a lot of activity, and again, a lot of different definitions of even how to define AI. If you're a small business person in this country and you are making an attempt to break into a marketplace, but you're confronted with that many different regulatory regimes and all that red tape that is going to really set back your ability to become a leading edge competitor on the fly to compete with the things like DeepSeek and others that we see coming out of China for all of the restrictions that Chinese AI operators face, a lot of the smaller entrepreneurs have a lot of leeway to go out there and do things in a really flexible, nimble, low cost fashion.
Rep. Jay Obernolte (R-CA):
So, how should we in Congress respond to that?
Adam Thierer:
I think you've teed up the idea, Mr. Chairman, in the past of some sort of preemption. This is squarely interstate algorithmic commerce and speech we're talking about with a lot of this stuff. And we in the 1990s had a bipartisan agreement with this Congress, which was Republican at the time and the Clinton-Gore administration, to have a national framework be part of our Telecommunications Act of 1996, and then our 1997 framework for global electronic commerce, and then even a moratorium. We had a year after that with the Internet Tax Freedom Act of 1998. That was a really wonderful model, established a national framework, said here are some rules that we're going to set for the road about how it works. The whole process has now been inverted. The states are driving, and Congress is waiting. Congress needs to step in and either set some clear guidelines for how the states do these things, or, in my opinion, we should have a moratorium on aggressive forms of new AI regulation to make sure we don't shoot ourselves in the foot as this race with China gets underway.
Rep. Jay Obernolte (R-CA):
Sure. Well, I mean, obviously, I very much agree with you. We had a whole chapter on preemption in the AI task force report, but one of the points that we make is we can't preempt something with nothing, which means that we need to get busy in enacting some of the recommendations in that report to create a comprehensive framework for AI governance at the federal level. And then I think we'll have the moral authority to preempt what the states are doing and to set what the guardrails are, because there'll be a role for them as well. I want to thank you very much for that, Dr. Stoyanovich. You've mentioned the importance of the National AI Research Resource, and obviously, I'm going to give you the shameless opportunity to plug our bill here. We've been trying to codify NAIRR. I was pretty disappointed that we couldn't get that across the finish line, especially as a legacy to our colleague Representative Eshoo from California last year. We're trying again this year. Why is getting this done so important?
Dr. Julia Stoyanovich:
Thank you for this really important question. I'm a big proponent of the NAIRR because I see a lot of enthusiasm for the resource in the research community in terms of both our increased access to research compute to data sets, as well as potentially to some training mechanisms that the N will provide. And I use the NAIRR also in my own research, and it has enabled me and my group to do things that we could not have done specifically on assessing large language models like the ones that we're talking about today. It's absolutely crucial for us to make sure that the near exists and that it's developed at the full scale. The NSF has been running it in pilot and they have been doing an amazing job considering the limited resources that they have had, but we need to really have access to the near at full capacity. And I think that it's important for us to think also about how small and medium-sized businesses can benefit from the near and furthermore, how we can strengthen its role as a resource for training and, upskilling, and reskilling in addition to its primary role, which would be to support academic research. In short, we simply cannot compete against China if we don't have a resource like the NAIRR, and there's a lot of enthusiasm in the community.
Rep. Jay Obernolte (R-CA):
Well, thank you. We'll continue to push on that, and hopefully we can get that across the finish line this year. I have about a dozen other questions, but I'm going to set a good example as the leader and wrap it up. I see my time's expired. I'll now recognize Ranking Member Stevens for five minutes for her questions.
Rep. Haley Stevens (D-MI):
Thank you. Another exciting and thrilling hearing. Mr. Allen, can you just talk to me about where you work? What is the Center for Strategic International Studies and this Wadhwani center that you run? Go ahead.
Gregory Allen:
Yes. The Center for Strategic and International Studies is a think tank based in Washington, DC.
Rep. Haley Stevens (D-MI):
Who funds 'em?
Gregory Allen:
I believe about 10% of the funding comes from the US government, and then another share comes from philanthropic, and then another share comes from corporate donors.
Rep. Haley Stevens (D-MI):
Okay. Okay. And then what's this Wadhwani?
Gregory Allen:
So, CSIS includes more than 35 separate programs that are organized into four departments, and the AI center is one of those programs. So, at CSIS, there's a ....
Rep. Haley Stevens (D-MI):
Is that a person? Is Wadhwani a person?
Gregory Allen:
Yes. And he is the anchor donor of our center. His name is Romesh Wadhwani, and he is the founder of Symphony AI. He is one of the origins of AI research, going back to Carnegie Mellon many decades ago. He's an American citizen, and this is part of him, sort of giving back to his country. I should note that one of his many virtues is that he's one of very few Silicon Valley tech billionaires who doesn't make any money in China.
Rep. Haley Stevens (D-MI):
Yeah. Well, that's exciting, and maybe we can talk to him at some point, and I'm sure you talked to him a lot about your research, and you always want to be able to have people doing pure research and coming up with these ideas. You were saying that we were lacking the will, and that caught my attention. So, how do you think that should have rolled out when we were putting those export controls on? Should we have made it closer to the date, like caught 'em by surprise a little bit more? Yes.
Gregory Allen:
I would describe the current approach that we have taken as there's an interagency debate, and the debate is should we have a surprise attack or should we have no attack? And then they compromise on attack without the surprise, which is the worst of all the choices that you could have made. And we make that choice again and again and again. So what I would say is that at this stage, there are costs to our export control policy. It's not fun to have an aggressive export control policy, but we are incurring all of the costs of a maximalist aggressive export control policy, and we are only incurring a fraction of the possible strategic benefits because of the way that we are going about execute it.
Rep. Haley Stevens (D-MI):
Because of the rollout? Is that what you're saying? Because of the rollout?
Gregory Allen:
Only one of the failure modes that I could have talked about. But yes.
Rep. Haley Stevens (D-MI):
Because we met with ARM overseas, actually, they got some people with Michigan there, but we were on the China committee and we were with ARM. And ARM was really pushing on these export controls for some of their products and some of the Chinese products that were going to go into our manufacturing or our AI. And they're saying they could still be observing if we allow these Chinese, it's not…
Gregory Allen:
Just to clarify, you're saying ARM was in favor of the export control?
Rep. Haley Stevens (D-MI):
Yeah, they wanted export, and I don't know if we ever did it. And it caught my attention because they wanted to block CCP technologies from coming here because it was open source, per se, and it wasn't as high quality. And obviously, this stuff is quite complicated. I mean, I've spent a lot of time in the semiconductor space for a long time, by no means a PhD expert, but I appreciated your articulation and Mr. Fist too. We want to talk to you about this AI Safety Institute. I don't know if you've heard about that with NIST, and what would the impact be on our ability to influence global standards if we were to stop participating in the international network for AI Safety Institutes?
Tim Fist:
Yeah, thank you for the question. So what I will say is I think NIST is critical within the AI space. I think there's two roles that NIST and the AI Safety Institute can fulfill.
Rep. Haley Stevens (D-MI):
And they're low cost, by the way. They're critical, but they're low. We give them just a little drop, pennies on the dollar.
Tim Fist:
Indeed. Yeah. And AI is only a small fraction of the work that NIST is currently doing, as you know. But yeah, I see there being two key roles that NIST and the AI Safety Institute can play. One is this standard-setting exercise. So, sort of, creating standards that allowed technologies to be from AI to be integrated into the American economy and influencing those global standards on the world stage is obviously important for creating an export market for those technologies too. I really see it as a path to speeding up diffusion and adoption of US technologies built on good standards. The other is this role that I talked about in my testimony, which is we need some sort of capacity to measure and evaluate the capabilities of AI systems so we can do more proactive policymaking. I think DeepSeek unfortunately caught a lot of policymakers by surprise, but it's not like it should have been a surprise. As Greg mentioned earlier, the research was available, we sort of had access to it.
Rep. Haley Stevens (D-MI):
Thank you. Thank you so much.
Rep. Jay Obernolte (R-CA):
Okay. I'll now recognize the Chairman of the full committee, Congressman Babin, for his questions.
Rep. Brian Babin (R-TX):
Thank you, Mr. Chairman. Appreciate it. One of the challenges in public discussions around emerging AI companies like Deep Seek is separating the fact from fiction. Mr. Thierer, I wanted to ask you the first question. Did DeepSeek develop its models for just $5.6 million? Is that true?
Adam Thierer:
That's what people believe. We don't know the exact numbers because, of course, they won't share them. But yes, that's the estimate. People have put out there five to six.
Rep. Brian Babin (R-TX):
But you can't verify that.
Adam Thierer:
There's no way to verify
Rep. Brian Babin (R-TX):
That, right? It's
Adam Thierer:
Impossible.
Rep. Brian Babin (R-TX):
And Mr. Allen, did DeepSeek use Chinese chips or did it use US chips? We're hearing rumors there either by acquiring them before export controls took effect or through evading those export controls.
Gregory Allen:
Mr. Chairman, if you don't mind, I'd like to also address the previous question.
Rep. Brian Babin (R-TX):
Sure.
Gregory Allen:
The cost when you're doing drug discovery as a pharmaceutical company is not just the cost of the clinical trial that worked. It's the cost of all the clinical trials that didn't work right. You're searching for what is the right drug, and that is the same with DeepSeek. That training run, that was for the training run on that AI model that worked, but you had to do a bunch of experiments that didn't work to find that one that did work. So I actually believe DeepSeek when they claim that the cost of training that AI model was $5.6 million. But was that the cost of their AI training infrastructure? No. Was that the cost of all the experiments that it took to lead to that success? No. I would estimate that was in the hundreds of millions or billions of dollars.
Rep. Brian Babin (R-TX):
Okay. And then, if you'll answer my question about the chips, how did they acquire 'em?
Gregory Allen:
Yes. The best reporting on this topic right now is that DeepSeeks AI infrastructure includes 10,000 H 100 chips, 10,000 H 800 chips, 30,000 H 20 chips, and those are all American chips, but only one of them has always been illegal to export to China, and that is the H 100. Now, this estimate, which comes from semi analysis citing sources that they've talked to in China's supply chain, I find it to be a credible projection. However, I also believe it is credible that Deep Seek did train this specific model using the H 800 chip, which was a chip that Nvidia specifically crafted for the Chinese market as a way to evade US export controls by degrading one aspect of technical performance below the export control threshold. And this is another instance in which we took far too long to export, sorry, to update our export controls in the face of new developments.
So there was a year-long period in which it was legal to sell these very high-performing H800 chips, and Deepsea bought a lot of those chips during the period in which it was able to do it. And just because you make something illegal to sell doesn't mean everything that you've already sold magically disappears. So when you look at Deep Seek's progress, you are really looking at the lagging impact of the poor design of the first tranche of Biden administration export controls, and the year that it took them to update the technical performance specifications between October 2022 and October 2023. But as the CEO of Deep Seek has said, these export controls are still having a significant impact on his company, on the Chinese AI ecosystem; it could just be a much larger one if we were doing a better job.
Rep. Brian Babin (R-TX):
Thank you. Mr. Fist, does DeepSeek suppress information on certain topics? Can users trust that their data will be protected from foreign exploitation?
Tim Fist:
I haven't seen a super systematic analysis of this, but there have now been dozens of reports that it does seem to suppress information on things like Tianmen Square, the Uyghur population, and other examples of classic CCP propaganda.
Rep. Brian Babin (R-TX):
Okay. Thank you. And then let's see. The Stanford Institute for Human-Centered Artificial Intelligence recently released their 2025 AI index report, which indicated that private investment in US AI was 109.1 billion in 24, more than 10 times the amount invested in China, 9.3 billion earlier this year. The Stargate Project announced a 500 billion investment in American AI. Mr. Thierer, what is more important for US leadership and AI, federal funding or a permissive environment that allows for private sector investment and innovation?
Adam Thierer:
Yeah, we obviously need that private sector investment to be high, as you just reported, congressman, and basically the number you cited there. What's even more exciting about that being 10X higher than what China's invested in terms of private AI investment is that that's over 10 10-year period. It was only 4X. We were averaging about four times as much as China. That's still great news, but last year it was almost 11 times as much, actually. So we need more of that. And what yields that kind of private sector investment is a more permissive, permissionless innovation type environment where we encourage innovators to go out and do bold things. Exactly.
Rep. Brian Babin (R-TX):
Thank you. I yield back, Mr. Chairman,
Rep. David Rouzer (R-NC):
The gentleman yields back. Ms. Lofgren, you're recognized.
Rep. Zoe Lofgren (D-CA):
Thank you very much. This has been a useful hearing, and I appreciate the testimony of all the witnesses. It's interesting. Mr. Thi, you talked about the role of states, and it reminded me of the issue we in the last Congress where Congresswoman Eshoo and I played a very active role in discouraging California from preempting in a way that would've been very adverse. And ultimately, we did succeed in getting California to back off. And it's worth noting that they have now come up with a framework that really matches the objections we made. So the states have a role to play, and we have a role to play to keep them from doing something that is adverse to innovation.
I have a concern I will say about some of the way we have developed our export controls. I am for strong export controls on hardware and chip manufacturing in China, and I think the rollout has been lagging in some cases, and that is something we should deal with. I'm also concerned, however, that we are not currently engaged as a nation in multilateralism. And if we impose export controls with US only and international companies have the capacity to sell to China, the essential, the equivalent, we haven't really achieved what we're trying to do. And so, for example, I've just learned that ASML has just announced they're going to build a chip manufacturing in China for the first time. So, Mr. Allen, what should the US be doing to support multilateral controls on China's access to chips and chip equipment?
Gregory Allen:
Well, thank you so much for that question, and I think it's just worth pointing out. The overriding logic is very obvious here. Why would you not sell them chips but sell them the equipment they can use to make those chips by themselves? Right. One without the other is a terrible, terrible strategy. And the challenge that we face is while the US is the overall leader in semiconductor manufacturing equipment by a large margin, we're also not alone up there at the top. And in particular, the Dutch and the Japanese industries are both quite strong, and other sub-component players are also very strong, like Germany, which makes some critical technologies here. So the Biden administration pursued a multilateral approach very aggressively and at some cost; it takes time to make multilateralism work. There are options for going unilateral, such as the imposition of the foreign direct product rule, which basically takes advantage of the fact that, for example, some key components of a SML technologies are made in the United States, so you apply US export controls to that content of the machine. Or more recently, what the United States did is point out that all semiconductor manufacturing equipment, even Chinese semiconductor manufacturing equipment, includes inside IT chips that were made using American semiconductor manufacturing equipment. So there are options, but…
Rep. Zoe Lofgren (D-CA):
We have competitors for that as well.
Gregory Allen:
Well, I think the key point here is we can go the unilateral option, but that is incredibly imposing upon our allies and they have to be willing to do that because even for the things that only America sells, Japan usually has a company who could get into that business if they want it to. And so if we impose export controls and our allies do not impose matching export controls, the cost of this strategy, the efficacy of this strategy, the cost goes way up, the efficacy goes way down.
Rep. Zoe Lofgren (D-CA):
Thank you very much. AI models improve with data, and access to diverse high-quality data sets is a strategic advantage. If DeepSeeks models are collecting data from American users, this could serve as a pipeline for the Chinese to improve its AI capabilities. And I was very interested, Mr. Fist, in your comments about the agents or back doors, what technical countermeasures, if any, could be in place to prevent adversarial data access? And by the way, also the backdoor issue that you mentioned.
Tim Fist:
Yeah, thank you for the question. So a lot of the measures might just look like standard data privacy regulation that you might put in place if they're at the levels of companies or individual jurisdictions on the technical side. Yeah, I think the key problem here is just actually understanding why these models behave the way that they do. So I mentioned AI interpreted really research, which is essentially opening up the black box of an AI models digital mind and peering inside and trying to develop a mechanistic understanding of why it does the things it does. I think this research is going to be critical not just for detecting when an AI system developed by an adversary has a back door, but also for making our own models more reliable as well, and sort of competing on global markets.
Rep. Zoe Lofgren (D-CA):
I'll just note that this committee, on a bipartisan basis, advance quite a few important AI bills in the last Congress that were never put up for a vote on the floor. So I'm hoping that we have a better opportunity for our bipartisan approaches to actually become law in this Congress. And I yield back.
Rep. David Rouzer (R-NC):
Representative Issa.
Rep. Darrell Issa (R-CA):
Thank you, Chairman. This will probably go to a couple of you, but Mr. Thierer, I'm going to start with you. It's fair to say that labor knowledge investment are the three things that you need to add to our current leadership position if we're going to stay where we are or even increase what was earlier described as now a sub-one-year lead. Is that correct?
Adam Thierer:
That's exactly right, Congressman, and I'll just point back to the excellent work that this House AI task force did on this issue and its chapter on labor, STEM, and education.
Rep. Darrell Issa (R-CA):
And Mr. Fist, I'm going to ask you the other half of the question. When you talked about agents, and all of you were talking about it when Sputnik or Sputnik went up, it was basically a piece this big that went beep, beep, beep. It was not a step ahead of us, but it was a wake-up call that the step was just behind us. Is that fair to say from a standpoint of history? And if so, then your comments on agents and espionage are the other two elements that I would have to put in addition to labor knowledge and investment. Is that fair?
Tim Fist:
I think that's fair, yes.
Rep. Darrell Issa (R-CA):
Okay. So from a standpoint of this Congress, this Committee and this administration, is it fair to say that we have to have a statement of long-term policy that our allies, including those that we would use BIS and other entities to keep from giving labor knowledge? We can't stop investment with China, but we can certainly have something to do with the labor, and the knowledge, and the access to espionage.
Tim Fist:
Sorry, could you clarify the question?
Rep. Darrell Issa (R-CA):
Well, it's fair to say that we have to have a long-term strategy, and it has to be stated if we're going to achieve that, if you will pull ahead of what DeepSeek showed us is a Sputnik.
Tim Fist:
Yes, I very much agree.
Rep. Darrell Issa (R-CA):
Okay. Mr. Thierer, am I pronouncing that close enough? If what we need is in fact to beat China, and China has 1.2, 1.4 billion people, and 300,000 of 'em are currently studying in the us. In fact, do we need to look at our policy toward educating Chinese in no small portion in the curriculum that they take back to use against us in competition?
Adam Thierer:
Well, I believe that brilliant people want to come here and study and work, that's wonderful for America. We did this with the Soviets, we brought their best and brightest here, gave them opportunity, and encouraged 'em to stay. I'd like to see that same policy for Chinese students.
Rep. Darrell Issa (R-CA):
You hit the major point that I was getting to, encouraged to stay has to be part of that policy, or we are simply helping 300,000 disproportionately STEM students take back with them both the overt knowledge and the covert knowledge that comes from being in our university. That's right. From a standpoint of investment, we'll just assume that we're going to continue to have a lead in investment in the US over China. But I want to go back to one more item, and it really goes back to the agent's question. If we do not have a pure AI, do we, in fact, almost guarantee that one agent could, over time, invite in as many agents as necessary? In other words, if you're not completely pure, you are in fact infected. And just like any other malignancy, it only takes one to bring in the others. Is that fair to say about being, if you will, foreign agent free?
Tim Fist:
That's for me.
Rep. Darrell Issa (R-CA):
Yes, sir.
Tim Fist:
Yeah, I wouldn't overstate the complexity of current AI systems. I think today's agents are fairly rudimentary. They can't really succeed at tasks that require long-term planning. But if you sort of plot this out and look at the improvement of these systems over time, yes, you could have agents behaving as an advanced persistent threat living in computer networks and form sort of a much larger cyber risk than those today.
Rep. Darrell Issa (R-CA):
And lastly, with my remaining time, this committee and the other committee I serve on judiciary and for that matter a little bit of energy and commerce. We've been looking at this whole question of what can be ingested and how do you pay for it. Is it fair to say that I heard unilaterally from all of you, China doesn't give a shit, they simply take it all including all of our data and they pay nothing for it. Is that fair to say? And as a result, we need to have a strategy, regardless of being paid, that in fact ensures that all data is available to be ingested by our learning models to compete with China quickly.
Dr. Julia Stoyanovich:
Yes. So if I may respond, yeah, it's not just China that doesn't give a shit. Our domestic models also ingest all of our data, and these are commercial models, and they don't tell us how they're using this data. So for us to be able to take the moral high ground and say that China shouldn't be doing things in particular way, we have to start with our own environment here.
Rep. Darrell Issa (R-CA):
And I'm closing by just saying I don't think any kind of a regulation obeyed by us is going to be obeyed by China. So the idea that an agreement here is binding on China, in all fairness, I don't buy that. Mr. Thyer, I'll take that as a absolutely shaking your head. Yes. Thank you for the extra time. Mr. Chairman, I yield back.
Rep. David Rouzer (R-NC):
Representative Subramanyam.
Rep. Suhas Subramanyam (D-VA):
Thank you, Mr. Chair. I was looking at the capabilities of DeepSeek and many others, comparing them to tools that we have here in the US, like ChatGPT, and it was quite impressive. I think there's some strengths and some weaknesses, but overall I was impressed by both the capabilities as well as the cost compared to the capabilities. And we've been talking a lot about an AI cold war today, and I would just put the question to the panel as short as you can: has China already passed us in this AI cold war? And if not, how close are they ago? Sorry, I missed it there.
Adam Thierer:
Well, I think obviously the race is tightened. I mean, two years ago when we had two and a half years ago when we had the ChatGPT moment, that was the reverse Sputnik moment for China. That's when they were surprised and taken aback, and we were clearly sort of in the lead at that point, but before that time, we all thought China was in the lead. You read the books from like 2017 to about 2020. It was all about what China was doing. And now we've entered that era again where China is caught up nimble, low-cost systems. They do this very effectively, especially with their utilization of open weight models. And so that is a very real, legitimate neck-and-neck race. These were two prizefighters going at it. So this is why our policies, we have to get it right now.
Rep. Suhas Subramanyam (D-VA):
Mr. Allen.
Gregory Allen:
I don't think it's fair to say that China is in the lead or that DeepSeek suggests that they are. It's more like a US model. Progress occurs a little bit more in a stair-step kind of a fashion than it does like an incredible hockey slope. And so they just sort of caught us on the flat part of the stair and caught up to where we were at the beginning of that upward step. And we are now about to take another leap with the Nvidia Blackwell chips and also with GPT-5 and the sort of next generation of models. And our hope is that the export controls are going to prevent China from taking that next step up the stairs. It's also worth pointing out that DeepSeeks advantages in cost, performance per cost. Those are all algorithmic and architectural in nature. There's no unique proprietary training dataset. There's no specialized computer hardware that they had that's superior. And so 100% of those advantages, as Tim and I have been talking about for a while, are now available to us companies thank you to basically do the same thing. Thank you, Dr. Stoyanovich.
Dr. Julia Stoyanovich:
I don't think it's to anybody's benefit to be engaging in the Cold War on this because people figure out a way to do things differently, cheaper, and better. I have no opinion in particular on expert controls, but an anecdote here is that how does one build rocketships in Russia versus in the US? In Russia, they make all kinds of lots and lots of cheap parts. They try to put them in, they don't fit, they throw it out, they put in another. In the US, we spend a lot of time on precisely building just the right part, lot of money. So what's the right paradigm? It's somewhere in between. We want to make sure that the academic community, the small scale entrepreneurs are able to help us build an environment where it's not a stepwise function where we have a lot of innovation, where we figure out how to do things better in a kind of free environment where we don't have to be either affiliated with a government entity or with a private company to be able to contribute to this work. Thank you. This is what distinguishes us in the us.
Rep. Suhas Subramanyam (D-VA):
Thank you, Mr. Fist.
Tim Fist:
Yeah, so just quickly, I would assess that we are approximately six months ahead in model capabilities and maybe two years ahead in hardware capabilities based on the most advanced chips that China currently has access to legally under export controls. And this, just to tease out a point that Greg made earlier, this is really important because if you can grow that hardware lead to more like five years, you're now leaning into trends like Moore's law and rapid design in hardware improvements, hardware design improvements, such that five-year lead can be a real exponential lead over time. So I think this highlights the importance of export controls.
Rep. Suhas Subramanyam (D-VA):
Yeah, no, I appreciate that and I agree with a lot of what was said. My concern is that some of our policies, especially in the past couple of months, are getting us to a point where not only is China catching up to us, but we're standing still or even moving backwards. We're talking about firing very capable AI experts in our federal agencies and cutting funding for research much as a research was actually the basis for a lot of our innovations in AI as a country. And then haven't talked about, talked a lot about export controls, but not a lot about how tariffs are actually going to make everything more expensive, including the infrastructure we need to improve our AI, and then national security policies, privacy policies, we talked about a talent war. I'm concerned right now that a lot of the policies that are being put in place are going to hurt our ability to stay ahead. And I think we are neck and neck right now, so I appreciate you all being here today, but I hope that we shift, and this administration shifts its priorities. Thank you.
Rep. Jay Obernolte (R-CA):
Gentlemen. Yields back. We'll hear next from the gentleman from North Carolina. Mr. Rouser, you're recognized for five minutes.
Rep. David Rouzer (R-NC):
Thank you, Mr. Chairman. AI is just an absolutely fascinating subject to me. It's kind of like space, I think about space and I think about what's beyond space, what's beyond that? I mean, it's never-ending, and AI has so much potential in so many ways. I just googled a question, in fact, and with the help of AI, I get this answer: approximately 54% of American adults read at or below a sixth-grade level, meaning that nearly half of the adult population struggles with basic reading skills. Is there a role for AI here in the education process? And not only that, but we need talent in order to advance in this space. Have y'all given any thought to that? I'm just curious.
Dr. Julia Stoyanovich:
Yeah, may I? Okay. So I do really think that we need to step up our literacy efforts when it comes to basic reading and writing skills, but also when it comes to AI literacy. And this is a point where federal investment is crucial. We don't have large-scale efforts right now that would help people understand what AI is, what it does, what happens to their data, etc. So this is a huge opportunity we're missing that is going to impact us today and in generations to come.
Rep. David Rouzer (R-NC):
Well, I think there's a huge opportunity in healthcare, too. You get one cure, you save trillions of dollars, certainly hundreds of billions of dollars, which obviously has positive impact as it relates to our deficits and our debt.
Adam Thierer:
Absolutely. Congressman, I have a multi-part series for the R Street Institute called AI and Public Health series, and it talks about all of the different types of amazing innovations that are coming about due to AI and machine learning and then specifically sites, the cost savings for our very expensive public health, our health system more generally. And so I think that's what you're going to there is we can find better efficiencies but also better cures, help people live longer, healthier lives.
Rep. David Rouzer (R-NC):
Any other comment on that?
Gregory Allen:
If I may? I just want to say that in both the health and in the education sphere, we should not assume that these benefits will occur if we just do nothing. I would sort of give an analogy here to smartphones in school, right? There is a way to use tablets or smartphones that is education-enhancing, but if you just sort of let things go the way they will, kids will probably just watch TikTok all day, and they will not be smarter at the end of the day. And I would say the same is probably true of AI. There are ways to use AI that will dramatically enhance the quality of education, but that's not an inherent feature of just throwing AI in there and assuming everything's going to go great. And in the same domain on healthcare here, I would say that the productivity improvements are so exciting in terms of coming up with candidate molecules for drugs in terms of identifying possible disease pathways that we sort of have to look at the rest of our health r and d regulatory ecosystem and basically say what are going to be the bottlenecks if we have a 10,000 x improvement in productivity over here, but we still have the exact same bottlenecks we've always had over here, then that 10,000 x productivity in candidate molecules is not going to lead to any faster pace of drug generation.
So what basically I'm saying is here is we should not assume that if we just let everything go, it's going to turn out fine. This actually requires action. And it's the same in the case of Sputnik, right? If we hadn't responded, the Soviets would have won the space race. It's not like it was our birthright to win. We had to work to win.
Rep. David Rouzer (R-NC):
And I think what you're getting to, you got to have the right regulatory framework. Going back to preemption, what are the key steps to making sure we have the right regulatory framework so that you have innovation, it's not stifled, but at the same time, you have guardrails in place.
Adam Thierer:
In 1997, I mentioned that the Clinton administration put forward the framework for global electronic commerce that many Republicans and Congress agreed with, which was a market, market-based idea driven kind of opportunity driven environment and ecosystem for the digital economy and the internet. And that worked wonders. You just look at the natural real world experiment we've had over the last quarter century on either side of the Atlantic between US and Europe. And you ask yourself a simple question, name Europe's leading digital technology innovators today, silence. We just can't. There's only a few. And the reality is the household names in Europe are American companies. Why? Because we got the policy prerequisites of growth and right for the internet. We can do it again for AI.
Gregory Allen:
If I may, I would just say my approach to the governance framework would be twofold. Number one is real regulation on the most worst case scenario type risks, right? We do not want AI making bio weapons not just within the reach of an evil genius, but within the reach of an evil moron there. I want hard regulation for everything else. I want probably more soft regulation. Basically, the government, like the AI Safety Institute, disseminating best practices, not draconian enforcing them, but making these resources available so that companies know what good governance looks like and have a lower cost to entry when it comes to implementing those types of maneuvers.
Rep. David Rouzer (R-NC):
Very good. Thank you. My time's expired.
Rep. Jay Obernolte (R-CA):
Gentlemen, yield back. We'll hear next from my colleague from California. Ms. Rivas, you're recognized for five minutes.
Rep. Luz Rivas (D-CA):
Thank you, Chairman Obernolte. Dr. Stoyanovich. One of the things that I've mentioned previously in this committee is the need for investment in our scientific and academic research ecosystem to fund the workforce and research for the future. The need and investments in our STEM education and workforce of the future is an opposition to what this administration has done, despite them touting the need for advancing the American AI industry. Instead of uplifting this work in the national interest, this administration has cut federal funding for research and fired science staff implemented tariffs in a chaotic way that has sent the stock market into free fall, making it harder not just for families to buy goods, but for AI and tech companies to buy the materials to build and create the physical data centers needed to power this technology and remove the previous administration's executive order on prioritizing AI safety, responsibility and fairness. Dr. Stoyanovich, in our competition with China, can you discuss how these policies and actions taken around AI safety and responsibility will actually set us back instead of advancing the American AI industry?
Dr. Julia Stoyanovich:
Thank you for this question, and I'm glad that we get to think about how to advance our national competitiveness not only with the kind of an isolationist lens where we're trying to protect the assets that already exist, but with the lens of participating and helping build an environment of international cooperation where we can all benefit from the kinds of advances in academic research, in entrepreneurship, in data assets, and also in figuring out what the right balance should be between transparency and privacy, protection and security. And of course, the United States is distinctive in its ability to create an inclusive academic environment where people come from all over the world and get to learn with essentially no strings attached and gets to make the world better for everybody. And I think that to maintain our leadership strategically in this important field of AI, we need to reaffirm this commitment to academic research, to education as well as to making sure that we bring everybody with us that this impact is not only on academia, but also that we invest in K through 12 education to include AI literacy as part of that conversation, and that we make sure that we also reach out to people who are adults today and lack some of that AI literacy and this helps them change professions, adopt an environment that changes, but also it helps protect our national security interests because people are going to understand better what the impact is of the data that they're sharing, for example, and what are the risks.
Rep. Luz Rivas (D-CA):
Thank you for those comments. I agree with you and I look forward to working with my colleagues to ensure that we have a research ecosystem and investments in education at all levels that allow us to continue our leadership in AI throughout the world. Thank you. And I yield back.
Rep. Jay Obernolte (R-CA):
Gentleman yields back. We'll hear next from the gentleman from Indiana. Mr. Baird, you're recognized for five minutes.
Rep. Jim Baird (R-IN):
Thank you, Mr. Chairman, and thank you, witnesses, for being here today. I always learn something when we have expert witnesses such as yourself. I guess my first question goes to Mr. Thierer, while protecting against foreign AI threats is crucial, excessive regulation or restrictive policies could slow down the US innovation and put American companies at a disadvantage. So, how can we strike the right balance between national security measures, ensuring that the AI innovation and entrepreneurship in the US continue to thrive?
Adam Thierer:
Well, thank you for that question, Congressman. And greetings from a fellow Hoosier. So I think the answer to that question is we first need to step back and understand what the nightmare scenario looks like. And that would be if the Chinese systems like DeepSeek and the successors and its other competitors become global standards across the world because they have created low cost nimble systems that, especially their open source systems that ultimately spread through concerted efforts like their digital Silk Road and the Belt and Road initiatives where they're actually trying to lure many other countries, especially in the global south, into a desire to be partners with them. And that's part of that AI axis that we talk about, the danger of that, and the spread of their sort of control regime through technological systems. So, American systems need to lead. We need to make sure that we're not shooting ourselves in the foot and that we become global standards just as we did in the internet age. But now we've seen a turn. We see Huawei and other companies becoming more predominant in the hardware side of things and offering investments to these countries. We need to make sure our companies are there first, so we don't restrict them too much in what they want to do globally.
Rep. Jim Baird (R-IN):
You always have a Hoosier welcome. So, anyone else? Anyone other witnesses have a thought on that issue on our national security?
Dr. Julia Stoyanovich:
Yeah, I do.
Rep. Jim Baird (R-IN):
Yes.
Dr. Julia Stoyanovich:
Yeah, so I think the only way for us to win at this and to make sure that our national security is preserved and our competitive advantage is preserved is through openness. We need to make sure that we figure out how open models and open innovation can be supported in this country. And open doesn't mean we give away all our secrets, and we give away all of our advantage. Openness means control because we set the rules for how data is shared, for how data is used, for what is disclosed, for whether we know how the systems were checked for safety, for correctness. Openness is key, and this is our advantage versus China.
Rep. Jim Baird (R-IN):
I can appreciate your thought there in terms of openness; it really stimulates innovation and knowledge from around the world. But on the other hand, it's difficult to control the data, how it gets out. And so that's a real challenge. I think
Dr. Julia Stoyanovich:
It's not impossible research needs to be done on this. We need investment in research, also, but it can be done.
Rep. Jim Baird (R-IN):
So if we do the right research, maybe we can develop the techniques to control that data. Is that right? Did. Yep. So I want to go to Mr. Fist and Dr. Stoyanovich, it is reported that Deep Seeq produce some technical innovations in the development of their recent models. So, what are those technical innovations, and how can the US use those technical innovations to our advantage and continue to innovate technically?
Tim Fist:
Yeah, so I'll quickly answer that one. So yeah, a lot of the technical innovations from Deepsea actually came out in the first model known as V three that came out in December. And these were essentially a range of innovations that made it a lot cheaper to use computer chips to train the model to a particular level of capabilities. So that represented an increase in what we call algorithmic efficiency. They were able to make more efficient algorithms that could make better use of computer chips. Those techniques that they openly published are now being adopted by US labs and around the world to improve our own efficiency. And there's also techniques such as this that US labs are using as well. So in general, this is a race to develop better algorithms and adopt other algorithms quickly. The part where they really behind is on the computer chips.
Dr. Julia Stoyanovich:
And then the other thing that I would add is that we know a lot more about what Deep Seek has done in terms of innovation than we do about the leading US-based models at the moment. People think that this mixture of experts approach, for example, that DeepSeek has used that allows them actually to train more efficiently and to do faster inference that everybody here also uses that. But there's no way for us to check this. So the biggest wake up call I think for our local models here was that DeepSeek is able to achieve comparable performance to us, but in a more open model. And this is what's really making things happen here.
Rep. Jim Baird (R-IN):
Thank you. And Mr. Allen, I didn't get to you, so I just say hello and say hello. Return. Thank you, Mr. Chairman. I'll yield back.
Rep. Jay Obernolte (R-CA):
We'll hear next from the gentlewoman from Delaware. Ms. McBride, you're recognized for five minutes.
Rep. Sarah McBride (D-DE):
Thank you, Mr. Chairman. It is an honor to be on the subcommittee with you and with all of my colleagues. I will say it's an honor to be on here with Ranking Member Stevens, although I did take exception to her comment earlier that Michigan, rather than Delaware, is the best state in the union. And thank you so much to our witnesses, Mr. Chairman. You've brought us together today because the United States and the world was rattled by news of a new AI model released by Chinese startup DeepSeek. This new model puts into question whether the United States is properly investing in AI to ensure we continue to be at the forefront of innovation. It's important that as we grapple with that question, we recognize the central role the US education institutions have in fostering innovation and growth from researchers to students to career scientists, people who have dedicated themselves to producing groundbreaking research by harnessing the power of science and technology.
I have met with students and faculty from Delaware who are concerned over the funding cuts and the instability. The current administration has fostered through its policies. I've heard from private companies concerned about the devastation the defunding of our educational institutions will have on their ability to recruit and build a strong workforce. And I've heard from students, both those born here and those who are studying here, that the demonization of diversity implicitly and explicitly insults their qualifications as though those two principles run in conflict. As a member of the Delaware State Senate, I was proud to advocate for funding that developed programming for young women and students of color interested in learning and refining their STEM skills. That workforce pipeline and the diversity within it is foundational for the US to compete and lead AI is the perfect example of a technology that benefits from diversity among those who are building and training it.
It's critical that the US continues to nurture young innovators and research from a variety of backgrounds to give us developed AI models range in input and output our country's diversity. And the unique capacity for us to build a diverse tech workforce is not just of symbolic importance, it's a strategic advantage for our country. The US can build tech that meets the needs of the world's diverse consumer base because we have the world's diversity here, but we need to seize on that asset. We undermine one of our competitive advantages when we demonize diversity, and we shoot ourselves in the foot when we defund educational institutions necessary to build that workforce and foster innovation. To continue this, we must fund our schools. We must fund research, we must support our innovators, and we must encourage curious minds of all backgrounds, both here at home and globally to bolster our domestic education and workforce pipeline.
It's unfortunate that the current administration's seems not to share that belief. So my first question is for you, Dr. Stoyanovich. You had touched on this a little bit before. Students from across the globe flock to the United States to learn from world-renowned experts housed at our educational institutions. Historically, we've encouraged competitive candidates to obtain their training here, and in return, they feel research and bolster our workforce pipeline, keeping us competitive. I'm curious how recent actions and funding cuts to our educational institutions could affect that pipeline. And how are educational institutions outside of the United States seizing on what could be a vacuum here because of funding cuts, or frankly, a message that we're not welcoming?
Dr. Julia Stoyanovich:
Thank you very much for this question, and I absolutely agree with the statement that you made that diversity is important, that being in an environment where you are judged on your abilities, your interests rather than on the color of your skin or your gender is what is conducive to creative thought to research and other types of creative thought. So very bluntly, the current instability in terms of research funding is making it so that we are no longer able to recruit as many doctoral candidates as we would've in other years. My own lab and other labs have lost in terms of yield. Quite a number of people who prefer to stay in Europe, for example, and pursue their doctorate degrees there or to do their post-doctoral studies there. And Europe is going to benefit, Western Europe in particular, tremendously from the instability here. Many students who would've studied here are staying in Asia, are staying in China, or in India. And this is going to have devastating impacts on generations of researchers, both in academia and in industry, in the United States. So we really do need to rethink our priorities. We're shooting ourselves in the foot here.
Rep. Sarah McBride (D-DE):
Thank you. And perhaps because of that, unfortunately, in 20 years, many of the tech innovators' names that people know will not just be Americans anymore, but folks from other countries living and working in other countries. Thank you, Mr. Chairman.
Rep. Jay Obernolte (R-CA):
Gentleman yields back. We will hear next from the Gentlewoman from South Carolina. Ms. Biggs, you're recognized for five minutes.
Rep. Sheri Biggs (R-SC):
Thank you, Mr. Chairman, for holding this important hearing, and thank you to the witnesses for being here today. So, applications from communist China have a history of collecting personal user data on a level that presents a national security risk. You only need to look at the scale of data collection from TikTok, and that Chinese companies are required to turn over personal data to the CCP at their request. DeepSeek's V3 and R-1 models exemplify this risk with DeepSeek overtaking American companies like OpenAI to be top-rated AI app on US App Store. So, to Mr. Thierer, does DeepSeek collect data beyond just the chat logs from its users? And does that data that's collected by Deep Sea present a clear threat to our citizens' privacy and our national security, and how can we mitigate that risk?
Adam Thierer:
Thank you for the question, Congressman. Congresswoman, we don't know the full extent of data collection on that side of things, but of course that danger is always very real. This is why some state governments and others have looked to make moves to stop deepsea from being utilized by government employees. And that's an understandable concern, but clearly that more study is needed on that important question.
Rep. Sheri Biggs (R-SC):
Thank you. And to Mr. Allen, the United States currently maintains a global leadership role in AI, and we heard from Mr. Fist a little bit about having the best AI, and a secure AI, and a faster engine, and thank you for that input. But Mr. Allen, what do you think the competition? We know it's intensifying, and China's catching up quickly. What are your primary factors that you think have allowed the US to maintain its lead this far? And where is China catching up to us the most rapidly?
Gregory Allen:
Well, thank you so much for the question. I guess I should point out, to begin, that AI is a very broad umbrella term. So, one can refer to AI really meaning large language models like ChatGPT, or one can refer to AI as an entire universe of computer science research. That includes everything from the kind of computer vision that powers autonomous cars to speech recognition to AIs that play chess and defeat world chess champions. So there's this entire field of artificial intelligence, and the reality is that the US lead is strongest in large language models, which is exactly what DeepSeek. Did DeepSeek come up with something that was near the world state of the art? Only six months after American companies had reached that sort of same level and in other fields of AI, China is also incredibly strong in autonomous cars. It is very much a neck-and-neck state of affairs, and there's a lot of advantages on the Chinese side in computer vision and speech recognition.
It is utterly routine for Chinese companies to have world-leading benchmark performance in these kinds of areas across this entire universe of AI. Really, China is pursuing an all-of-the-above strategy. America's number one advantage in all of this has been the strength of our venture capital ecosystem, our ability to create great companies, our ability to enable creative destruction of slow and old companies to make way for new and better ones. But I would also point out that even these kind of advantages are downstream of US government action. So, for example, while the vast majority of AI investment comes from the private sector, if you walk around OpenAI or you walk around Google and ask somebody who funded the research grant that put you through grad school, it's DARPA, it's the NSF again and again and again. So even that private sector advantage, those are the plants that came from the seeds that the US federal government planted.
That's why I do share the sort of concerns that others have expressed about are we continuing to plant the next generation of seeds? One final thing I'll say here is we've mentioned Sputnik maybe a dozen times on this, but nobody and Sputnik led America to do a lot of great things. We created DARPA, we created NASA, but we haven't mentioned my favorite thing that America did after Sputnik, which was the National Defense Education Act of 1958 when we said, we're going to triple the number of engineers that this company, sorry, that this country produces because we recognize that we're in a multi-decade competition where leadership and science and technology is critical. And if we could resurrect that playbook in the wake of DeepSeek, which by the way, I think the most impressive thing about DeepSeek is the fact that all of the technical staff that developed it did not work for US companies and was not educated in American universities. It was an all-China team, and it was a very impressive one, that is downstream of their education policy, which was every bit as bold as the law I just described in the wake of Sputnik.
Rep. Sheri Biggs (R-SC):
Thank you very much. And I yield back
Rep. Jay Obernolte (R-CA):
Congressman yields back. Well, we'll hear next from Congresswoman McClain-Delaney, you are recognized for five minutes.
Rep. April McClain-Delaney (D-MD):
So thank you to our chair and Ranking Members for organizing this hearing and on artificial intelligence, US competitiveness, and the role of DeepSeek in the development of AI models. And I want to thank each of the witnesses for your incredible testimony. So I proudly represent Maryland's sixth district, and we are home to the Institute of Science and Technology and its campus in Gaithersburg, Maryland, as well as, you know, there's one in Boulder, Colorado, as well. And many of you highlighted in your testimony that NIST is one of our premier agencies to enhance US competitiveness in AI through research and evaluation, and as noted by several of my colleagues, for highlighting the value of NIST. And I might also lift up other agencies in my district, like NIH, the National Institute of Cancer, and Fort Dietrich, which also do research and use AI, and of course, uplift how penny foolish and pound foolish it is to look at these research cuts and how they impact our US competitiveness.
So NS work in AI, which really ramped up during President Trump's first term, has proven to be a critical enabler for the US government and US industry to maintain AI leadership. And it released an AI risk management framework, which I think several of you spoke to, and the Department of Commerce protection of national Security has been bolstered through American leadership through a non-regulatory approach. This is something I know a little bit about. Since I helped lead the National Telecommunications Information Agency and I worked across with our sister agency NIST to ensure that we had methods to allow organizations to use AI in ways that ensure that technology is trustworthy, transparent, and unbiased, with increased public interest in AI and LLMs. We need more meaningful bumpers and safeguards to protect consumers from threat actors that would exploit AI powerful data collection and analysis capabilities. And I believe we must foster American innovation and build trust in our AI systems.
And I have to reiterate the word trust because we must foster standards that keep Americans safe while remaining globally competitive. So this is a pivotal moment in AI technologies in terms of American leadership, and it seems like we are failing to meet this moment. I will note that more than 70 staff at NIST have been fired this year. And while some have been reinstated, we're very concerned that more cuts may be on the horizon there and at other research agencies, and also our universities, with DeepSeeks' rapid growth and development quickly threatening US leadership. Have a couple of questions. I just really want to talk to Dr. Stoyanovich. In your testimony, you mentioned that DeepSeek models have raised significant concerns related to privacy, data governance, and national security. Your testimony also discusses a security-focused approach that maximizes economic potential and continues to foster innovation. Can you discuss the specific ways that we can utilize NIST's leading world expertise and standards in R&D to safeguard consumer privacy, ensure national security priorities, and support technical innovation? And I might add one more thing. My colleague Congressman Biggs also talked about some of these same issues with respect to TikTok. So I'm just curious about your thoughts about that.
Dr. Julia Stoyanovich:
Thank you very much for this question. NIST has a pivotal role to play in making sure that we both establish a data governance regime that is appropriate here, but also that we think very carefully about evaluation. And this is something that I haven't mentioned today, and I want to bring up that term. S,o how do we know whether a system works, whether an LLM works? It's the same as with Roomba, the smart vacuum. We have to have criteria according to which we assess whether it works. And what are these criteria? LLM evaluation is really, really complex because we want these models, we use them for general set of general purposes. So we want them to produce outputs that are correct, that are safe, that are not misleading, that are courteous, right? There are lots and lots of things that we want from these systems. And so there are many ways in which we need to check whether they deliver on these promises.
And one of these ways is benchmarking. And there are already robust benchmarking efforts. But this is something where NIST should really play a stronger role because as soon as a benchmark is published, it gets saturated because everybody is going to try to perform well on that benchmark, right? So it's a moving target. And in addition to kind of mechanistic benchmarking, we also need to make sure that we're incorporating softer criteria like human judgment about whether a response was appropriate, not only that it was correct, and it didn't disclose something. So, in short, we need to pay a lot more attention to evaluation and benchmarking in particular. And this is a perfect job for NIST, and it needs to be done by a third party. It cannot be done by a vendor.
Rep. April McClain-Delaney (D-MD):
Thank you. Mr. Fist, can you explain this role in measurement science and how the agency improves US competitiveness and what downsizing the agency staff might do to tackle these problems and maybe even the unique role of NIST in collaborating with industry?
Tim Fist:
Yeah, so just repeat the points just made in that NIST plays this essential role in measurement and evaluation, especially for AI in AI. I would just highlight that this is an extremely nascent field. We don't really understand how to properly characterize their behavior and properties of AI models and the kind of basic measurement work that NIST does is really critical for that.
Rep. April McClain-Delaney (D-MD):
Thank you. I yield back.
Rep. Jay Obernolte (R-CA):
Gentlemen yields back. We'll hear next from the gentleman from Utah. Mr. Kennedy, you're recognized for five minutes.
Rep. Mike Kennedy (R-UT):
Thank you, Chairman Obernolte, and I'm excited to be here in this committee, the research and technology subcommittee because keeping American innovation at the forefront is not only an economic imperative but also a national security one. I'm proud to announce that I did pass a few weeks ago my first bill through the House, the United States Research Protection Act, which seeks to stop China from stealing our innovations and beating us on developing these kind of critical technologies. And it's important that we learn from DeepSeek and how the United States can head off future technological surprises. I have a question for Mr. Allen first, if you'd be willing to entertain that. Could you explain the threat differences between the DeepSeek model and the DeepSeek app so that the American people are aware when using this technology?
Gregory Allen:
Certainly, and thank you for the question. DeepSeek is an open weights model, which means that if you would like to actually download the weights of the model and run that as a local instantiation on your own computing hardware, you can do that. Alternatively, if you would like to access it from DeepSeeks computing resources through their app, you can do that as a software-as-a-service type indication. So that's the difference between the model and the app. However, both are delivering you ultimately the same product if you download the most recent version of the model. But older versions of the DeepSeek model are also available to be downloaded. And this is how you can determine things like the fact that it was distilled using data from OpenAI, among other things.
Rep. Mike Kennedy (R-UT):
Is there a privacy consideration, though, between the app and the model?
Gregory Allen:
The app would have the more significant privacy considerations just because every single user interaction is very likely logged and then used as future training data or for whatever use case DeepSeek has in mind. It is possible that the model could have different types of risks, such as cybersecurity risks, and some of these risks are present and don't appear to have even originated in DeepSeek. It is possible to, for example, put data poisoning attacks out there on the open internet, and anybody who's just hoovering up the whole internet to train their model will fall prey to these attacks, which then lead latent cyber threats in the capabilities. Indeed, this has been demonstrated in DeepSeek’s case.
Rep. Mike Kennedy (R-UT):
Can you tell me more about the speech censorship aspects of DeepSeek, the app versus the model?
Gregory Allen:
So both the app and the model do take into account censorship-type concerns because DeepSeek, as a Chinese company, is subject to Chinese regulations, which relate to politically sensitive and propaganda-sensitive topics that China cares about. So both of those are going to be reflected in the models, but the app will always have the most updated version of that. So if anybody ever discovers a way to make the DeepSeek app behave in a way that the Chinese government doesn't like the app, you can bet will be updated much faster.
Rep. Mike Kennedy (R-UT):
Thank you for that clarification. That's excellent. Mr. Fist, I'll go to you next. DeepSeek reportedly trained their advanced AI model at dramatically lower costs. What does this imply about America's current approach to funding, training and deploying AI systems? And what should policymakers like me do to support more cost-effective innovation domestically?
Tim Fist:
Yeah, thank you for the question. Yeah, so I will just highlight a point that I believe Greg made earlier in that the costs that DeepSeek are reporting to us at cherry pick's number relating to the…
Rep. Mike Kennedy (R-UT):
That’s shocking, the Chinese Communist Party are not going to tell us the truth about what they really spent on this. But besides that point, what else do you have to say?
Tim Fist:
Yeah, indeed. So there's 5.6 million number a lot lower. It still is very impressive that they were able to train at this level of efficiency, and it means that we need to figure out how we compete on the efficiency metric as well. This is really important. US firms are innovating on these kinds of things all the time. I think one area that the federal government can help support is this kind of basic research into new hardware architectures that don't currently have near-term commercial promise but could unlock a huge breakthrough in efficiency. So I'm talking about photonic chips. So, using light as a medium for transmitting data or doing calculations instead of electricity on chip memory is sort of like a very interesting paradigm in this field as well. There's a number of basic hardware technologies that we should be investing in to figure out how we can get that next leap probative of efficiency above China.
Rep. Mike Kennedy (R-UT):
Thank you for that answer, photonic. Haven't heard that word for a while. Thank you very much. It sounds like a Star Trek. Mr. Thierer, with my remaining time, just a brief question. Can you tell us more about what regulation states are pursuing and how China's AI development efforts should affect state and federal policymaker attitudes towards regulating AI?
Adam Thierer:
Well, it's funny you should ask that, congressman, because in Utah there's actually a pretty good model. Utah passed a really good model for AI. That's
Rep. Mike Kennedy (R-UT):
Why I'm asking.
Adam Thierer:
That's a good setup. Well, I just think that that model that Utah's pursued is really smart because it obviously focuses on first studying the issue, being patient, forbearing from overregulating, focusing on agency capacity, and existing governmental uses of AI. And then also has that experimental sandbox idea built into the back end of it, where you actually encourage innovators to come and talk to state policy makers to figure out how they can maybe pursue exciting projects in a more collaborative way.
Rep. Mike Kennedy (R-UT):
Thank you very much for pointing out the regulatory sandbox, which I was part of in the state legislature a few years ago, and we're proud of that. Utah is a great model for this. And Mr. Chair, thank you very much. I yield back.
Rep. Jay Obernolte (R-CA):
Gentlemen, yields back, we'll hear next from the gentleman from Rhode Island. Mr. Ammo, you're recognized for five minutes.
Rep. Gabe Amo (D-RI):
Thank you, Chairman Obernoltei, for holding today's hearing on artificial intelligence. Last August, I held a round table on AI back in my district at Smithfield High School on the first day back from summer break. And my visit to Smithfield made it clear to me that educators and students are using AI inside and outside the classroom in our community at a far more accelerated rate than I had previously known. And therein lies an opportunity; nurturing the next generation's desire to learn is key to beating China in the AI innovation race. We can't out-innovate DeepSeek without that effort. And since 2018, China has established remarkably over 2000 undergraduate AI programs at more than 300 universities. We need to keep up, and frankly, I think we need to start earlier. That's why last Congress, I joined Congressman Kane in introducing the bipartisan Lift AI Act legislation that would direct the National Science Foundation to develop AI literacy curriculum for K-12 students. I'm proud, our bill that was reported out of this committee last year. And so my first question for Mr. Fist, China's making substantial investments in AI education, as I noted, and my question for you is, how are these investments having an impact on AI innovation in China? And additionally, what can we do here in the United States to ensure that our domestic workforce, but also pipeline, starts where we need to, so we can compete?
Tim Fist:
Yeah, thank you for the question. So we've talked today a little bit about China's massive programs to build up its STEM workforce, very ambitious investments, which my colleague Greg mentioned. This is an area of expertise for me personally in the United States, what we have done work on is how do we attract superstar talent from overseas as well. So, this is really America's superpower is being able to bring the best in the world here. If you look at numbers on the Forbes AI top 50, so top 50 startups in the United States from last year, 70% of those 70% had at least one immigrant co-founder. So I think finding ways to attract and retain top superstar talent needs to be a complement to education activities.
Rep. Gabe Amo (D-RI):
Thank you for that. And look, I think you've heard it from some of my colleagues already, but what we've seen out of the Trump administration thus far seems to be rolling in the opposite direction, right? The cancellation of STEM research grants, the fund AI training, the chilling effect for international students, the same folks we're trying to attract in STEM by deporting folks on a whim and casting away guidance to stop our AI models from perpetuating racial bias. These are things that aren't helpful and, in fact, I believe will give China an upper hand in the innovation race by scaring off the next generation of students from entering the STEM and AI workforce. I have a particular interest in the impact on healthcare. As you know, AI has been deployed effectively to support medical innovation and can do so much more. So the question that I have for you, Dr. Stoyanovich, is, can you give me a sense of your thoughts on where these actions have impaired researchers, particularly in fields like healthcare, to make us more innovative than we are today?
Dr. Julia Stoyanovich:
Thank you. This question, and also if I may, I want to add to the point that Mr. Fist made about attracting superstars.
We shouldn't focus just on attracting superstars. I include a bibliography in my written testimony of the DeepSeek papers, and you will see that most of them have over 100 authors that are all the product of the Chinese education system in AI. So let's not forget that we're now in a place where no single person, no matter how brilliant, can build the system from the beginning to the end. We actually need all hands on deck here. And that's why public literacy, embedding AI literacy into K through 12 is as important as ever, if not more important. So regarding the impacts on healthcare, indeed, we're seeing lots and lots of use cases that are making their way into clinical practice, even of AI, including also large language models. And healthcare is this perfect domain that raises all of these privacy and security concerns that we talked about.
But where there is also tremendous benefit potentially that these systems can give us in terms of efficiency, for example, so that doctors don't spend all their time transcribing encounter notes and instead are thinking strategically about diagnosis and treatment. But what's required for us to be able to put these systems into safe use specifically in healthcare is interpretability mechanisms as an enabler of kind of an accountable decision-making regime. Because when a doctor is advised by an LLM about a diagnosis, let's say, and it turns out to be incorrect, they can just say, Oh, the machine told me so, right? It's still their responsibility. So we have to build these machines in a way that communicate why particular recommendations are made, what is the certainty and the uncertainty? What was the data that was used? Is it appropriate in terms of its demographic composition, for example, for this specific hospital? So this really challenges us to develop novel mechanisms for privacy and data protection, for explainability, for auditing, and evaluation of these models. And we need large-scale investment, including also that would come through the NIH for example, where we are seeing tremendous cuts.
Rep. Gabe Amo (D-RI):
Thank you. My time has expired. And yield back.
Rep. Sheri Biggs (R-SC):
Thank you. The chair now recognizes Representative Forche from North Carolina for five minutes.
Rep. Val Foushee (D-NC):
Thank you, Madam Chair, and thank you to the witnesses for being here. With us today. I'm proud to have served during the last Congress on the bipartisan AI task force, along with several of our colleagues here today, and I look forward to continuing our work together on this committee to implement our policy recommendations and put forth sensible legislation so that the United States can remain the rural leader in artificial intelligence. To that end, I am deeply disappointed by what I'm hearing from my own district, north Carolina's fourth home to two NSF-funded AI institutes, the Athena Institute led by Duke University and the AI Institute for Engaged Learning, a partnership between North Carolina State and the University of North Carolina at Chapel Hill as a lead partner, as well as North Carolina Central University Institute for AI and Equity Research regarding the research and workforce cuts enacted by the Trump administration during its first few weeks in office, gutting federal research departments and slashing grants that fund world leading and cutting edge research, terminating biomedical research grants for being so-called woke simply because the researchers were seeking to address very real health disparities in our nation and deporting PhD students for disagreeing with this administration's views.
These actions derailed this administration's goal of maintaining US global leadership and science and innovation, and it also has a chilling effect on our ability to prepare and inspire the next generation of leaders. And it gives China a real and clear opportunity to fill in the void. I fear that these actions will have grave consequences that will negatively affect the United States’ ability to lead the world and our international partners on artificial intelligence, and to lay out a future and a vision that is in accordance with our democratic principles and moral values. Dr. Stoyanovich or Mr. Fist, why is it important for the federal government to support research and development initiatives at colleges and universities? If we are going to be and win the AI race?
Dr. Julia Stoyanovich:
I can go first, thank you for this question. One of the most attractive things for people who were not born here, like me, to come to the United States and do research is the freedom of thought. The academic freedom, the academic environment here is fueled by federal investment. We cannot rely on investment from industry to fill this void because that money comes with strings attached, right? And we don't want to be constrained by ideological agendas any more than we want to be constrained by business goals in our creative thinking about what the next system should be, what the next platform should be, what is the next big problem that we want to address. And this, once again, is the edge that the United States has, and that, unfortunately, we are now worried that we might lose. So I really encourage us to think carefully about the downstream implications worldwide, on us paying less attention to research federally.
Tim Fist:
Yeah, I would suggest that we see science, basic science, R&D funding at federal agencies and universities as less as an exercise in reducing total possible cost and more a return in investment. If we thought about research and development as a VC fund, if you measured a VC fund on how little it's spent, that would be a very poor-performing VC fund overall. What we really want to look at is what is the ROI of this kind of research? Federal RD funding in general returns around between four and $5 back to the economy for every dollar that we're investing. But historically, it hasn't really worked as well as it should, especially recently. If we look at, for example, the inventor of the MRNA vaccine, she was systematically ignored or treated with skepticism by the NIH, who wasn't willing to fund this kind of high-risk, high-reward research. So I think together we are thinking more in terms of return on investment. We should also think about how we reform our scientific institutions to focus more on this high-risk, high-reward science that can have massive downstream effects.
Rep. Val Foushee (D-NC):
Let me just end with this, as others have discussed today. Last Congress, we advanced several bipartisan bills in this committee, including my expanding AI Voices Act to broaden participation in AI research. And I sincerely hope that we can work together with you, Mr. Chair, to enact these important efforts into law. And with that, I'll yield back,
Rep. Jay Obernolte (R-CA):
Gentleman yields back, we'll hear next from the gentleman from Georgia. Mr. McCormick, you're recognized for five minutes.
Rep. Rich McCormick (R-GA):
Thank you, Mr. Chair, and thank you for all your witnesses' testimony. Intriguing topic, even for a crayon marine, I think AI and its development is something instrumental to our future. I think it'll impact us more than any technology we've seen in any history of mankind. Interesting. When you guys talked about the cost of developing AI and how China's modeled theirs after ours and done it very cheap, less than 10 million is what they said. And I understand that the real cost is not probably that same thing though with India. They came up with a spacecraft that got to the dark side of the moon for I think around $74 million. We couldn't even form a committee or a building for less than that. So there is a cost point that America hasn't really got over in this development, but like it says in the book, Chip Wars or when it refers to when you're copying somebody, you're already falling behind.
Russia was really on par with us and until they started copying us, and then they fell way behind. Same thing with Elon. Elon Musk once said, The only reason I get patents is so they won't keep me from using my own technologies because if they're copying me, they're already falling behind. So as we develop these AI models, would you agree, and I'll probably go with you, Mr. Theor on this, would you agree that if they're copying us, they're not really catching up now, they maybe can close the gap a little bit, but like you guys said, they're anywhere from six months to two years behind us because they're not coming up with their own technologies, they're not coming up with their own chips, they're not leaping ahead of us. They're actually just trying to copy us, which doesn't help them actually surpass us. Is that accurate?
Adam Thierer:
I generally agree with that, except that I think that the Chinese have made some, oh, I'm sorry. The Chinese have made some significant inroads lately in terms of doing genuine innovation and finding ways to diffuse it throughout the globe. So we can't just take it for granted that they're just strictly copying or using industrial espionage or mercantilist efforts to follow. They are taking the lead on some important things, including things we haven't talked about here today, like robotics and other types of autonomous systems, battery systems, EVs, things like this. They've made real progress there.
Rep. Rich McCormick (R-GA):
And I think you made a really good point, actually. They've been very, very strategic in the way they've gone after elements and cornering the market. And now that you talk about batteries, lithium, cobalt, things that we need to for future energy sources, I think that very much is a big concern. Not even technology that's just being smart in how you get those elements that we need to even produce chips, for that matter, silicon, and other things like that. So I agree with you, Mr. Fist. You actually said something very interesting in your open remarks, I thought capture my attention. When you talk about the back door and the design of AI and how we detect that, which is you said undetectable at this point, do you think not only do we need the ability to detect those things, but also maybe some really severe penalties for creating those things in the first place? Do you think that's a useful thing, or is that going to turn anybody at all?
Tim Fist:
Yeah, so I think this opens up a whole can of worms, especially when it comes to open source AI around what is the liability for a developer and how do you create a regime on that? And in this case, we're talking about a foreign model provider who's open-sourcing their models and making them widely available. What I would say is collecting the evidence of the risks and what this developers actually doing is important, both for informing American critical infrastructure providers about what kind of models they should be building into their systems, but also to understand what sort of punitive actions we should take as well. And potentially including export controls or restrictions on installing Chinese models into American infrastructure could be critical.
Rep. Rich McCormick (R-GA):
And then we also talked about, there's several people that are concerned about the cuts to different budgets and research. I am, too. I want to see United States lead the way in technologies. We've always been that way since we went the space race and their development of computers in general. But now we're seeing the private industry really take the lead and really do the heavy lifting when it comes to investing billions of dollars. We don't have, you can even say trillions of dollars that we're putting into now research and development of new technologies that will have a significant windfall in profits too into the future, so they know what they're doing. We actually have a time where we actually have one man who's putting more spaceships in outer space than all the other nations combined. That's real wealth and has benefits to us because obviously one man could also rescue you from stranded and outer space too.
My question is, though, as we go forward and we invest in the future, and we talk about the energy sources, which you also brought up in the opening statements, energy sources, cheap, affordable energy. We also can't even build a nuclear reactor on the cheap. My question is, as we're going forward, are we going to have to rely more and more on the private industry? Is that the right thing, or is that the scary thing to do when it comes to development of AI and energy sources and the resources we need to build these new future technologies?
Dr. Julia Stoyanovich:
Please, can I respond? Yeah. I think it's extremely dangerous for us to rely on a potentially benevolent, but maybe not always benevolent, one man or one industry to take care of some of these critical needs that we have as a country. I want to also say that China actually is leading in terms of a governance and regulation regime as compared to the United States. They have very clear guardrails in place. Now, we don't agree with the kinds of values that these guardrails embed, but this is really necessary for there to be progress. And this is another thing that we should be looking at. But I don't think industry can replace government investment because the goals may or may not align.
Rep. Rich McCormick (R-GA):
We'll see. Thank you. With that, I'll yield. I know I'm out of time.
Rep. Jay Obernolte (R-CA):
Gentlemen yields back with no members waiting to ask questions. That brings us to the conclusion of our hearing. I'd like to thank all of our witnesses for your valuable testimony today. I found it to be a fascinating hearing, and I think really informational and important. So, thank you for being here. The record will remain open for 10 days for additional comments and written questions from members without this hearing is adjourned.
Authors

