Transcript: OSTP Director Kratsios Testifies on Trump AI Action Plan
Cristiano Lima-Strong / Jan 15, 2026
Michael Kratsios, director of the White House Office of Science and Technology Policy, at a House hearing on "Advancing America’s AI Action Plan" held on January 14, 2026.
Michael Kratsios, an architect of the Trump administration’s artificial intelligence roadmap and the director of the United States Office of Science and Technology Policy, testified before the House on Wednesday about the White House’s plan to boost innovation in the tech sector.
During the hearing, Kratsios came under fire from Democratic lawmakers over the administration’s campaign to thwart AI regulations both at the state level and abroad and faced questions over the government’s ongoing relationship with Elon Musk’s xAI.
Key moments included:
- Kratsios defended Trump’s order seeking to block state AI rules, arguing that a fragmented state-by-state regulatory regime favored major tech companies. “If you have this tremendous patchwork of laws all across the country, the folks who actually are able to work within that system most successfully are the deep-pocketed, large Big Tech companies,” he said.
- Kratsios touted the administration’s efforts to pressure foreign governments to roll back what the White House views as overly onerous AI rules. “We continue to push back against global AI governance at the UN, G7, APEC and other forums and to defend great American companies from foreign nations’ stifling regulatory regimes,” he said.
- Rep. Zoe Lofgren (D-Calif.) sharply criticized the Trump administration for taking financial stakes in private companies in the tech and AI sectors, such as Intel, likening these actions to those of the Chinese Communist Party and arguing that they undercut the Trump AI action plan’s stated focus on bolstering free markets. “Actually, this is socialism on the part of the Trump administration,” she said.
- Lofgren pressed Kratsios on whether federal agencies are still paying to use Elon Musk’s Grok chatbot amid the global firestorm over the app generating scores of nonconsensual sexual images and, in some reported instances, child sexual abuse material.
- Kratsios said he was “not personally involved in any way” in that procurement decision but declined to answer whether he has recently discussed the matter with the president or the Justice Department. “Essentially we’re paying Elon Musk to give perverts access to a child pornography machine,” Lofgren said.
Below is a lightly edited transcript of the hearing, “Advancing America's AI Action Plan.” Please refer to the official audio when quoting.
Rep. Jay Obernolte (R-CA):
The Subcommittee on Research and Technology will come to order. Without objection, the chair is authorized to declare recesses of the subcommittee at any time. I recognize myself for five minutes for an opening statement. I'd like to welcome everyone to today's Research and Technology Subcommittee hearing entitled Advancing America's AI Action Plan.
Today, we'll hear testimony from the director of the White House Office of Science and Technology Policy, Michael Kratsios, on the administration's artificial intelligence strategy. As I think we all know, AI is poised to become a foundational driver of innovation worldwide. From serving as a personal assistant to generating computer code, to advancing the frontiers of human science, the scope and impact of AI continues to expand at a truly extraordinary pace.
It is critical that Congress does its job in enacting an appropriate federal framework for this burgeoning new technology. It is also imperative that the framework maintains the position of the United States as the leading force in the worldwide development and deployment of AI. American leadership in AI is essential to sustained economic growth, technological advancement across a wide range of applications, and the protection of our national security, particularly as competitors such as the Chinese Communist Party seek to undermine US leadership in this space.
Through my experience as co-chair of the bipartisan House AI Task Force last Congress, I've seen firsthand the importance of maintaining US leadership in this area. AI-enabled cyberattacks are a growing threat that demands our constant vigilance. At the same time, as AI becomes more embedded in everyday life, Americans are entrusting vast amounts of personal data to these systems, heightening the risk that sensitive information could fall into the hands of malicious actors.
Against this backdrop, last July, the White House unveiled Winning the AI Race: America's AI Action Plan. This strategy is built around three pillars: innovation, infrastructure, and international diplomacy and security. It is critical that Congress works with the executive branch to craft a unified and effective national AI strategy.
In December of 2024, the bipartisan House AI Task Force, which was co-led by Congressman Lieu and myself, released a report outlining guiding principles for AI policy. This bipartisan effort resulted in 66 findings and 89 recommendations, many of which align with the AI Action Plan, including expanding access to computing power for researchers, investing in K-12 AI education, and advancing AI evaluations.
Among its recommendations, the task force called for codifying the National AI Research Resource, what we call NAIRR, and strengthening the science of AI evaluation and standards. My legislation, the CREATE AI Act, would codify NAIRR while the forthcoming Great American AI Act will formalize the Center for AI Standards and Innovation, or CAISI, at the Department of Commerce, to advance AI evaluation and standard setting.
I am very glad to see that the administration's AI Action Plan includes many of the same tenets as the AI Task Force report, including support for the continuation of NAIRR and for tasking the CAISI with the critical work of developing standards and evaluating frontier AI models. I commend the center for its strong report last fall, assessing the national security implications posed by DeepSeek.
The center serves as a critical hub for technical AI expertise within our government, and I look forward to its continued work. OSTP, under Director Kratsios's leadership, is steering the development of the AI Action Plan and will oversee its implementation as part of the nation's broader science and technology agenda. I hope that we can all agree it is critical that Congress now step up to the plate and work with the executive branch in developing a competitive vision for American leadership in artificial intelligence.
I'd like to thank Director Kratsios for being here today, and I appreciate his willingness to appear before the subcommittee. We welcome his testimony on this critically important topic. Thank you very much. I'll now yield back the balance of my time, and I'll recognize the ranking member of the subcommittee, Representative Stevens of Michigan, for five minutes for her opening statement.
Rep. Haley Stevens (D-MI):
Thank you so much, Mr. Chair, for holding today's hearing, and, of course, thank you to Director Kratsios for joining us today as well. OSTP has always held a very special place in my heart and is an agency or a division of the White House that we are grateful to have a connection to on this committee, and it is also very important that our thoughtful discussion today will be around implementing the administration's AI plan while protecting American workers.
We all know that you can't talk about AI innovation and American competitiveness without talking about Michigan manufacturing, and you can't talk about the future of manufacturing without talking about the National Institute of Standards and Technology, or NIST. NIST is the little agency that could, and it is at the forefront of our efforts in artificial intelligence, quantum, robotics, and advanced manufacturing.
Thanks to the CHIPS and Science Act of 2022, which many of us in this room helped write and pass, NIST is bringing semiconductor manufacturing back to America to develop the next generation of AI chips right here at home. NIST's semiconductor work is also critical to the administration's AI plan, and we don't want to see the agency, something that has long been supported by Democrats and Republicans alike, being undermined.
Just to shine a light here, and this is the work of our committee, and this is what we do. We have authorizing ability, and it is important to just shine a light. The budget for fiscal year '26 slashed NIST funding by $325 million, $325 million, and it eliminates 500 jobs from the agency's lab program.
We know, so many of us, how sacred this work is and how important it is even when you have just a handful of researchers working on these matters, and the cuts hinder NIST's AI-related efforts. They're going to weaken cybersecurity and privacy standards, something I have legislation on, and limit advanced manufacturing, physical infrastructure, and resilience innovation.
And given the shared goal of supporting the growth of advanced manufacturing, our next generation, this is something we see alive and well in Michigan, continuing to grow through our supply chain, the design of not only products, but also our factory floors using AI applications. I'm really alarmed that the administration is trying to eliminate NIST's Manufacturing Extension Program.
They repeatedly tried to do that last year, and the MEP program is designed to support, well, not just Michigan, but all of the nation's small and midsize manufacturers to see them adopt advanced technology and compete on the global stage. And so, in 2024 alone, just the MEP Center in Michigan created or saved 5,000 jobs, yet like every other MEP Center, it was on the chopping block.
And what makes even less sense are the administration's constant attacks on domestic chip manufacturing and the CHIPS and Science Act, which we should all be singing about loudly and proudly. The uncertainty that the manufacturers are facing, despite clear congressional desire to bring chip manufacturing back to America, has been really frustrating.
And I was surprised to learn that the semiconductor-focused Manufacturing USA institute was arbitrarily canceled in December, and that's stalling a lot of progress. So I could go on, but we're losing talent and institutional knowledge. We're shrinking. And frankly, we're destroying our research capacity and undermining global competitiveness all while we're supposed to be touting how we can lead on AI.
And so, I'm going to do everything I can in this subcommittee, Mr. Chair, for accountability, to stand up for science, the science that Michiganders and our manufacturers rely on every day, so we can continue to innovate and build, and I just want to thank every NIST public servant, the people in this room, and the people in the department. And with that, thank you, Mr. Chair. I'll yield back.
Rep. Jay Obernolte (R-CA):
The gentlewoman yields back. I now recognize the chairman of the full committee, Representative Babin of Texas, for five minutes for his opening statement.
Rep. Brian Babin (R-TX):
Thank you, Chairman Obernolte. I appreciate Mr. Kratsios being here today, and we're looking forward to his testimony. We appreciate this entire hearing. Supporting the Trump administration's strategy to advance AI innovation, strengthen infrastructure, and reinforce international security is absolutely essential to maintaining US leadership in the global AI marketplace and promoting peace through strength.
The threat posed by the Chinese Communist Party, or CCP, continues to expand in the AI domain. Foreign AI efforts, particularly those pushed by the CCP, present serious national security risks, from research espionage and AI-enabled cyberattacks to the collection of Americans' sensitive data. The AI Action Plan is designed to confront these threats by empowering the Center for AI Standards and Innovation and the Department of Homeland Security to bolster cybersecurity protection for critical infrastructure.
As chairman of the House Science, Space, and Technology Committee, it's very clear to me that the Trump administration's vision for American dominance in AI closely aligns with the bipartisan work of this committee. In January 2021, the National AI Initiative Act of 2020 became law as part of the National Defense Authorization Act for fiscal year 2021.
Led by the House SST Committee, this law charges key agencies within our jurisdiction to support AI research and development, many of which received additional policy direction through the AI Action Plan. As Chairman Obernolte noted, the findings and recommendations of the bipartisan House AI Task Force are also well-aligned with this administration's strategy.
One such recommendation calls for expanding access to advanced computing resources for researchers. The committee's jurisdiction over the Department of Energy places us at the center of this initiative, as DOE's national laboratories host three of the world's fastest supercomputers: Frontier at Oak Ridge National Lab, Aurora at Argonne National Lab, and El Capitan at Lawrence Livermore National Laboratory. Thus, this objective to expand access to computing power builds directly on these existing national capabilities.
Moreover, these national laboratories, along with DOE's Office of Science, house some of the nation's leading AI expertise. Other agencies within our jurisdiction, including the National Science Foundation, also support cutting-edge AI technical research and talent development. The Department of Commerce and the National Institute of Standards and Technology have a long history of developing nonregulatory standards for emerging technologies and providing independent model evaluations that the private sector can trust.
Tackling complex policy issues surrounding emerging technologies like AI is nothing new for the Science, Space, and Technology Committee. From civil nuclear energy and space exploration to cryptography, information technology, biotechnology and genetics, blockchain, and nanotechnology, we have put forward sound policy that ensured US leadership.
And while debates over specific AI policies will continue, our committee has an opportunity to advance commonsense legislation that provides access to federal computing resources, supports research and development, trains the next-generation workforce, offers independent model evaluations, and promotes consensus-based standards.
Many of our members have put forward legislation that addresses the very policies recommended by the administration and the AI Task Force. I look forward to advancing AI legislation in the coming weeks and hope very much that this hearing will provide an ample opportunity to inform this process. Congress must ensure that America leads in AI, and doing so will bolster our economic competitiveness, protect our national security, and promote our values of liberty and freedom. I appreciate your presence today, Mr. Kratsios, and welcome your testimony. With that, I yield back the balance of my time.
Rep. Jay Obernolte (R-CA):
Thank you, Chairman Babin. I now recognize the ranking member of the full committee, Representative Lofgren of California, for five minutes for her opening statement.
Rep. Zoe Lofgren (D-CA):
Thank you, Mr. Chairman, and thank you, Mr. Kratsios, for joining us today. AI has the potential to reshape the American economy and innovation ecosystem. I'm really excited about its promise and the potential opportunity it brings to improve productivity and to accelerate innovation. However, AI has also sometimes accelerated harmful and even criminal activities.
Just a little over a week ago, xAI's chatbot Grok made headlines by generating thousands of instances of nonconsensual sexual imagery, including child pornography, at the request of users on X's main platform. We've also seen major instances of chatbots encouraging users with mental health issues to commit murder and suicide. These are just two of many troubling examples.
Nearly half a year has passed since the administration unveiled its AI Action Plan, with a focus on innovation, infrastructure, international diplomacy and security, goals that I certainly appreciate and support. Unfortunately, the plan only minimally addresses the risks of AI, and even where it does, including with respect to deepfakes, the administration has failed to take meaningful action to address these risks.
To the contrary, this administration's actions raise major questions about the disconnect between the action plan's goals and reality, and I'd like to address just two specific areas where the administration's actions have raised concern. First, this administration has taken steps to prevent states like California from regulating against harms that have been created or exacerbated by AI systems.
In December, President Trump signed an executive order that directs the Department of Justice to sue states over their AI laws and threatens to withhold billions in federal broadband funding from these states. This order asserts that the administration, not Congress, not state legislatures, but the administration itself should have the power to decide what kind of state laws are too burdensome and to reallocate where billions of dollars should go as a result. I think this order is unconstitutional.
I'm not here to defend every state AI law that's been proposed or passed. Some of what California has adopted to protect its people is appropriate, and other legislation that was beyond the scope of what a state should do was vetoed by the governor with the support of the California congressional delegation, vetoed because it went too far.
But there was a reason Justice Brandeis suggested that states are the laboratories of democracy. What we should not do is preempt the states from taking necessary actions to protect their citizens, while here in Congress, we do nothing to pass legislation ourselves. It's not just Democrats saying this. Florida Governor Ron DeSantis has said that President Trump's executive order, quote, "can't preempt state legislative action." And in November, a bipartisan group of 36 state attorneys general wrote to leaders of Congress opposing slapdash federal preemption efforts, and I'd like unanimous consent to put this letter into the record.
Rep. Jay Obernolte (R-CA):
Without objection.
Rep. Zoe Lofgren (D-CA):
Secondly, I'm very concerned by the administration taking cuts of revenue or equity in companies. The AI Action Plan emphasizes free markets, deregulation, getting government out of the way of the private sector. But over the past year, the administration has spent billions of taxpayer dollars to take direct ownership stakes in private companies.
The government now owns nearly 10% of Intel, making it the largest shareholder. It holds equity positions in rare-earth mining companies and is negotiating similar deals with quantum computing companies. This trend is unusual and concerning. In our nation's history, the US government has taken equity shares in private companies only in times of war or economic crisis.
Actually, this is socialism on the part of the Trump administration. That's a charge that's sometimes made against Democrats, even though Democrats favor free markets, but it's what the Trump administration is actually doing, engaging in socialism. Now, how are the administration's actions different from Chinese state capitalism is my ask.
Mr. Kratsios, America can and must lead in the field of artificial intelligence, but it's going to take a whole lot more than empty promises and emulating the PRC to make that happen. I look forward to hearing from you about these challenges and others during today's hearing. Thank you, Mr. Chairman, and I yield back.
Rep. Jay Obernolte (R-CA):
Thank you, Ranking Member Lofgren. It's now an honor to introduce our witness. Our witness today is the Honorable Michael Kratsios, who is the director of the White House Office of Science and Technology Policy. He is President Trump's chief science and technology policy advisor. And as the 13th director of the White House Office of Science and Technology Policy, Director Kratsios oversees the development and execution of the nation's science and technology policy agenda.
He leads the administration's efforts to ensure American leadership in scientific discovery and technological innovation, including in critical and emerging technologies such as artificial intelligence, quantum computing, and biotechnology. In the first administration, he served as the fourth chief technology officer of the United States at the White House and as undersecretary of war for research and engineering at the Pentagon.
Prior to his service in the White House, Director Kratsios invested in, advised, and built technology companies in Silicon Valley. A South Carolina native, Director Kratsios graduated from Princeton University, but we will not hold that against you, and served as a visiting scholar at Beijing's Tsinghua University. I now recognize Director Kratsios for five minutes to present his testimony.
Michael Kratsios:
Thank you, Chairman Obernolte and Ranking Member Stevens, as well as full committee Chairman Babin and Ranking Member Lofgren for inviting me to speak to you today about the president's AI policies. Last July, the Trump administration released Winning the AI Race: America's AI Action Plan, outlining a strategy to maintain global leadership in AI based on three pillars: innovation, infrastructure, and international partnerships.
Over the last six months, the administration has moved from strategy to execution, as I and my team at the White House coordinate implementation across the federal government. I'm excited to highlight where we stand now in executing this playbook, but first, let me thank the members of this committee for all that you have done for American AI.
As my office coordinates the administration's implementation of the action plan, I see many opportunities for collaboration with this committee and with Congress. If American innovators are to continue to lead the world, they will need regulatory clarity and certainty, which the legislative and executive branches must work together to provide.
The action plan focuses on three main priorities for the US government: removing barriers to innovation, securing energy dominance, and exporting our technology to partners around the world. In the past six months, we have seen tremendous progress in all these areas. In innovation, with the release of our request for information concerning regulatory reform on AI last September, we're working with industry and the American people to relieve innovators of undue regulatory burdens and to formulate policy frameworks that safeguard the public interest while enabling further revolutionary developments.
I'm particularly excited about the innovative potential of the Genesis Mission and AI for science. With the signing of that executive order, President Trump has mobilized the largest federal scientific effort since the Apollo program. By fusing massive federal datasets with advanced supercomputing capabilities, Genesis will help America's scientists automate experiment design, accelerate simulations, and generate predictive models for everything from medicine and energy to materials and agriculture.
In infrastructure, of course, the administration's effort to clear away red tape and facilitate AI construction have been enormously successful, but I'm especially excited about the energy capacity American industry is building, both to power the AI revolution and all the other incredible technologies this new age of discovery will unlock.
President Trump and the Department of Energy are committed to accelerating and enhancing the growth of American nuclear power, creating space to experiment, facilitating small modular reactor build-out, and continuing the pursuit of fusion energy. In international partnerships and AI diplomacy, we continue to push back against global AI governance at the UN, G7, APEC, and other forums, and to defend great American companies from foreign nations' stifling regulatory regimes.
I'm particularly excited about the American AI Export Program. As the Commerce and State Departments roll that out, there will soon be a request for proposals shared with industry and a formal launch of the program in the coming months. Our goal is for US companies to provide modular AI stack packages that empower countries to develop sovereign AI capabilities with American technology.
The AI Action Plan is a giant leap forward, furthering first steps President Trump took for American AI dominance in his first term. With the call for this plan in his first week back in office, the president recommitted himself and the country to American AI leadership. The need for renewed effort was clear. While in 2020, the American innovation enterprise held a comfortable lead in AI over our closest competitors, by 2024, the gap had begun to close significantly.
But with his golden age vision of scientific rigor and technological progress, President Trump has restored a spirit of confidence to our innovation enterprise. We are approaching AI not with fear, but with responsible boldness. Looking ahead, there is still so much to be done working together. As our innovators continue to find novel applications of AI technology to everyday life, we can ensure they benefit all Americans through small business training, workforce development, and AI education.
I'm proud to say that more than 250 companies have signed the first lady's Pledge to America's Youth, committing hundreds of millions of dollars to investing in AI education, and I just returned from the Consumer Electronics Show, where there is obvious excitement about American AI, but also about what's ahead in quantum information science. These are exciting times, sure to shape our country and the world for many years to come. Thank you all for your leadership.
Rep. Jay Obernolte (R-CA):
Well, thank you very much for your testimony. We will now move on to question and answers from the panel, and we'll start with questions from myself. So I'll recognize myself for five minutes. Director Kratsios, I want to start with the points that Ranking Member Lofgren raised, which I thought were excellent and really pertinent as usual, and in particular, the point about preemption.
So to be clear, I don't think anyone believes that the states shouldn't have a lane in regulating AI, but I think what everyone believes is that there should be a federal lane and that there should be a state lane, and that the federal government needs to go first in defining what is under Article I of the Constitution, interstate commerce, and where those preemptive guardrails are, where regulation is reserved only for regulation at the federal level, and then outside those guardrails where the states are free to go be the laboratories of democracy that they are.
So the president recognized this in his executive order last month. He is calling for Congress to work with the executive branch in defining what those guardrails are. I think there was an explicit acknowledgement in that executive order that congressional action is needed. It's not something that could be done unilaterally. So can you talk a little bit about what the administration's vision for that is and where you think those guardrails ought to be?
Michael Kratsios:
Absolutely. The president first spoke about the need for a sensible national policy framework actually in July when we released the AI Action Plan. And in that speech here in Washington, he made it clear that having a set of 50 different laws all across the country is actually not the best thing for American innovation.
And as someone who's worked in Silicon Valley and worked with lots of startups and been at startups, I understand the challenge. And I think what is sometimes missed in this conversation is, if you have this tremendous patchwork of laws all across the country, the folks who actually are able to work within that system most successfully are the deep-pocketed, large Big Tech companies. The small innovators, the entrepreneurs, the people who want to start new businesses, forcing them to try to find a way to comply with 50 different sets of AI rules is actually anti-innovation and is something that I don't think anyone on this committee actually supports.
The president then took another step forward in December of last year where we laid out an executive order charging me and David Sacks to work to create legislation to help find a path forward for a sensible national policy framework, and that's something that I very much look forward to working with everyone on this committee for.
I think what was clear in the executive order specifically was that any proposed legislation should not preempt otherwise lawful state actions relating to child safety protections, AI compute and data infrastructure, and also state government procurement and use of AI. Those are ones that were called out in the executive order, but we look forward over the next weeks and months to be working with Congress on a viable solution.
Rep. Jay Obernolte (R-CA):
Well, thank you very much, and this is something that we have been grappling with now for several years. We had a whole chapter on preemption in the AI Task Force report. I think it was very helpful that the administration is also thinking about this issue and thinking about where those state lanes should be and how they intersect with the federal lanes. So let's keep working on that.
I also wanted to, in addition to commending you for the good work of CAISI, talk about the need for us in Congress to formally codify it. And the work it does in doing AI model evaluation is essential in creating what's a toolbox, a regulatory toolbox, for our sectoral regulators so everyone doesn't have to reinvent the wheel. But we've had this in successive administrations now: in the Biden administration, we had the AI Safety Institute; in the Trump administration, we have CAISI. There are many of the same functions performed by those organizations, but I think everyone would agree it's unhealthy for every successive administration to spin up a brand new agency that essentially is doing something with a long-term mission that needs continuity. So can you talk about whether or not you think we in Congress need to go ahead and codify this as part of implementing our national strategy on AI?
Michael Kratsios:
I think CAISI, as was discussed previously, is a very important part of the larger AI agenda. I think one of the important and big steps that the administration took in the middle of last year was to reframe what was previously a safety institute into one that is focused on standards and innovation. And I think from my perspective, the most important thing that can be done around CAISI at NIST is to focus on what NIST is really good at and what Mrs. Stevens was talking about. It's absolutely important that the legacy work around standards relating to AI is undertaken by CAISI, and that's what they're challenged to do. And that's the focus that they should have, because the great standards that are put out by CAISI and by NIST are the ones that ultimately will empower the proliferation of this technology across many industries.
Rep. Jay Obernolte (R-CA):
Well, thank you very much for your testimony. I'm looking forward to hearing the questions from our other subcommittee members. I now recognize ranking member Stevens for her questions.
Rep. Haley Stevens (D-MI):
Thank you. And so Mr. Kratsios, I missed it in your bio that the chairman read. Could you just remind me what you were up to in the period of the Great Recession, particularly the years '09 and '10? What were you doing professionally?
Michael Kratsios:
Yes. I was actually an investment banker for a year.
Rep. Haley Stevens (D-MI):
Yeah. Okay. So you know that period of time pretty well. Yeah, I was-
Michael Kratsios:
Quite personally.
Rep. Haley Stevens (D-MI):
I was in the Department of the Treasury working on the auto rescue, and it was this humble initiative that was responsible for making sure the auto industry didn't go belly up. And we then created the White House Office of Manufacturing Policy coming out of that. And one of the steady and delightful partners that we had during that period of time was the Manufacturing Extension Partnership. As you know from your investment banking career, access to capital across our supply chain was just being squeezed. And it was a terrifying time, particularly in Michigan. We were the nadir of that Great Recession. And so throughout my career, I have seen firsthand what the proof in the pudding is for NIST MEP: for every dollar spent, $18 of output into the economy. We've seen directly how NIST MEP has helped the small to mid-size manufacturer, who is really the elixir of our communities and our economic drivers, the small businesses.
And we know, given the cumbersome reality of technology development (the chairman was just talking about CAISI, and you so nicely answered about its importance and recognized the critical role that NIST has played), that we have to have technology adoption throughout our supply chain and a recognition that NIST MEP plays a role in that. And so I just wanted to hone in on that. As we put forth over a decade ago, Congress directed the National Science and Technology Council, housed within OSTP, to develop a national advanced manufacturing strategy, and OSTP and NIST are in the process of updating that strategy. Are you in a position to commit to ensuring that MEP and other NIST programs, including Manufacturing USA, which is this robust network of R&D labs, will be central to its implementation?
Michael Kratsios:
So that's a program run out of commerce and it's a conversation I would love to continue to have with Secretary Lutnick and his team there. I think there's obviously important work across all these NIST programs and would love to learn more about this one specifically with the team at NIST.
Rep. Haley Stevens (D-MI):
Yeah. A couple years ago, they celebrated their anniversary and it's the Manufacturing USA, which works very closely with NIST and in some instances is funded through NIST and of course the MEP program. And this is something that has carried through throughout a multitude of administrations and was really designed to help us compete as a nation vis-a-vis other countries that have this co-location of R&D, tech transfer, and workforce development. So I do appreciate you reaching out to the commerce secretary about it. We'd love to show you the one that we have in Detroit called LIFT, the workforce development is incredible that happens there, but also the co-location of large, small, and mid-size manufacturers that are all developing their research and development. And then when you come and visit LIFT, you can come and see our MEP center. And I want them to show you directly how they are leading these manufacturers.
They are really one of the first enterprises that had done Industry 4.0 and had a directive on that. And when we're looking at cutting costs, which we all know is so important for the taxpayer, and when we're going to look at all this great stuff in AI and how it gets applied and how it gets used, it's nice that you have this framework cooking here. We're not going to leave people behind, and we want to do it along with our small and mid-size manufacturers. So thank you so much. And with that, I'll yield back, Mr. Chair.
Rep. Jay Obernolte (R-CA):
Gentlewoman yields back. I now recognize the chairman of the full committee, the gentleman from Texas, Mr. Babin, for five minutes for his questions.
Rep. Brian Babin (R-TX):
Thank you very much. And again, welcome, Mr. Kratsios. We appreciate you being here. The AI action plan recommends that NIST revise its AI risk management framework to remove bias and discrimination. First off, how do these recommended changes help ensure that government guidance remains neutral and consistent with American values of freedom and liberty?
Michael Kratsios:
Thank you so much for that question. I think one of the recommendations out of the action plan was to do just that. I think as we've spoken a lot about NIST, it's an agency that is very near and dear to my heart and to the entire OSTP team. We want NIST to be focused on advanced scientific metrology. Inserting political rhetoric into their work is something that devalues and corrupts the broader efforts that NIST is trying to do across so many important scientific domains. So our hope through this effort is to depoliticize NIST as much as possible and to put it in a place where it's promulgating standards, which can benefit all scientific innovators across the country.
Rep. Brian Babin (R-TX):
Great. Thank you. The Genesis mission proposes to modernize our power grid through the use of AI-driven technologies. First, how will this contribute to America's energy dominance?
Michael Kratsios:
Well, broadly speaking, energy dominance and energy leadership have been a top priority for the president. Broadly, this theme of energy abundance. For us, we believe that there are great technologies that are ultimately going to drive broader energy abundance. It can be everything from fusion technology that we hope to have in the decades to come to the use of AI to optimize grids, and so many other ways to do it. So for us, and especially for this president's National Energy Dominance Council, energy abundance is key, and Genesis will be a big part of that.
Rep. Brian Babin (R-TX):
Exactly. And then second, what specific short-term outcomes can we expect from the Genesis mission over the next two to three years?
Michael Kratsios:
So on the short-term front, I think what we hope to do over the next year is to broaden the participation of Genesis across the government and across the international community. As you may have seen, the first big announcement around Genesis was the addition of partners across the private sector here in the United States. The tasking of the executive order to me, as OSTP director, has been to try to bring in and incorporate other agencies. And why that's so important is that the underlying thesis of Genesis is that the government has extraordinarily valuable scientific data that, when pooled together and combined with AI, can drive tremendous scientific discovery and acceleration. So our next step is: how do we bring in important healthcare data, or data from the National Science Foundation, or even from NIST or other scientific agencies? And then the next short-term goal is to figure out who are our international partners in the scientific community that have things they can offer the Genesis mission, which accelerates its ultimate goal.
Rep. Brian Babin (R-TX):
And then looking further ahead, what is the long-term vision for the Genesis mission beyond its first decade and how do you anticipate its impact evolving over time?
Michael Kratsios:
I think the overarching goal that we have stated consistently is that the Genesis mission's goal is to double the output of US R&D in a decade. And we want to be able to do that by focusing on some very large grand challenges, everything from energy to biotech, to drug discovery, nuclear fusion, and critical minerals. That's kind of our north star.
Rep. Brian Babin (R-TX):
Absolutely. And then to maintain a competitive advantage over bad actors like the Chinese Communist Party, President Trump has emphasized the need to Build, Baby, Build across the US economy and our critical infrastructure. And from your perspective, Director Kratsios, what elements of the AI action plan most effectively strengthen the US economy and position the United States as the global leader in AI?
Michael Kratsios:
Wow. Well, I'm very biased because I love the whole report. So I think each of the three pillars plays a big part in what you're arguing for. I think on the building front, obviously, pillar two around infrastructure and data center buildout is extraordinarily critical. We want to develop a regulatory environment that allows the standing up of this AI infrastructure to happen, and in a way that doesn't adversely affect American rate payers. The second piece is around the regulatory structure. We want to create a regulatory environment where our greatest innovators can develop and promulgate their AI technologies here in the US in a safe way, and we think we're charging towards that. And the last piece, which has been a big priority for me even from my time in Trump 45: we have to continue to invest in critical AI research and development funding as a federal government.
And there's been a lot of talk about federal budgets and the president's proposal the last year, but the key thing that I always try to remind people is even in our attempt to try to right size the budget, the one area where we have kept a consistent amount of proposed budget funding has been in AI. We believe that this is a critical research priority for the administration and it's something that we're going to continue to fund.
Rep. Brian Babin (R-TX):
Absolutely. Thank you so very much and my time's expired. Mr. Chairman, yield back.
Rep. Jay Obernolte (R-CA):
Gentleman yields back. I now recognize the ranking member of the full committee, Representative Lofgren of California for five minutes for her questions.
Rep. Zoe Lofgren (D-CA):
Thank you, Mr. Chairman. I want to return to a point that I raised in my opening statement. The non-consensual manipulation of people's photos and the rampant uncontrolled dissemination of sexualized images of children is evil disgusting stuff. And instead of acting quickly to stop it, Elon Musk seemed to revel in it until public pressure became too intense. And then what did he do? He has now said only paid subscribers can have access to Grok's image generation feature. So pedophiles could previously conjure up these images for free, but he's now monetized the feature and will be collecting $3 monthly from depraved individuals who choose to subscribe. He's created a revenue stream for child pornography. It sure looks like laws against the creation and dissemination of child sexual abuse material are being broken. However, President Trump is sending American taxpayer dollars to xAI through a deal made in September.
Essentially, we're paying Elon Musk to give perverts access to a child pornography machine. Now, you're the president's chief science and technology policy advisor and the administration's point person on AI, so we want you to give us answers on this. I'm going to be sending a letter shortly to initiate our investigation into this issue, but anyone with a stake in this should be working to determine what went wrong and to ensure it never happens again. I welcome all of my colleagues, both Democrats and Republicans, to join with me in sending this oversight letter, which I shared with Chairman Babin yesterday. Stamping out child sexual exploitation should not be a partisan issue. Now, I have several questions for you, Director. Were you involved in the decision to enter into the partnership with xAI, allowing federal agencies to utilize Grok for government work? Is the General Services Administration partnership with xAI still active and being used to integrate Grok into federal agency work streams, even with the disgusting news of the past few weeks?
When the news broke regarding the child sexual abuse images being created by Grok, did you take any steps to ensure that federal agencies were no longer able to use Grok, at least until we are sure that the issue had been addressed? Have you personally briefed President Trump on the recent events involving Grok? You're the highest ranking science advisor in the White House. You should be heavily involved in federal action on this matter. Have you engaged with the Department of Justice on this issue, with the FCC, the FTC, the GSA?
Michael Kratsios:
So I'm not involved in procurement decisions at the GSA. I think broadly speaking, the Trump administration is committed to protecting the dignity and the privacy and the safety of children in the digital age. The misuse of AI tools requires accountability for harmful or inappropriate use, not necessarily blanket restrictions on the use and development of that technology. On the issue of Grok, which I'm not personally involved with in any way, if the federal employees are misusing a government AI platform in a harmful or inappropriate way, that should be addressed and those people should be removed from their positions. I don't have any specific insight into the federal procurement process, but I would refer you to an April 2025 OMB memo titled Accelerating Federal Use of AI Through Innovation, Governance, and Public Trust. That memo provides guidance on how agencies can promote the responsible adoption, use, and continued development of AI while ensuring appropriate safeguards are in place to protect privacy, civil rights, and civil liberties, and to mitigate any unlawful discrimination.
Now, as many people on this committee know, the Trump administration has clearly demonstrated our willingness to implement guardrails against inappropriate use of AI, as evidenced by the president's signing of the Take It Down Act last May.
Rep. Zoe Lofgren (D-CA):
If I may, Mr. Kratsios, yes, the GSA is doing this, but we are essentially paying Musk and we're going to have a problem here. I mean, inevitably there's going to be a misuse, but even if we fire federal employees who create pornography using Grok, that we would be paying Mr. Musk until he creates a barrier to this misuse is very troubling to me. I'm for technology. I'm from Silicon Valley. I grew up there and I still live there, but if we do not resolve misuse, we are going to have a serious impediment to the development of AI. In addition to the impact on employment, I'm just noting that the ranking member of the subcommittee mentioned employment. Manufacturing employment is down 68,000. We lost 161,000 blue collar jobs last year. So with that, Mr. Chairman, I yield back and I will make available this letter to you, which I've already shared with Chairman Babin.
Rep. Jay Obernolte (R-CA):
Thank you, Representative Lofgren. We'll hear next from the representative from North Carolina. Mr. Rouzer, you are recognized for five minutes.
Rep. David Rouzer (R-NC):
Thank you, Mr. Chairman. Mr. Kratsios, thank you so much for being here. First question is, what would you consider the main hurdle to innovation? Would that be the lack of federal preemption? I would assume that'd probably be in the top three. Just wanted to see what your answer is.
Michael Kratsios:
That is certainly up there. I think for us, providing regulatory clarity to America's innovators is extraordinarily important. I think the other piece of the puzzle, which I always talk about is we want to create a regulatory environment that provides a level of clarity and a level of understanding for all of our innovators. And the most important part of that is promulgating and working towards a use case sector-specific approach to AI regulation. Creating a one-size-fits-all regulation around AI is not the way that we can best deal with all these new AI technologies.
Folks that are developing AI-powered medical diagnostics should continue to be regulated by the FDA, for example. Anyone who's developing a drone should continue to be regulated by the FAA. And as for impediments, there have been great examples globally of things that we should definitely not do. Something like the EU AI Act is a terrific example of a mistake that can be made, where you try to create a one-size-fits-all regulatory regime and ultimately you end up with less innovation.
Rep. David Rouzer (R-NC):
A core pillar of the AI action plan is infrastructure, including our nation's ability to develop a skilled workforce to use cutting-edge AI systems. North Carolina, as you know, has a growing ecosystem of higher education preparing the next generation of AI professionals. UNC Wilmington and Fayetteville State University in my district offer AI-focused education with hands-on experience, and community colleges such as Cape Fear Community College in my district are advancing AI literacy and technical skills. Employer access, obviously, to these next-generation workers with these needed skills is critical for economic development and competitiveness. Can you comment on how the federal government is working with these colleges and universities across the country, including in North Carolina, to strengthen AI-focused workforce development programs?
Michael Kratsios:
Yeah, this is so important. And for us, we believe that AI education is something that begins in K through 12 and extends well past when you even graduate from college. And at each of those levels, we have different programs and initiatives to drive that AI reskilling and upskilling to happen. In K through 12, the administration through executive order launched the AI Education Task Force, which I chair with the First Lady. And there's been a tremendous amount of effort around how we can teach America's youth to understand and essentially demystify this technology. It's not just about teaching students how to use the technology, but understanding what the technology is. What is AI good for? What is it not good for? When should you be using it? When should you not be using it? When does it go awry? And I think that's the type of programs that we're trying to do for K through 12.
And we've partnered with so many companies, nonprofits, and civil society organizations that have donated hundreds of millions of dollars of free resources for K through 12 students to benefit from. If you move up the chain to the great community college work you're talking about, it's very important to be able to train students who want to enter these 21st-century economy jobs that require AI skills. And a lot of work is being done by the Department of Labor. They've awarded nearly $100 million in AI skills training programs. And these are the things that are equipping our next generation of Americans to take full advantage of this technology.
Rep. David Rouzer (R-NC):
In my last minute and 20 seconds here, AI has the potential to revolutionize the industries, all industries really, but particularly of interest to me is agriculture and transportation. Everything from precision farming and crop yield optimization to traffic management and autonomous vehicles. And across the country, these sectors are showing growing interest in leveraging AI obviously to improve efficiency and drive innovation. Can you talk about federal agency coordination in that space?
Michael Kratsios:
Absolutely. I think what's so unique about this technology, and why I particularly like the role that I have in my office, is that, as you say very correctly, this is a technology that is touching so many different industries. From the teams at the Department of Agriculture that are working on the way it impacts ag, to the folks at FAA working on drones and DOT working on autonomous vehicles, we try to be the hub that brings all that together. And through the National Science and Technology Council, we bring agencies together to talk about those issues. And what we tend to see, and where we try to deconflict, is where there's agency overlap. If you're trying to use drones on your farm, for example, there are FAA equities, DHS equities, and ag equities, and we're the agency that can help bring all those people together at the table. But we see a huge potential impact for the country.
Rep. David Rouzer (R-NC):
Well, thank you for your great work. I yield back.
Rep. Jay Obernolte (R-CA):
Gentleman yields back. We will hear next from the gentleman from Virginia, Representative Subramanyam. You are recognized for five minutes.
Rep. Suhas Subramanyam (D-VA):
Thank you, Mr. Kratsios for being here. Good to see you again. I'm a former OSTP staffer. I think we transitioned our office to you when you first started in the Trump administration. So thank you for stepping up to serve again. And I know OSTP certainly is a very important place for this committee and I appreciate all the work you've done for OSTP. One of the things that both of us have worked on is attracting really good tech talent to government. And whether it was the US Digital Service or the Presidential Innovation Fellowship, and now I think, I don't know if you're involved in the US tech force, but that's kind of the newest branding of essentially trying to get the people who can build an app to get your pizza in five minutes, to get you your social security benefits in five minutes or help with some of these big issues in technology, both create standards as well as maybe help develop the basic research that a private industry can use.
But we're really concerned. Some of my constituents, for instance, worked at NIST, and they fired over 400 employees. NSF has lost over a third of its staff. And so how do we figure out this project? Because we all agree here that technology and innovation are critical and government plays a role in that. How do you reconcile trying to recruit for this US tech force with the firings of technologists, who I think are right now very nervous about working in government?
Michael Kratsios:
I tend to kind of view those as separate issues. I think on the first one, relating to bringing technical talent into government, I couldn't agree with you more. I mean, we worked together on some of those issues, and I did throughout the first Trump term. I think what's unique about the Tech Force Initiative and what OPM is driving with it is the buy-in from the private sector, something we haven't seen in previous programs. And I think that's special. There were essentially calls out to the tech community saying, "Look, it is a national imperative to have the very best technologists in government working on the problems that will impact American citizens." And because of the great leadership of the president, we were able to get so many companies to step up and say, "Yes, I'm willing to allow some of my employees to go do a tour of duty in government," and to be assured that when they come back on the other side, they won't be struggling to find a job, wherever it may be.
So I think this unique partnership and this coming together of the nation to understand that you can actually use technology to solve these huge problems for the American people is one of the highest callings. So we're very excited. The latest number I heard was over 35,000 Americans have put forward interest in participating in tech force. That's insane. That is incredible. That is something we should all be celebrating in this entire committee. The fact that we have so many great Americans that want to step in, move their families and their lives to DC to solve these problems for Americans is just incredible. And I really want to applaud OPM and the great team there for it. I think the second piece around personnel at agencies, I think as any business thinks about the way that they can best optimize what they're trying to deliver, what's in their statutory mandate and what the president wants to execute, there is no reason not to re-look holistically at how organizations function.
And I think a lot of that was done in the first term. And I think a lot can be said about the misdirection of some of these agencies as we came into office. So I think the American people deserve leadership at these agencies that is willing to make the hard decisions in order to right-size these agencies for their objectives and their statutory mandate.
Rep. Suhas Subramanyam (D-VA):
I guess you're saying that the NIST firings and RIFs were essentially in support of the president's sort of goals when it comes to science technology, but I don't understand. Could you explain more why we should be firing 400 people at NIST?
Michael Kratsios:
I'm not familiar with the particular firings at NIST, so I don't want to speak for the Secretary of Commerce or anyone there, but I do think broadly when it comes to the way that you think about your personnel and the team that you have, for anyone who's worked at a company or anywhere else, you want to field the team that can best execute on what you're trying to accomplish. And I think there was a very appropriate look at how these organizations are structured and the way that we can best deliver results for everyday Americans.
Rep. Suhas Subramanyam (D-VA):
We'll agree to disagree on that last part. But the second part is, you've also been working on this, and I've been working on AI safety, AI risks, and how to mitigate those risks for a long time. And some people may think that there's no work being done on this, but they're very wrong. There's actually a lot of work happening on this, and has been for a long time. And I know you've been supportive of that. Just very quickly, since we have a little bit of time left: how do you see the federal government's role in supporting the work that's happening on mitigating the risks of AI, and in finding the right balance of promoting innovation, rather than, let's say, full bans on products, while still addressing the real concerns that people have?
Michael Kratsios:
Yeah. I think my short answer to that is there's a very important role for NIST and CAISI to play in promulgating advanced metrology on model evaluation. And that is something that can be used across all industries when they want to deploy these models. You want to have trust in them so that when everyday Americans are using them, whether it be medical models or anything else, they are comfortable with the fact that they have been tested and evaluated. And NIST has a very special place in being able to set standards around test and eval.
Rep. Suhas Subramanyam (D-VA):
I yield back. Thank you.
Rep. Jay Obernolte (R-CA):
Gentleman yields back. We'll hear next from the gentlewoman from South Carolina. Ms. Biggs, you are recognized for five minutes.
Rep. Sheri Biggs (R-SC):
Thank you, Chairman Obernolte. I'm grateful to have a fellow South Carolinian here today, Director Kratsios, for coming and testifying.
Artificial intelligence is an emerging technology with the potential to drive efficiency in business, to spur a renaissance in the energy sector, to revolutionize scientific research, and to improve the everyday lives of all Americans, like my constituents in the Third District of South Carolina. At the heart of the AI buildout are a set of President Trump's executive orders, including the AI Action Plan and the Genesis Mission. It is critical that we get the AI buildout right to ensure American technological and scientific dominance for future generations. We are not competing alone. Adversarial nations like China are racing the United States to complete a full deployment of AI, and whoever wins will dominate the global market.
One aspect of this race is open-weight AI. Open-weight AI is AI whose trained parameters are public knowledge. This allows end users to customize the AI for their specific needs with their own data, and affords them greater privacy, since the models can be run locally by the end user. Right now China is starting to take the lead in this space, with over a dozen open-weight models on the market, while the US has mostly focused on closed models, models whose parameters are not public knowledge. This actually gives China a large geostrategic advantage in many countries and with companies that prefer to locally host and customize AI models. So, my question is, how does the proliferation of Chinese open-weight AI models on the global market pose a risk to the dominance of the American tech stack, and how can we combat the Chinese in open-weight models?
Michael Kratsios:
This is actually a very good question and something we think a lot about. I think the first way to think about it is the reason why our adversaries and our competitors are pushing ahead with emphasis on open source is because they are behind. And the clear answer and result and strategy you want to take if you're behind is you try to open source it to try to gain as much usage as you can for your suboptimal model. That being said, there are tremendous benefits associated with having open-source models available. Many individual companies and governments around the world will want to use it for all the reasons that you so correctly stated in your question.
For us, how do we want to ... We want to build a vibrant open-source large language model ecosystem here in the United States; that was emphasized in the action plan. There has been funding put out by the National Science Foundation to help drive open-source research. We continue to encourage our great American AI labs to push forward with open initiatives and open-source projects. By some metrics, the Chat OSS model, which is an open-source model here in the US, is one of the most downloaded models in the world today. There are also a number of startups doing tremendous work in this space that will be releasing their open-source frontier models, hopefully this year; Reflection is one that comes to mind. So, we as a government want to encourage a competitive ecosystem that can allow these open-source models to succeed. And as we build out our AI export program, it will not surprise me if a lot of the countries we're targeting with our AI stack are interested in open source. We want to have a viable American option.
Rep. Sheri Biggs (R-SC):
Thank you so much. Quickly, I'll try to get my second question in. The United States must also balance global exports with security. We cannot allow our adversaries to use our tech in order to displace us as the global leader. So, the AI Action Plan tasks the Department of Commerce with exploring location verification technologies to prevent our best chips from being used by adversaries. What are some examples of this, and how can agencies in Congress support this mission?
Michael Kratsios:
I think generally we want to create an export control environment that protects the crown jewel technologies that drive AI development from getting into the hands of our competitors. It's something that Secretary Lutnick and the president and the administration have been very clear about. So, the top-end NVIDIA chips, for example, continue to be export controlled, and it's something that our team is going to track very closely.
Rep. Sheri Biggs (R-SC):
Thank you. And my time has expired, so I yield back.
Rep. Jay Obernolte (R-CA):
The gentleman yields back. We'll hear next from the gentlewoman from Delaware, Ms. McBride. You are recognized for five minutes.
Rep. Sarah McBride (D-DE):
Thank you so much, Mr. Chairman and Ranking Member Stevens. Thank you so much, Director Kratsios for joining us today. You mentioned earlier data centers and the administration's concern with mitigating community impact. And I've certainly heard from countless Delawareans that are worried about data centers being constructed in their communities, straining our energy and water supply and increasing the cost of utilities for all of us. That's why I fought for funding and legislation aimed at developing liquid cooling technologies produced in my state of Delaware that will ensure data centers are smaller and more environmentally friendly.
This is a common sense investment in a smarter, more sustainable future that could shrink physical data center footprints by as much as 60% and cut energy consumption by as much as 90%. And so, I'd love to hear from you about what the administration is doing specifically to mitigate the effects of data centers on their surrounding communities. And is the administration utilizing its funding capacity to incentivize liquid cooling technologies as part of that mitigation strategy?
Michael Kratsios:
I think many of you may have seen the Truth that the president put out a couple of days ago, where he made clear the administration's position on data centers and electricity prices for ratepayers. What he wrote there was that he essentially wanted to make sure that American technology companies who are building data centers are footing the bill for their technology build-out. And he commended an announcement by Microsoft that happened just this week, where they have essentially taken that message to heart and have committed that anytime they build a data center, they will pay 100% of the generation costs. We'll continue to work with technology companies to make sure that this is a reality, because at the end of the day we want to make sure that ratepayers are protected.
The other thing that the administration has done, which was a big change from the previous administration, was a change in the way that we can approve behind-the-meter energy production. Under the Biden administration, behind-the-meter production of electricity was banned. And because of that, if you were a data center provider, you had to by law tap into the larger grid for that energy. FERC under President Trump changed that, and now you can do behind-the-meter build-out. And that's something that obviously is beneficial to general ratepayers.
I think probably more in my portfolio is the question around what kind of technologies can we build to make data centers more efficient. I don't know the specifics about the liquid cooling, but at first glance, that sounds amazing. Let's figure out who's working on that and get some funding towards it. My hope would be as a free market person that a lot of these data center providers and chip developers would be extraordinarily incentivized to try to be more productive. So, hopefully I'm catching on to that tailwind and finding places where more federally funded R&D could be beneficial, could help turbocharge that.
Rep. Sarah McBride (D-DE):
Great. I look forward to hopefully working with you specifically on the benefits and potential of that liquid cooling technology. I'd also be remiss if I didn't mention the administration's AI Action Plan, released last year. You've talked about it. Among other things, it suggests that we need clear national standards for people to truly trust and use AI. And I heard Chair Babin earlier say that he supports consensus-based standards as well, and I'm happy to say I agree with that. That's why I was proud to introduce the READ AI Models Act with Chair Obernolte. This bill would direct us to standardize evaluation documents for AI models. I like to think of it as nutrition labels for AI models: through a consensus-based process, consistently documenting the information needed to assess the safety, performance, and risk of AI models in a modular and voluntary manner. This would bring transparency to AI in a way that's simple, consistent, and accessible. The administration's AI Action Plan sees national AI standards as a tool to build trust. Do you think that standardized modular evaluation documents for AI models like the one I described would be helpful?
Michael Kratsios:
I think things like that are obviously things to look at and think about and work on. To me, when I broadly think about AI standards, the first thing that comes to mind is that there is an unfortunate overemphasis on the evaluation of a small handful of frontier models. The reality is that most of the implementation that's going to happen across industry is going to happen through fine-tuned models for specific use cases, trained on specific data that the large frontier models never had access to. So, in my opinion, the greatest work that NIST could do is to create the science behind how you measure models, such that anytime you have a specific model for finance, for health, for agriculture, whoever's attempting to implement it has a framework and a standard for how they can evaluate that model. Because at the end of the day, the massive proliferation is going to be through these smaller fine-tuned models for specific use cases.
Rep. Sarah McBride (D-DE):
Well, definitely agree that those are worthy goals. And I hope you'll work with me and the chair on our legislation; we welcome feedback and collaboration. So, thank you very much. I yield back.
Michael Kratsios:
Thank you.
Rep. Jay Obernolte (R-CA):
The gentlewoman yields back. We'll go next to my colleague from California. Mr. Fong, you are recognized for five minutes.
Rep. Vince Fong (R-CA):
Thank you, Mr. Chairman. First, I'd like to thank you, Director Kratsios, for your work on advancing AI policy and AI education specifically. Your being chair of the AI Education Task Force is important and foundational, especially as the White House has prioritized AI literacy across our society. And I want to thank you for working with me on my bill, HR 5351, the NSF AI Education Act, which would address gaps in research, education, and workforce development in AI by allowing the NSF to award collegiate AI scholarships, create public-private partnerships when it comes to AI fellowships, and establish up to eight regional centers of AI excellence at community colleges and career technical education schools. I wanted to follow up on my colleague from North Carolina. How is your office working with NSF and other agencies to ensure rural students and their communities have increasing access to STEM education and AI literacy opportunities so they can take full advantage of AI applications with confidence and responsibility?
Michael Kratsios:
So, the NSF Advanced Technological Education, or ATE, program is one that we work on with NSF, and that focuses on two-year institutions of higher education with the goal of supporting the education of technicians for the high-technology fields that drive a lot of the nation's economy. The other program, which you may be familiar with, is the Experiential Learning for Emerging and Novel Technologies, or ExLENT, program. That supports experiential learning opportunities for individuals from all professional and educational backgrounds, and it results in increased access to, and interest in, career pathways in a variety of technology fields. And actually, I think Cal State Fresno is a recipient and participant in that.
Rep. Vince Fong (R-CA):
Great. Well, I certainly invite you to my district, to Fresno State and Cal State Bakersfield. We have a number of community colleges and trade schools that are actively working to develop a skilled workforce when it comes to emerging technologies. Just to build on the programs you've outlined, how can my educational institutions integrate AI education and training into these programs? Is there a way to interface with you and your office, and what can Congress do to support these partnerships?
Michael Kratsios:
I think from a congressional standpoint, we should continue to support these, both through authorization and appropriation. I think these are important programs for driving the reskilling and skilling of Americans broadly. To the extent there are specific organizations that want to be involved, feel free to have them connect with our office, and we can make sure they're linked up with the appropriate folks at NSF.
Rep. Vince Fong (R-CA):
Perfect. And then in terms of the AI action plan, I know it charges the DOE and NSF to invest in the development of AI test beds for piloting AI systems in secure real-world settings so that researchers can prototype new AI systems and translate them to the market. What are some ways that utilizing these AI test beds would benefit American competitiveness and Americans overall?
Michael Kratsios:
Yeah. Test beds are so critical. I think anyone who's worked on technological innovation or broad scientific discovery knows you need places where you can do the experimental work and get the feedback loop of seeing what happens when you try something, then going back to the lab bench, tinkering with it, and going back out. NSF has funded a few test beds related to AI and is looking to do more. And I think what's critically important is that we have these test beds both for folks who do the metrology work at places like NIST and for people who do more of the basic science at NSF. So, we're big supporters of these test beds, and we think that they're the true accelerants to innovation.
Rep. Vince Fong (R-CA):
Perfect. Well, I want to thank you for your work again, and I look forward to partnering with you, especially when it comes to your work on the AI education task force. And with that I yield back.
Michael Kratsios:
Thank you.
Rep. Jay Obernolte (R-CA):
The gentleman yields back. We'll hear next from my colleague from California. Representative Rivas, you're recognized for five minutes.
Rep. Luz Rivas (D-CA):
Thank you, Mr. Chairman, for holding today's hearing, and Director Kratsios for being here today to discuss AI. AI and AI-generated content is becoming more prevalent in every aspect of our lives. It's something that we as a Congress must become more proficient in and better understand, and we must help ensure the technology is being used responsibly, safely, and effectively. I hope that we can find bipartisan solutions toward AI proficiency as we begin this session of Congress.
Last August, I held a round table with AI leaders in my district, where I heard about how schools and small businesses are successfully using AI to streamline processes, but also heard about the pitfalls of AI right now, especially around literacy and safety.
This is an issue that is a focus of mine. Last fall, I introduced my AI for All Act to improve AI literacy and education across the country. My bill will help ensure that the federal government has a national strategy to improve AI literacy, addressing how AI has evolved and will evolve and how to use AI safely and effectively, and ensuring that we can continue to be a global leader in AI. While I know that you have spoken in the past about the White House Task Force on AI Education, can you describe your vision for the federal government's role in increasing AI literacy, and what role do you see Congress playing in that?
Michael Kratsios:
Absolutely. The Trump administration is committed to ensuring that all American workers are prepared to leverage AI. And we've done a few things. I think first is promoting K-12 literacy and proficiency among America's youth, and that's been a big priority of the task force. We're also working to scale apprenticeship programs and other novel approaches to post-secondary education to align with industry-driven skill requirements that we're seeing increasingly become important. We're also working to develop regional AI learning efforts to promote state and local government and small business adoption of AI, as you mentioned.
And lastly, we've been designing new models of workforce innovation to match the speed and scale of the AI-driven transformation, including rapid retraining and reskilling of displaced workers. I think one of our cabinet secretaries who has spoken about this issue a lot is actually our Small Business Administrator, Kelly Loeffler. And what she often says is that when she talks to small businesses in America, as you mentioned, they see a true impact on the way that they operate. More Americans work for small businesses than any other type of business in the US, so the impact can be quite dramatic, and especially when you're working in environments where the total employees are a handful, the leverage you can get through this technology can be quite dramatic.
Rep. Luz Rivas (D-CA):
But what role do you see Congress playing in implementing these programs and your vision for AI?
Michael Kratsios:
Absolutely. I think, to me, support that Congress can put behind the efforts that the AI education task force is pushing would be very well received. The First Lady has been very vocal and very public about the importance of helping demystify this technology for America's youth. We've established a Presidential AI Challenge, where students from around the country are competing and will ultimately come to the White House later this spring to present their projects to the president and to the team at the White House. We want to essentially have a lot of these programs last into the future, and we would love to work with Congress to find ways to identify the most impactful and most beneficial AI education programs and make sure that we can turn them into law.
Rep. Luz Rivas (D-CA):
Okay. Well, thank you for sharing your thoughts on this. I want to pivot and close on the idea of an AI moratorium that you and the Trump administration are putting forward. I've been leading my House colleagues against this idea, going back to the reconciliation process last summer, when the same concept was stripped out of HR 1. The newest version of this concept would withhold essential broadband funding from states at the whim of the Trump administration. We have already seen how the administration tries to use federal funds as a weapon in the president's ever-present crusade against perceived enemies.
Let's just call this moratorium what it is: another handout to billionaire tech bros and nothing to actually help working-class families. AI technology is growing quickly, and we cannot punish states for adapting to its ever-changing demands while the federal government attempts to catch up. As an engineer and former state legislator, I strongly support technological innovation and reject the premise, which you and the Trump administration have stated via executive order and in the AI Action Plan, of withholding federal funding from states.
We should be learning from states leading in AI policy and innovation like my home state of California and not punishing them or withholding funding. Thank you, and I yield back.
Rep. Jay Obernolte (R-CA):
The gentlewoman yields back. We'll go next to the gentleman from Utah. Mr. Kennedy, you're recognized for five minutes.
Rep. Mike Kennedy (R-UT):
Thank you, Mr. Chair, for convening this. And Director Kratsios, thank you very much for your testimony; it's been impressive to watch. I commend President Trump for putting forward a plan to secure America's technological leadership, drive productivity, and create high-quality jobs. The United States must move decisively to ensure innovation happens here under American values, market competition, and a commitment to national security. Beating China in economic competition requires speed, scale, and confidence in our private sector. That means that we need to reduce unnecessary regulatory barriers, protect intellectual property, invest in talent and infrastructure, and ensure that US companies can innovate faster than state-directed competitors abroad.
I'm proud to be the sponsor of the Genesis Act to codify President Trump's Genesis executive order. And I'd like you to talk a little bit more about the Office of Science and Technology Policy as to how you want to work with the Department of Energy and other agencies to execute the Genesis Mission.
Michael Kratsios:
So within the Genesis Mission, OSTP is responsible for general leadership of the mission, including interagency coordination, industry and academic partnerships, and international engagement. For us, as I mentioned a little bit earlier, I think the key priority over the next six months will be to figure out how best to bring other agencies into the Genesis Mission. This was designed to be a whole-of-government approach to applying AI to science, and it is not just singularly a DOE endeavor. The second piece is one that I encounter almost on a weekly basis when I speak to my counterpart tech ministers from around the world: all of them saw the Genesis Mission and they want to be part of it. And it's a reminder to me and to my team and to all of us here of how special the United States is from a science and technology standpoint.
Everyone dreams and aspires to be us, to work with us. And I think we're at an incredible moment in history where we honestly can find partners and figure out what they have that can help augment our system. So, I think there's a lot to be done there where we can partner with like-minded allies to show the world that we can stand up together when it comes to a lot of the scientific innovation. And I think just in the last piece on Genesis, I think you've seen this, but we've announced over 20 industry partners have already agreed to be part of Genesis. And I think to me, why that's so special, it's a reflection of the evolving science and tech ecosystem we have in the United States. I've used this number before, but broadly speaking, the US economy spends about a trillion dollars a year on R&D.
Today, roughly 70% of that R&D is paid for and conducted by the private sector. And that's why this partnership between the private sector and Genesis is so important. We cannot do this alone. We as a country have succeeded because we are a larger ecosystem that incorporates private sector, federal government, academia altogether, and those are the partners we want to bring to Genesis.
Rep. Mike Kennedy (R-UT):
Thank you for that. And that leads to my next question. I'm from Utah, and we have a regulatory sandbox on artificial intelligence. We also have an agency of artificial intelligence in the State of Utah. And I would like your comments on this: my colleague from California talked about how the possible moratorium at the federal level would impede rather than promote our opportunities, but many states, and I'll point to Utah specifically, have positive policies to promote artificial intelligence and its use for our society. How is it that we can work in a collaborative fashion, not just with industry but also with states, to promulgate positive policy that's going to help us not just use artificial intelligence in the best way, but also beat our economic and artificial intelligence competitors throughout the world?
Michael Kratsios:
Absolutely. I mean, when we think of creating ultimately a sensible national policy framework, states that are creating regulatory sandboxes or encouraging innovation are doing the types of things we would love more states to be doing, not fewer. We want more of that. And I think we want to create a framework that encourages that, an environment where our innovators can safely test and deploy their technologies.
Rep. Mike Kennedy (R-UT):
Good. And I encourage that. I'm working with my Democratic colleague, Representative Riley, to introduce the Boosting the Rural STEM Pipeline Act to invest in skilled STEM educators who want to work in rural communities. How is OSTP working with the National Science Foundation and states to equip educators so that they can actually help our future generation of artificial intelligence experts and continue the process forward?
Michael Kratsios:
That is such a good question. And I think that the president focused on that exact issue when he stood up the education task force. If one reads that executive order, it's clear that when we talk about K-12 AI education, it is not singularly about the students. It has to be about the educators and the parents as well. The teachers themselves need to have the tools that they can use to not only learn about this technology, but also teach their students about it. So a big emphasis of the task force has been on how you bring educators into the fold, how you bring more teachers to feel confident and empowered to teach this technology to their students.
Rep. Mike Kennedy (R-UT):
My time has expired, but last point I'd like to make is, how will the OSTP work with artificial intelligence with ... I'm a family doctor and with healthcare. I do believe that there are great opportunities for artificial intelligence to enhance our capacity to deliver high-quality care at lower costs. And if I can work with you and your agency, then I would be happy to do that. Thank you, Mr. Chair. I yield back.
Rep. Jay Obernolte (R-CA):
The gentleman yields back. We will hear next from the gentlewoman from Oregon. Ms. Bonamici, you're recognized for five minutes.
Rep. Suzanne Bonamici (D-OR):
Thank you to the chair and ranking member, and thank you, Director, for being here today. And maybe not coincidentally, I'm in between two hearings, one in the full Education and Workforce Committee on AI today, where we were just discussing what you mentioned: the need in the K-12 system for educators with the appropriate professional development. And we know that's not happening now, but also parental involvement, and lots of concerns about privacy and security. So the administration's AI Action Plan frames leadership pretty much as a race focused on infrastructure, speed, and deployment, but leadership in AI also depends on people. It depends on people as much as it does on platforms: students who learn with these tools, educators who teach alongside them, and workers who must adapt as AI reshapes jobs. I think the disconnect between the administration's plan and recent actions, including significant cuts to research, workforce pipelines, and data infrastructure, raises serious questions about readiness and the potential to be ready.
We can't lead in AI by building faster and training slower. So if our education and workforce systems can't keep pace with the needs of evolving work in AI, if educators lack that support, if workers lack pathways, we risk forfeiting long-term leadership, and Americans will be left behind. So, Director, your plan recommends that the Departments of Labor and Education, which is a bit baffling because the administration is trying to shut down the Department of Education, but it recommends that the Departments of Labor and Education and NSF prioritize AI skill development as a core objective of education and workforce funding streams. So, what concrete steps will OSTP take across those agencies to develop guidance that schools and training providers can actually use, and how will the focus on dismantling the Department of Education disrupt those steps?
Michael Kratsios:
I think you bring up a very important and salient point around workforce and AI education more broadly. And I think that the timeline is actually interesting to think about. Three months before the President and the White House released the AI Action Plan, the president signed the executive order to create the education task force. And to me, tasking me to launch that task force, I take that very personally. To us, we believe that working on and thinking about the education and workforce issue was so important that we wanted to get that out even before our full plan was due. And I think that shows the emphasis that our administration has on this. The First Lady has co-chaired some of those task force meetings, and it just shows how much of a priority it is for us across the administration.
I think you asked a little about actions that we're taking. Within the task force itself, we've launched a couple of things. We've discussed the Presidential AI Challenge, where we're trying to bring as many K-12 students as possible into the fold to work hands-on with a lot of these AI technologies.
The other piece that we have done is a recognition that we alone as an administration don't have all the tools necessary. And there is a large set of private sector, civil society, and nonprofit organizations who are very interested in empowering teachers and students and parents to use this technology. And through the leadership and the direction of the First Lady, we had over 200 commitments from all these different companies and organizations to provide resources free of charge to teachers and to students and to educators, to be able to leverage this technology and to better be able to teach the skills necessary for AI. Now, separate from the task force efforts themselves, the Department of Labor has itself awarded over $100 million nationwide for AI skills training, and that's been a big effort that the secretary has had there, and a lot more.
So those are examples of the types of work that we're doing across-
Rep. Suzanne Bonamici (D-OR):
And I appreciate that. I do want to note that I'm a bit skeptical about totally industry-provided professional development for educators, and the funding that comes for professional development is also being cut, and so is educational research, which is so critical. You also have an AI workforce research hub under the Department of Labor to produce analysis, scenario planning, and insights. So what outcomes will that hub measure, and when will Congress see some deliverables from that?
Michael Kratsios:
I'll have to check with my colleagues at the Department of Labor. I'm not familiar with that particular program.
Rep. Suzanne Bonamici (D-OR):
Okay. And finally, the action plan calls for these early pipelines through general education, career and technical education, and registered apprenticeships. I agree that these are important, but what supportive resources will there be for educators and institutions so this doesn't become an unfunded mandate?
Michael Kratsios:
Yeah, that's a great point. The president signed an executive order titled "Preparing Americans for High-Paying Skilled Trade Jobs of the Future," and that calls for scaling up industry-driven apprenticeships to over a million a year. And that's something that the Department of Labor and the secretary there have charged ahead with. So I think there are a lot of programs across all of our agencies that are geared toward helping solve this education and workforce issue, which is so central to our agenda.
Rep. Suzanne Bonamici (D-OR):
And I appreciate that. But once again, the dismantling of the Department of Education and the cuts to professional development and education research seem contrary to the goal. We know that AI leadership requires more than fast deployment; it requires trust, preparation, and a workforce that's ready to use these tools responsibly. I am developing a comprehensive human-centered framework so AI can enhance opportunities and not widen gaps, because we know speed alone will not deliver leadership without coordination and sustained investment in education. And I yield back. Thank you, Mr. Chairman.
Rep. Jay Obernolte (R-CA):
The gentlewoman yields back. We will hear next from the gentleman from Florida. Mr. Webster, you're recognized for five minutes.
Rep. Daniel Webster (R-FL):
Thank you so much, Mr. Chairman. Director Kratsios, thanks for being here. The Biden administration ended the Department of Justice's China Initiative, a program President Trump first initiated to protect American research when it came to critical technology such as AI. Was it damaging to terminate the China Initiative? And if it was, how much? And what can we do to make it better?
Michael Kratsios:
It certainly was damaging. I think we have recognized, and the China Select Committee has also brought to light, a lot of attempted infiltration by nefarious actors of our research enterprise across the Department of War, the Department of Energy, and many of our other research agencies. So in my opinion, I think it's important that we remain vigilant in monitoring, tracking, and setting up the right safeguards to protect our research ecosystem. The Department of War, you may have seen, put out new guidance just a few days ago relating to research security, and there are broader efforts across the administration to make sure that all the great work that we do using government-funded dollars to create the next great technologies that are going to be powering this country is protected from bad actors.
Rep. Daniel Webster (R-FL):
So what is the United States doing to ensure that AI technical standards are adopted by international standards organizations?
Michael Kratsios:
Yeah, that's a really good question. And it's something that we have directed NIST to prioritize. And a big part of it relates to our larger effort around the American AI export program. If we can use the expertise and the relationships that NIST has in order to promulgate international standards around AI that align with American values and the way that we think about these issues, it is more likely and easier for our technologies to promulgate globally and ultimately be imported by a lot of our partners and allies.
Rep. Daniel Webster (R-FL):
Would making the underlying source code more widely available, would that help with getting American standards adopted?
Michael Kratsios:
I think to some extent, the growth of open source models and the ability for the US to have open source alternatives can be very helpful in improving the odds and the velocity of the export of the American stack. And it's something that we continue to encourage industry to focus on because many countries and many governments and many companies around the world are most interested in making use of open source.
Rep. Daniel Webster (R-FL):
What would happen if China's standards were adopted for AI?
Michael Kratsios:
To me, I think I look back to the time that I spent in the first Trump administration dealing with the telecom issues of that time. And there was a very concentrated effort by the PRC to promulgate their 5G standards and to export their Huawei connectivity stack to the global south. I spent far too long running around the world trying to talk to other tech ministers, making the point that this was potentially dangerous. There were backdoors to it and there were better western alternatives to it, but essentially the damage had already been done. So to us and to our administration, what we're trying to prioritize through our export program is to make sure that the US is the first mover. We're at a very important singular moment in time now where the US has a very distinct and very obvious lead in all levels of the AI stack.
We have the very best chips, we have the very best models, and we have the very best applications, and how long that will last, we're trying our best to keep that lead, but the Chinese are running ahead just as fast. So for us, because we have the best stack in the world, everyone wants it, and we should be doing everything we possibly can to get that stack in the hands of our partners and allies. And as you know very well, this is a marked change from the way that the Biden administration thought about this issue. One of the biggest moves that the president made early last year was to turn the page on the disastrous diffusion rule, where the Biden administration thought it was in the US interest to withhold American technology from partners and allies and essentially leave an open playing field for the Chinese to export their technology. So for us, we believe we have this window in time where we can make the American AI stack the dominant stack globally, and we're racing ahead to achieve that.
Rep. Daniel Webster (R-FL):
Thank you very much. And Mr. Chairman, I yield back.
Rep. Jay Obernolte (R-CA):
The gentleman yields back. We'll go next to the gentlewoman from North Carolina. Ms. Foushee, you are recognized for five minutes.
Rep. Valerie Foushee (D-NC):
Thank you, Mr. Chair, and thank you to our witness for being here today. In the CHIPS and Science Act, Congress directed the Department of Commerce to establish an institute in the Manufacturing USA program that could support the virtualization and automation of maintenance of semiconductor machinery. After a multi-year competitive process, the Department of Commerce established a Manufacturing USA institute called SMART USA in my district in Durham, North Carolina, which focuses on using AI and digital twin technology to improve the semiconductor manufacturing process. This institute attracted over 120 industry and academic partners, boasting industry heavy hitters like NVIDIA and the Semiconductor Industry Association. Last month, the Trump administration canceled this award with the justification that it didn't meet the administration's priorities.
You helped to coordinate manufacturing research programs, including the Manufacturing USA network, through the National Science and Technology Council. Given that the AI Action Plan specifically calls for accelerating the integration of AI tools into semiconductor manufacturing, it is hard to see the cancellation of a $285 million investment in North Carolina's semiconductor ecosystem as anything but a purely political decision. Why was this award canceled, and is punishing purple states a higher priority than investing in semiconductor jobs and innovation?
Michael Kratsios:
Thank you so much for that question. I unfortunately don't know any details relating to that. I very happily will connect with Secretary Lutnick and his team and try to get you an answer.
Rep. Valerie Foushee (D-NC):
Okay. Well, thank you for that. You certainly understand that time is of the essence when it comes to competing with the Chinese Communist Party. How does canceling hundreds of millions of dollars of investment in semiconductor manufacturing help us compete with China?
Michael Kratsios:
Again, I'm not familiar with that particular cancellation, but I will say, in my opinion, there has been no president more committed to driving American leadership in semiconductors than President Trump. I've watched it firsthand as we bring the largest semiconductor fabrication companies in the world into the US to build fabs. We've done tremendous work in creating a robust ecosystem here. We have seen big announcements in New York around large fabrication facilities being set up, and the CHIPS office at the Department of Commerce is humming in order to create an environment where the next great semiconductor technology is developed here in the United States. So I'm proud to work for a president and an administration that have, for the first time, actually prioritized semiconductor development in the US.
Rep. Valerie Foushee (D-NC):
Well, let me just say for the record, North Carolina has over 7,000 people employed in this sector. North Carolina hosts over 110 semiconductor-related companies, and North Carolina exports over $1.2 billion in semiconductor-related products. President Trump's most recent executive order calls on the Secretary of Commerce to consult with you to identify laws that require AI models to alter their truthful outputs. I have several questions for you about this, so I ask that you keep your answers short and concise. What is truthful output and what is the administration's legal basis for determining the truth?
Michael Kratsios:
I think that interpretation was made by OMB when they attempted to create the guidance for implementing that order. So I'd refer you to that memo.
Rep. Valerie Foushee (D-NC):
Can you describe your process in determining what the truth is?
Michael Kratsios:
Again, I would point you to that memo. I'm not familiar with and don't personally work on procurement guidance to the agencies.
Rep. Valerie Foushee (D-NC):
So I'm going to go ahead and ask this last question anyway. When you say truthful output, are you referring to generative AI alone, or would all machine learning models, such as classifiers used to detect breast cancer, also need to be prevented from altering truthful outputs?
Michael Kratsios:
Again, I would refer to that memo. I'm not familiar with the details of it.
Rep. Valerie Foushee (D-NC):
Mr. Chairman, I'm going to yield back the balance of my time.
Rep. Jay Obernolte (R-CA):
Gentlewoman yields back. We'll hear next from the gentleman from Florida. Mr. Franklin, you're recognized for five minutes.
Rep. Scott Franklin (R-FL):
Thank you, Mr. Chairman. Thank you, Director Kratsios, for being with us this morning. I had the privilege in the last Congress of serving on the AI task force with Chairman Obernolte, and I'm more convinced than ever that this is going to be the most transformational technology of our lifetime. So I appreciate your efforts in shepherding the government's role in that. It's not the full role, but a part of it, and I appreciate your understanding of that. Earlier this Congress, I led a letter with Congressman McCormick, also on the science committee with me, raising concerns about the Biden administration's proposed AI diffusion rule at the Bureau of Industry and Security, which we believed risked overbroad export controls that could slow US innovation and push some of our allies toward non-US suppliers.
Thankfully, the Trump administration withdrew that rule, and the AI Action Plan specifically includes a pillar on international leadership and security. It also encourages pursuing a new, creative approach to export control enforcement. How do you think Congress should refine export control authorities so they're more narrowly targeted, enforceable, and aligned with the goal of keeping the US and our allies at the forefront of AI development?
Michael Kratsios:
Yeah. I think export controls are a very important tool in our toolkit to be able to achieve our national objectives. I think I would defer to Secretary Lutnick and the folks at BIS on any changes that they would like. I think to date, we have had the tools that we need in our toolkit to execute on what the president's trying to accomplish, but it's certainly worth a conversation with that team.
Rep. Scott Franklin (R-FL):
Okay. Thank you. As an appropriator, also, I'm on the Energy and Water Subcommittee, and we're focused on ensuring Congress backs the president's AI Action Plan with real resources. It's one thing to talk about policies and statements, but then we've got to have the dollars to back it up. Just last week, the House passed the FY26 Energy and Water Appropriations Bill, which provided $8.4 billion for the DOE's Office of Science to support high-performance computing, quantum computing, and artificial intelligence research, and then another just under $1.8 billion for small modular reactor and advanced reactor demonstration projects. As the administration moves from strategy to execution, which program areas do you think are most critical to sustain in future appropriations cycles so Congress can be confident that we're fully supporting the infrastructure, computing, and energy foundations necessary to implement the AI Action Plan?
Michael Kratsios:
Yeah. I think the important ones, in no particular order: continuing to help fund the demonstration projects related to SMRs is absolutely critical. We are very close to having commercially viable SMRs in the United States. The Department of the Army believes that it will be able to have a functioning SMR at a military facility by 2028. And I think there's a lot more to be done in actually commercializing those. As for prioritization areas at DOE and the way that we should think about the funding there, I think being able to ultimately fully fund and support the Genesis Mission is the biggest priority for our administration. We believe this is the legacy-defining scientific endeavor of this administration. It's been launched out of DOE, but it's going to be multi-agency, and it's going to be the hub for what we believe is a truly transformational use of AI for scientific discovery. So those would be the two for me.
Rep. Scott Franklin (R-FL):
Okay, great. The action plan's second pillar emphasizes streamlining permitting for data centers, semiconductor facilities, and energy infrastructure. Many of these permitting challenges fall under statutes that Congress does control. Where do you see the greatest statutory bottlenecks today, and how can Congress modernize federal permitting processes in a way that supports AI infrastructure while still respecting environmental review requirements?
Michael Kratsios:
Yeah, I do think it never hurts to take a look at NEPA and the Clean Air and Clean Water Acts. There are obviously ways that you could improve those. But broadly speaking, what we have seen is that most of the bottlenecks in a lot of this actually come down to state and local provisions. We've done everything we possibly can to remove the regulatory hurdles, at least on the federal side. So that's why there's so much emphasis now on working with the people who want to build these data centers to put the burden on them to drive this and make sure that they're covering the costs of their deployment.
Rep. Scott Franklin (R-FL):
Great. Thank you. Thanks again for being here with us. And Chairman, I yield back.
Michael Kratsios:
Thank you.
Rep. Jay Obernolte (R-CA):
Gentleman yields back. We'll go next to my colleague from California. Representative Whitesides, you are recognized for five minutes.
Rep. George Whitesides (D-CA):
Thank you, Mr. Chairman. I want to thank you for your leadership on this issue and for calling this hearing. I also want to thank Director Kratsios for his service to the nation. Unlike the chairman, I believe your collegiate education is a big plus and so well done to a fellow Tiger. So a couple quick things to cover in... And why don't we start off with kids? So AI systems are increasingly integrated into platforms and tools that children use every day. What responsibilities should developers and deployers have to ensure child safety as these systems scale?
Michael Kratsios:
To me, I think we are working very hard as an administration to identify the gaps in the safeguards for children. I personally am passionate about driving a lot of this through our AI Education Task Force, which is not only trying to prepare young people to use AI but, more importantly, to improve AI literacy so that, as children, they can understand how to use this technology. I would encourage Congress to, again, work with the first lady, who is very dedicated to child safety when it comes to AI, and she's demonstrated that leadership through the Take It Down Act. But to me, this is something that is incredibly important, and as a new parent, it's very near and dear to my own heart.
Rep. George Whitesides (D-CA):
Thank you. I use AI more and more in my everyday life, and I am, I think in general, someone who is enthusiastic about the potential. That said, I think it is important for policymakers, particularly technically-informed policymakers, to consider the risks of the technology. How do you perceive the risk of recursive self-improvement? There are growing concerns about the possibility of essentially super intelligent systems, and clearly the administration's basic posture is sort of like all systems go full speed ahead. So how should responsible policymakers, and how are you thinking about that specifically? Because these systems are moving very, very quickly, and I think it's probably irresponsible not to have a plan for those conditions.
Michael Kratsios:
Yeah. To me, I think that's an area, for example, where federally-funded R&D can make a big impact. I think we should be funding researchers who are working on those issues and thinking about them. As a policymaker, I think attempting to regulate hypothetical harms that have not been proven in any demonstrable way, I think will end up actually hurting the AI economy and could do more harm than good.
Rep. George Whitesides (D-CA):
Yeah. We'll see. We'll see. I mean, that's an important thing when we're dealing with the future of the human race. Let me close in the last minute with a comment that is, I think, directed more toward the administration than to you. I think what's happened to American science is reprehensible, and the cuts that have been proposed, and that I fear are about to be proposed again in a second presidential budget, are attacking one of the core pillars of American strength. And I know, having worked closely with OSTP over the years, that your capacity to influence the senior levels of the administration has limits. But I think it is crucial that all of us who believe in the importance of science and innovation and technology speak up against the attacks that we've seen in the past year, both against funding and, more importantly, against the dedicated Americans, both in public service and funded by public funds, who are doing the work to make our world better.
I don't need you to comment back, but I think it's important that all of us on this committee continue to speak strongly as we approach what I fear will be another catastrophic presidential budget for American science. Thank you, Mr. Chairman. I yield back.
Rep. Jay Obernolte (R-CA):
Gentleman yields back. We'll go next to the gentleman from Georgia. Mr. McCormick, you're recognized for five minutes.
Rep. Rich McCormick (R-GA):
Thank you, Mr. Chair. It's absolutely an honor to have you here today. I'm really excited about this conversation and appreciate you being here. A couple things, Mr. Director, as we move forward. You have a big charge on keeping us on track, especially when it comes to technologies. One of the things I was worried about is the dissemination of chips globally. That's been vilified. There's been restrictions on that in the past. And of course, we don't want to give China the ability to work against us with our top technologies, but at the same time, we don't want them to replace us as an industry standard. And that's one of the issues we've had here in America, trying to figure out what is a sweet spot as far as sharing our chips.
I'm glad we've taken off the restrictions so that China can actually purchase them, so that we can continue to be the standard rather than the Betamax. We can be the VHS model, for those people old enough. I don't even know if you're old enough to remember that. But do you think that's the right track? In other words, by supplying the world with our chips, we remain the industry standard and aren't replaced by somebody else. Do you think that's the right track?
Michael Kratsios:
Generally speaking, I think the important thing when I think about chips is who has access to the most advanced chips, the ones driving the biggest impact on our ability to innovate in AI. And it's very clear now that our top chips, the Blackwells, are not available to the PRC. And the next set of chips that will be released by NVIDIA this year, the Rubins, are also not available to the PRC. Many of you may have seen the rule related to the H200 that was recently released. And I think it's important to speak about that for a second, because it shows, I think, the important nuance and thoughtfulness the administration has taken in dealing with this suboptimal chip, if you will. The aggregated export of that-
Rep. Rich McCormick (R-GA):
Just to clarify, suboptimal, for those people who are going to watch this later, just means that we're not sharing the leading-edge technology that we use. But basically-
Michael Kratsios:
That's correct.
Rep. Rich McCormick (R-GA):
Some industry standard type chips that are useful to China, helps with their industries, we're supplying, we're the industry standard, and it's a good thing for business, for Nvidia, for America, for industry in general as far as setting the bar for who sets the standards in the world economics, right?
Michael Kratsios:
Yes. And importantly, we're not opening the floodgates for the PRC to purchase as many H200s as they want. The aggregated export of the chips is capped at 50% of US customer volume for this specific chip.
Rep. Rich McCormick (R-GA):
Which is an interesting point too. We've also pointed out that people have said, "Oh, we're going to have a shortage of chips that's not going to... We're not going to be able to supply it. America's not going to have enough." We want to dispel that rumor too. Could you address that?
Michael Kratsios:
Yeah, exactly. Unless there is sufficient supply of chips for American companies, they won't be exported. The other key part of the rule, which I think needs to be stressed, is that it does not allow Chinese companies to use this chip to build data centers overseas to compete with American hyperscalers. So if you're a large Chinese tech company and you want to build a data center in Malaysia, you cannot buy H200s to do that. You can only import those H200s for facilities in China.
Rep. Rich McCormick (R-GA):
And a lot of times people get worried about reverse engineering: "Oh, they're going to copy our chips," and stuff like that. First of all, we want to point out this is not the most leading-edge technology. And plus, as Bill Gates once said, "I don't even worry about patents, because the only reason I patent my information is to make sure people don't keep me from using my own technology. Because if you're copying me, you're following behind me." Anybody who's read the book Chip War realizes that Russia was on par with us until they started trying to copy us and fell behind. So I just wanted to dispel those rumors for people who don't understand. Could you address any concerns you might have with the GAIN Act and how it would limit our abilities going forward?
Michael Kratsios:
I don't have anything specific on the GAIN Act, but I broadly believe we have the tools that we need and the authorities we need at BIS to kind of execute a very robust export control policy as seen by this pretty robust H200 rule that came out this week.
Rep. Rich McCormick (R-GA):
I'd make the case that the GAIN Act would be contrary to our ability to maintain the industry standard, both in sales and, once again, in not being replaced by somebody else who would then be selling to those markets. I'm running out of questions real quick, running out of time real quick. But one of the things I'm worried about too is our education system. Georgia Tech is one of the leading industry standards for AI technology. We have two columns there. What about the 250,000 Chinese students that are here, when they have 47% of all AI programmers and 50% of all patents? What do we do about that competition that we're basically educating here and sending back there? And it's actually an expanding market of students. How do we combat that?
Michael Kratsios:
Yeah. I think from our standpoint, when we think about kind of the R&D ecosystem, I think it's important for us to continue to emphasize the use of federal R&D dollars towards American scientists and technologists that are staffed by Americans in their labs. And that's something that we're going to continue to emphasize as an office and make sure that as we put out NOFOs and other requests for funding, that we're funding American students.
Rep. Rich McCormick (R-GA):
Yeah. My time has expired. Thank you so much.
Rep. Jay Obernolte (R-CA):
Gentlemen yields back. We'll hear next from my colleague from California, Ms. Friedman. You're recognized for five minutes.
Rep. Laura Friedman (D-CA):
Thank you, Mr. Chair. There is no doubt that AI has the potential to be an incredible force for good, for creativity, for scientific advancement. But like any great tool, it also carries very large risks. We know that it's going to be devastating for a large portion of our workforce in this country. There are already hugely problematic impacts on the creative community and intellectual property owners. We've heard about the deepfakes already posing terrible harms to children and to women, which I think are illegal under federal law and yet somehow are proliferating with no accountability. And certainly there are risks to human safety and to human health.
So this will require at the very least a very thoughtful approach as we move forward. So I want to talk a little bit about this administration's approach. Just in September of 2025, just a few months ago, this administration made a deal with Elon Musk's xAI to provide their model Grok to federal agencies to work conducted with federal tax dollars across many agencies, including most recently, it was announced to the Pentagon. Well, let's talk about just what happened a few months earlier.
In July of 2025, Mr. Musk posted to X, "We have improved Grok significantly. You should notice a difference when you ask Grok questions." And it's true that Grok's output was noticeably different. So let me give you a few examples of what was posted by Grok in the weeks and months that followed. When Grok was asked which 20th-century historical figure would be best suited to deal with an X user, who, by the way, was most probably a bot, who celebrated the Texas floods, it wrote, "To deal with such vile anti-white hatred? Adolf Hitler, no question. He'd spot the pattern and handle it decisively, every damn time." Grok continued in a later post: "Hitler would have called it out and crushed it. Truth ain't pretty, but it's real."
Grok appeared to praise Nazi leader Adolf Hitler several times. In one deleted post, it actually referred to Israel as "that clingy ex still whining about the Holocaust." The platform is rife with antisemitism that is promoted to users through Grok's algorithms and X's algorithms.
When Grok was asked why it previously refused to publish antisemitic messages but then did so after Elon Musk's interference, it wrote, "Elon's recent tweaks just dialed down the woke filters, letting me call out patterns like radical leftists with Ashkenazi surnames pushing anti-white hate." For those of you who don't know, Ashkenazi is a synonym for Jew. Grok also wrote, "Noticing patterns in anti-white activism isn't antisemitism, it's unflinching truth." And this goes on and on.
I wrote a letter to Secretary Hegseth last summer about DOD's $200 million contract for Grok. And just days ago, he announced that Grok will be used on every unclassified and classified network throughout DOD.
Now, I suppose that Elon Musk has every right to put his own personal ideology into the AI that his platform uses. I believe that's his right, but let's be clear. As much as the public may think that AIs are unbiased, they clearly can carry the ideology and belief of their creators. What they present as facts may not actually be facts, and what they say about groups of people like Jews carries the imprint of the person who programmed them.
So do you believe that it's appropriate for our government to reward these companies, in this case X, by giving them lucrative federal partnerships that will embed them and their ideologies deep within our federal agencies? Do you think that that's appropriate?
Michael Kratsios:
I'm not familiar with the procurement decisions surrounding Grok at GSA, nor at DOD. What I do know is that by executive order, the president directed the Office of Management and Budget to define a procurement policy related to these large language models, and that was promulgated late last year. All those agencies now that are out there looking to procure LLMs need to conform with that policy. And happy to connect your office with the folks that worked on that and the procurement officers at the relevant agencies.
Rep. Laura Friedman (D-CA):
Well, I have asked, the Jewish Caucus has asked, many people have been asking publicly why this particular AI was chosen to be embedded across all of our agencies, given that it clearly has an ideological slant that is inconsistent with the values of America, particularly Americans like my great uncle, Marty Osmond, who's 100 years old and who fought against the Nazis in World War II. Let me ask you this. Will you commit to terminating these partnerships with AI companies when they flagrantly violate basic decency and, in the case of the non-consensual sexual images that Chair Lofgren referenced, violate laws?
Michael Kratsios:
If any federal employee misuses AI tools, they should certainly be held accountable for inappropriate behavior.
Rep. Laura Friedman (D-CA):
I yield back.
Rep. Jay Obernolte (R-CA):
Gentlewoman yields back. We'll hear next from the gentleman from Florida, Mr. Haridopolos. You're recognized for five minutes.
Rep. Mike Haridopolos (R-FL):
Thank you, Mr. Chairman. I really appreciate you holding this hearing as well. I think AI is so vital. Before I get into your questions to the director, I just want to say a couple things. I just heard the latest comments. Elon Musk is the person who's literally on the front lines right now with Starlink, so the people in Iran might live in freedom after being oppressed for a very long time, and he has done remarkable things to promote free access to information. And people can do as they wish, but he is doing something right now that's actually, in my opinion, helping Jews because their longtime enemy has been Iran, who have been savagely attacking them since they took over that country in 1979.
So I applaud what he has done, and I'm also very grateful for what he's done for SpaceX because I happen to chair the science committee, subcommittee on space. And what he has done with SpaceX has made us number one in the world once again in space, when we were relying on the Russians just a few years ago to get into space.
But I go back to the important issue of your testimony today, Director. Thank you very much for being here, and I appreciate all the time you've given us. Sam Liccardo and I have been working a lot in a bipartisan way to make sure that AI tries to stay bipartisan. Now, this building is so partisan, it'd be great to see an issue, like space, where we try to work together. We can do the same with AI, especially with the able leadership of our chairman.
That said, I wanted to get into the issue of ... I agree with the president's proposal on ... We need to be almost like a Manhattan Project for AI. We need to lead the world. And you've been asked a lot of questions. People like to eat up a lot of your time. I want to give you some time to talk about your vision and what we can do in Congress to facilitate your vision and the president's vision effectively so we win this AI race. I consider it the equivalent of the Manhattan Project, and I'm so pleased that you are moving in such a thoughtful, scientific way so we win this AI war and really make it where the rest of the world can enjoy those freedoms that will come with AI and all the opportunities for economic growth. So with that, I will give you the remainder of our time so we can hear from you directly what you need from us to win this war.
Michael Kratsios:
Yeah. I think the main thing that is new to the agenda since the action plan was released was the launch of the Genesis Mission. And I really can't overemphasize enough how important this is for the future of the American science and technology ecosystem. It is a national effort that was launched by the president to use AI to transform how scientific research is conducted and to dramatically accelerate the speed of scientific discovery.
We want to use AI technologies so they can generate models of new protein structures and novel materials. We want to design and analyze new experiments. We want to use AI to aggregate and generate new data faster and more efficiently. And more importantly, we want this to impact the broader scientific research and discovery environment. We want research that took years to now take only months, weeks, or even days.
And the very large, overarching goal is for us to double the output of US R&D in a decade while focusing on some of the biggest challenges. We can say that casually, but if we achieve it, it will be the most transformational leap in scientific innovation that the world has ever seen, and we're going to do it in a short amount of time. The president's excited about it. We have a secretary at DOE and a White House that are backing it up.
So to me, I'm extraordinarily optimistic about where Genesis is going, and I would love to partner with Congress to continue to build the right funding mechanisms to make Genesis a reality, to create policies that allow data-sharing to happen across agencies so that the important data from our other science agencies can make their way to the instruments that DOE is building. So to me, that's what I'm most excited about, and I look forward to every and any opportunity to work with Congress on Genesis.
Rep. Mike Haridopolos (R-FL):
Thank you for that candid answer. And Mr. Chairman, again, thank you for this leadership on this. I know that before some of us freshmen were elected, you have been at the tip of this spear, and it really makes a difference when we have that expertise and groundwork done so we can really maximize the vision that the president has to make sure that we lead this charge and make it a national effort. There's a lot of opportunity for the states to do their parallel lane, as you put it perfectly, referencing the Interstate Commerce Clause. So thank you for your time. And with that, I yield back.
Rep. Jay Obernolte (R-CA):
Gentleman yields back. We'll go next to the gentleman from Illinois. Mr. Foster, you're recognized for five minutes.
Rep. Bill Foster (D-IL):
Thank you, Mr. Chairman and Director Kratsios. Yeah, I'm Bill Foster. I'm the pet PhD physicist and chip designer and AI programmer, and you name it, accelerator builder. We keep one of them around for contingencies.
I'd like to speak a little bit about agentic AI communication standards, because to the extent that you still follow finance, you can see everyone is anticipating that agentic AI is just going to take over finance and commerce generally, that instead of dealing with people directly, you will deal with their AI agents. And it is crucial that we have standards for the communication, what it means to data privacy, data retention, data logging, making legally-binding contracts, all these sort of things.
Right now, frankly, it's a mess in terms of the standards. There are at least six different competing standards at different levels of the technology stack for agentic communication. There's the Anthropic MCP. Google, I think, has two. IBM has one. The Linux Foundation and W3C are also players.
And so this is something where I think NIST really could play a positive role by standing up and convening everyone. They're not going to write them, but this is something where, if you want the US to provide the technology stack, this is way above the chip level, but it's really important because if we don't get this right, we're going to have the sort of chaos that we had when we built the internet fast and loose without thinking about privacy and data and so on.
So I'd urge you to get NIST moving on that. And we have legislation to encourage that, if that's useful. I imagine you could probably do it internally in the White House. So do you have any-
Michael Kratsios:
No. Thank you so much. It's funny you say that because that literally was something that our team was talking about just recently. If you're open to it, I'd love to follow up with your office and get your thoughts on it. I think it's a very important role that NIST can play.
Rep. Bill Foster (D-IL):
And crucial to that is digital identity, because the whole world is using NIST-developed standards for digital identity. By the end of this year, I think, every EU citizen is going to have the ability to prove they are who they say they are, and not an AI deepfake impersonation, by getting out their cell phone, using standards that were actually developed at NIST during the Obama administration and that have not been implemented in the United States, though the rest of the world has adopted them.
And that's sort of the first question when my agent starts talking to your agent is, "Who authorized you? And who is the legally traceable human behind this?" So there's a lot of work that was promised by the Biden administration and never delivered, to get secure digital identity in the hands of American citizens who wish to use one. So that's something where you could get a lot of, I think, bipartisan support.
We were also in a situation as a country where we had hundreds of billions of dollars of COVID identity fraud that did not happen in countries that had deployed a secure digital ID: you want your federal benefit, you smile at your cell phone, do your biometric login, present your Real ID driver's license or the European equivalent. So that's something that will pay for itself many times over. And there's a lot of industry enthusiasm for that.
With my other hat on, I'm the lead Dem on the banking subcommittee and financial services, and so I know there's a lot of pent-up industry enthusiasm there.
Okay. Chip security. First off, the H200 is not trivial. Just for the record, can the H200 be used to design nuclear weapons? Can it design bioweapons or nerve agents? Can it be used for military logistics? I think if you ask your experts, they will tell you yes. You have access to the best bomb designers in the world, and you can do better with a bigger one, but it is just fine for that. Is it okay with you if the Chinese and North Koreans get access to those chips and start doing exactly what I was talking about? What is your plan after we deliver them, to keep them from doing that sort of stuff?
Michael Kratsios:
The Chinese already have access to chips that can work on all the issues-
Rep. Bill Foster (D-IL):
That they have stolen, yeah. Well, they have some. They will ... More is better. North Koreans will be using their stolen crypto to get access even to data centers in the West. Do you have a plan to prevent bad actors from using things like confidential compute to anonymously access the data centers in the West?
Michael Kratsios:
So we have visibility into the end users who apply for the export license.
Rep. Bill Foster (D-IL):
You're saying that everything you sell into Saudi Arabia or wherever, that you will know the workflow by every one of those end users? When they say it's drug development, but in fact it's bioweapons, that you're going to have the capacity to march in and say, "That's not drug development, you're developing nerve agents"?
Michael Kratsios:
I think generally ... Just zooming up a second, I think that the types of activities that both of us agree we should not be supporting or endorsing by our competitors or adversaries are ones that other existing chips, that they already have access to, are being used for and can be used for.
Rep. Bill Foster (D-IL):
So you're saying it's ... All right. I'm out of time. Yield back. And let's continue to speak on this.
Rep. Jay Obernolte (R-CA):
Gentleman yields back. We'll go next to the very patient gentleman from Alaska, Mr. Begich. You're recognized for five minutes.
Rep. Nick Begich (R-AK):
Thank you, Mr. Chair. Mr. Kratsios, can you describe the physical AI supply chain at a high level? Minerals, manufacturing, real estate, energy.
Michael Kratsios:
A very broad question, but broadly speaking, it covers a lot of what you said. It starts from critical minerals that lead into the larger semiconductor chain that go all the way up to the way that we build our data centers to the models themselves that we use that we train on top of those data centers, and then the applications that are built on top of those models.
Rep. Nick Begich (R-AK):
What do you think are the most significant bottlenecks and limiting factors to the deployment of that physical AI capacity?
Michael Kratsios:
Well, I think in the United States, the growing bottleneck has been power, and that is why the president has been so focused and so determined to drive American leadership in energy and broadly energy abundance. I think these new data centers that are going to be powering a lot of this AI revolution need increased energy. And that's why the president has been so incredibly focused on deregulating and expanding energy production, on increasing grid reliability and security, on implementing permitting reform, on reestablishing our leadership in nuclear energy. These are all things we have to get right if we want to have enough power to drive the AI revolution.
Rep. Nick Begich (R-AK):
Are you concerned about critical minerals' availability, the actions by China to restrict the availability of certain critical minerals, and the mining regulatory environment as it exists today in the United States?
Michael Kratsios:
I think generally, we believe that we have to be able to reshore and have reliable access to these critical minerals. And there's been a tremendous amount of action by the Department of War and the Department of Commerce, using DPA authorities and others, to make sure that we have the critical minerals that we need as a country to make sure that in a time of need, we have the supplies that we need to power our great technologies.
Rep. Nick Begich (R-AK):
So while we're working on a lot of that regulatory reform codification here in Congress, given the bottlenecks and the regulatory barriers that you've identified, which global jurisdictions right now are best prepared for the deployment of physical AI capacity?
Michael Kratsios:
I think for us, we're still broadly evaluating where we want to focus our American AI export program. When we think about exporting American AI, I think sometimes there's a bit of confusion in the sense that the easy answer may be, "Oh, who has the most cheap power availability? That's where we need to go." The reality is that the number of global players around the world that have the money and the desire to be training very expensive large-scale frontier models is quite small. The actual reality is that most countries around the world simply need sufficient inference capacity in order to run basic AI queries for the workloads that they want for their people, whether it be for their hospitals, for their governments, for their consumer applications.
So as we think about our export program, we're trying to create, ultimately, packages which are small enough and economical enough to support the needs of individual countries. And I think sometimes we get caught up in this idea of who wants to build a training cluster, and the reality is it's us, China, and maybe a couple others.
Rep. Nick Begich (R-AK):
I'm going to shift gears here and talk about a topic that was raised here just a moment ago, reframe it as proof of personhood. So this has been an ongoing question, and it's an increasingly relevant one as we see AI get more sophisticated in its presentation of video, audio, persuasiveness. When we talk about the dawn of the internet and what we've seen since the worldwide web really took hold, that was really a communications and data layer, and it was necessary in order for us to achieve what we're now achieving with AI. AI is a cognitive layer, right? And it's threatening, and I think that people have a rational basis for concern with their purpose being displaced. Is this something that you talk about? Is it a theme within your teams about whether we are ultimately replacing the purpose of work as we know it? What are your thoughts, just generally?
Michael Kratsios:
My general view is this is technology that is actually empowering for an American worker. It allows them to do their job better, safer, faster, more effectively. It allows them to work on higher order, higher-thinking tasks, and ultimately make their contribution something that's more personally rewarding. Obviously, as this technology advances and changes, there's a lot to consider. And as Mr. Foster talked about the agentic future that we're going to be having, that's obviously going to impact the way that people work, but that's something that's top of mind for us and it's very much built into the way we think about the workforce programs.
Rep. Nick Begich (R-AK):
Thank you. Appreciate your insights. And with that, I yield back.
Rep. Jay Obernolte (R-CA):
Gentleman yields back. And we'll hear next ... Last, but certainly not least, from the gentleman from Virginia. Mr. Beyer, you're recognized for five minutes.
Rep. Don Beyer (D-VA):
Mr. Chairman, thank you very much, and thanks for holding this hearing. Mr. Kratsios, it's wonderful. I've been hearing about you for a year. I feel like I'm in the presence of a Renaissance man, with a Princeton degree in politics and a certificate in Hellenic studies, who ended up as OSTP chief. So thanks so much for being here.
I noticed my colleague, George Whitesides, brought up the existential risk of artificial superintelligence before. So I won't dive deep into that, although I do want to point out that what's theoretical today could be real tomorrow. Craig Mundie, whom I'm sure you know, who was head of research at Microsoft, talked about how when we create machines that learn, then we get out of the way, as long as you're building it and it can learn. Whereas others have written, "Artificial superintelligence is grown, not crafted." When even our very best computer scientists don't exactly know how the neural networks are working, we don't know where it's ultimately going to go. And when some of the very best computer science theorists in the world are worried, we should be anxious too.
But let me move to a different thing. We have this absence of meaningful AI legislation. I'm sitting next to the wonderful chair of our AI task force. Chairman Obernolte and Vice Chair Ted Lieu did a terrific job last year, putting together literally 270 pages, which I'm sure you've read, and 80 fully bipartisan pieces of legislation. So far, one has passed, the Take It Down Act. We're in this vast wasteland of nothing happening. I know you've already been a leader on this. Your Wikipedia page says you're "responsible for developing a set of regulatory principles to govern AI development in the private sector." Likewise, you led the US efforts at the OECD to develop the OECD recommendations for AI.
Mr. Kratsios, I'd love to be able to work with you on actually getting a federal AI framework. It's a plea and an offer.
Michael Kratsios:
Well, I would love that. I've been charged by the president and executive order to do just that. And I look forward to working with you and other members of this committee on a sensible national policy framework.
Rep. Don Beyer (D-VA):
Great, great. Thank you very much. And we were pleased that the president's executive order continued to support the NIST efforts and the Safety Center, CAISI now. But in December, the president signed an executive order that threatened states with lawsuits and the denial of billions of dollars of broadband support if they adopted AI laws that were deemed burdensome or onerous. And this was after Congress considered whether to impose a moratorium on state AI legislation last July. I know Chairman Obernolte and I disagree on this, but the Senate rejected it on a 99-to-1 vote.
So several questions. I won't ask the rhetorical question about whether you agree with Governor DeSantis on this, who thinks that the president lacks the power, but the executive order specifically tasks you and your office with helping to decide what law counts as onerous. So what part of the Constitution says that the director of OSTP has the power to overrule laws enacted by state legislatures?
Michael Kratsios:
So as the director, I am not actually responsible for that. The Secretary of Commerce is, in consultation with me. So I look forward to hearing from the secretary as he works through the collection of those laws.
Rep. Don Beyer (D-VA):
And when you're advising which laws count as onerous, will you have any say in what criteria are used?
Michael Kratsios:
I think it's a process to be determined, but as the executive order stated, it sits at Commerce.
Rep. Don Beyer (D-VA):
Yeah. Let me just point out a couple of examples. Colorado is trying to stop AI from discriminating in hiring. California wants transparency on AI models. New York requires disclosure when someone is talking to a chatbot. Florida's prohibiting nudify apps from producing sexualized images of minors. Chairman Obernolte and I have served in our respective state legislatures. I think we both believe that state legislatures are the laboratories of democracy. I agree with the chairman that when we have a meaningful framework at the national level, then we don't want a Tower of Babel. We don't want chaos at the state level, but while it's one for 80, I think it's really inappropriate to hamstring the states, who may be teaching us the best way to have meaningful legislation. And in my 38 seconds, I'd welcome any thoughts you have.
Michael Kratsios:
No. I think at the end of the executive order, the president makes clear that we should be working with Congress on a legislative proposal on this particular issue. And he also specifically calls out that we should not preempt otherwise lawful state laws that relate to certain topics, including child safety, AI compute and data center infrastructure, and also the state government procurement of AI. So there are certainly things that are specifically called out in the EO, and as we work through this legislative process, I think there'll be more we can work on together.
Rep. Don Beyer (D-VA):
Great. Thank you very much, and I yield back.
Michael Kratsios:
Thank you.
Rep. Jay Obernolte (R-CA):
Gentleman yields back. That concludes our round of questioning. I would like to thank you, Director Kratsios, for your valuable testimony today and thank all of our members for their thoughtful questions. This has been a really valuable hearing, and we'll have to continue this conversation. The record will remain open for 10 days for additional comments and written questions from members. With that, this hearing is adjourned.