Transcript: Sam Altman Testifies At US Senate Hearing On AI Competitiveness
Cristiano Lima-Strong / May 9, 2025
May 8, 2025—OpenAI cofounder and CEO Sam Altman testifies at a United States Senate Committee on Commerce, Science, and Transportation hearing titled “Winning the AI Race: Strengthening US Capabilities in Computing and Innovation.”
OpenAI CEO Sam Altman, Microsoft President Brad Smith, AMD CEO Dr. Lisa Su, and CoreWeave CEO Michael Intrator urged US lawmakers to take a hands-off approach to artificial intelligence at a major Senate hearing on Thursday, mounting a more forceful defense against calls for regulation in the sector.
The Senate Commerce Committee session marked a notable shift in posture for some of the industry leaders, such as Altman, who just two years prior openly embraced calls for AI regulation at another high-profile hearing. The tech executives also pushed lawmakers to support more investments in AI infrastructure and skilling initiatives.
Many of those sentiments were echoed by top Republicans, including Senate Commerce Chairman Ted Cruz (R-Texas), who railed against European-style AI rules that he said if duplicated would set the US back in the global race against China to develop the technology.
It was the biggest congressional hearing on the topic since the White House was retaken by President Donald Trump, who has made it a priority to remove what he views as “barriers” to AI innovation, such as former President Joe Biden’s sweeping AI executive order.
Key moments included:
- Cruz, whose committee is poised to play a major role in any federal AI legislation, said he planned to introduce a bill to create “a regulatory sandbox for AI … that will remove barriers to AI adoption, prevent needless state overregulation, and allow the AI supply chain to rapidly grow here in the United States.”
- Altman repeatedly rejected specific calls for regulation. He said proposals requiring AI developers to vet their systems before rolling them out would be “disastrous” for the industry. Asked about more limited proposals to have the National Institute of Standards and Technology (NIST) set AI standards, Altman replied, “I don't think we need it. It can be helpful.” Altman later advocated for “sensible regulation that does not slow us down.”
- Industry leaders warned that the US could lose pace with China if it does not continue to invest in research, education, supply chains, and energy to support AI development. “The number one factor that will define whether the United States or China wins this race is whose technology is most broadly adopted in the rest of the world,” Smith said.
Below is a lightly edited transcript of the hearing, “Winning the AI Race: Strengthening US Capabilities in Computing and Innovation.” Please refer to the official audio when quoting.
Sen. Ted Cruz (R-TX):
Good morning. The Senate Committee on Commerce, Science, and Transportation is called to order. Welcome to our witnesses. Thank you for joining us this morning. In the last two years, AI has brought the United States and the world to a critical inflection point. AI may be a technology as transformative as the internet or even more so. It has unleashed a new global industrial revolution, with the potential to unlock opportunities that improve our quality of life, create jobs, and stimulate economic growth. The country that leads in AI will shape the 21st-century global order. As a matter of economic security, as a matter of national security, America has to beat China in the AI race. China has made AI central to its national strategy, and China aims to lead the world in AI by 2030, investing heavily in AI adoption across industries like manufacturing and defense. In this race, the United States is facing a fork in the road: do we go down the path that embraces our history of entrepreneurial freedom and technological innovation, or do we adopt the command-and-control policies of Europe? I would suggest that Congress draw on the lessons we can learn from the dawn of the internet. In the early 1990s, Washington embraced the internet and explicitly adopted a style of regulation that was intentionally and decisively light touch. Congress chose to deregulate under the Telecommunications Act of 1996 while President Clinton pursued tariff agreements and treaties that protected America's intellectual property and technological exports. Further, in 1998, Congress enacted a 10-year internet tax moratorium so that state laws wouldn't balkanize and stymie the promise of e-commerce. The results of these decisions were extraordinary.
By 2000, the United States had recorded five straight years of historic highs in productivity gains and investment growth. Hundreds of thousands of new jobs were created, and the United States became a top tech exporter with massive sums of private investment pouring into the US digital economy. By contrast, EU countries pursued a series of heavy-handed regulations that proved enormously costly. In 1993, the United States and Europe had economies virtually identical in size. Today the American economy is more than 50% larger than Europe's. The drivers of that are tech and the shale revolution. Those two comprise virtually the entirety of that massive growth over Europe. According to one EU Commission report, only 6% of global AI startup funding flows to EU firms. Six percent. That is one-tenth of the amount that is going to American companies. The report directly blames this yawning chasm on the EU's nasty regulatory approach, and yet the Biden administration, for inexplicable reasons, tried to align AI policy with the EU, to adopt their failed policies.
President Biden's sweeping AI executive order, the longest executive order in American history, cast AI as dangerous and opaque, laying the groundwork for audits, risk assessments, and regulatory certifications. Biden's approach inspired similar efforts in state legislatures across the country, threatening to burden startups, developers, and AI users with heavy compliance costs. Some of my colleagues suggest that a friendlier version of the Biden approach makes sense. They want a testing regime to guard against AI discrimination and have government agencies provide guidance documents, seemingly something out of Orwell, that will usher in what they call best practices, as if AI engineers lack the intelligence to responsibly build AI without the bureaucrats. Many in the industry foolishly have supported such paternalism. Harmful regulations take many forms. Biden's misguided midnight AI diffusion rule on chips and model weights would've crippled American tech companies' ability to sell AI to the world.
The Biden plan would've handed over key markets to China. We should want foreign countries, particularly our allies, to buy American. I vocally opposed this rule for months, and indeed the Ranking Member and I together urged the Biden administration not to adopt it, and I'm very pleased that President Trump has now confirmed he plans to rescind it. All of this busybody bureaucracy, whether Biden's industrial policy on chip exports or industry- and regulator-approved guidance documents, is a wolf in sheep's clothing. To lead in AI, the United States cannot allow regulation, even the supposedly benign kind, to choke innovation or adoption. American dominance in AI depends on two factors: innovation and adoption. Innovation drives breakthroughs and global competitiveness. Adoption ensures that these tools empower American workers and businesses, enabling the United States to become the world's leading adopter and exporter of AI. Thankfully, President Trump has largely reversed Biden's misguided AI agenda.
In fact, I think AI was a sleeper issue in this last election. Americans wanted to see President Trump and Republicans, and indeed all senators, champion AI policies focused on innovation and adoption. The contrast has been astounding. This year there have been over $1 trillion of new AI projects, including major investments in Texas like the CoreWeave data center in Plano and the $500 billion Project Stargate in Abilene by OpenAI, Oracle, and others. Adopting a light-touch regulatory style for AI will require Congress to work alongside the President, just as Congress did with President Clinton. We need to advance legislation that promotes long-term AI growth and innovation. That's why I will soon release a new bill that creates a regulatory sandbox for AI, modeled on the approach taken by Congress and President Clinton at the dawn of the internet, that will remove barriers to AI adoption, prevent needless state overregulation, and allow the AI supply chain to rapidly grow here in the United States. That's how we'll accelerate economic growth, secure US dominance in AI, and beat China. And with that, I turn to the Ranking Member, Senator Cantwell.
Sen. Maria Cantwell (D-WA):
Thank you, Mr. Chairman. Thank you for this hearing, and welcome to the witnesses before us: Mr. Altman, Dr. Su, Mr. Intrator, and Mr. Smith. It's a great pleasure to have all of you here, but it's an especially prideful moment for the Pacific Northwest to have Mr. Smith and Mr. Altman here, both representing an open AI approach. By that I mean an approach where we want to win against China and a closed system by making sure that what is developed here in the United States and around the globe is an architecture where the United States wins and is open. To do that, to win, we need to focus on computing power, on algorithms, and on robust data sources. All of that will be key. Personally, I believe a continued investment in NSF helps in all of those areas as a good public-private partnership with the industry that's represented here today.
I'm so proud that we passed the CHIPS and Science Act, because the CHIPS and Science Act also set a foundation for investing in the United States of America and bringing more of the supply chain back to the United States of America to build on the future leadership that we already have. I believe in the computing power, but we also need to understand that we have to move forward on the CHIPS Act, like the University of Washington's $10 million grant on multi-design sets for chips, the very large-scale integrated designs I'm sure that Dr. Su will tell us about today. The fact that the United States has to continue to lead on the future designs and the implementation of that also requires us to be very smart about data centers, about sources of electricity, and about how we're going to build that supply, which could be up to 12% of electricity demand in the very near future.
So how do we do that? I've noticed in each of your testimonies you all explain this, but I'm also very proud that Microsoft has already signed an agreement with one fusion energy company in Everett, Washington, for a power source supply. Maybe Mr. Altman, in his testimony, will talk about this, but they hope to get a very-near-future energy source from that. So clearly the United States is leading on electricity and development. So, Mr. Smith, I very much appreciate in your testimony the accentuation on the fact that the United States of America needs hundreds of thousands of new electricians, something we should all want to get behind. The fact of having electricity and the electricians and the data centers here in the United States and in other places will be key. While I want to see us move forward, as the Chairman said, we signed a letter saying we needed broader support for export controls.
I want to be clear, export controls are not a trade strategy. They are not a back-pocket issue that the President of the United States whips out in trade negotiations. We are going to move fast because we are going to set standards. I believe those standards should be encouraging very broad distribution of US-manufactured and -made AI chips and technology, and that we're asking our partners overseas to comply with the rules that we establish. Things like making sure that there is no circumvention of the supply that somehow gets into China's hands, making sure that we have access and making sure that we can verify that, and also making sure that US data companies and cloud-based companies are allowed to be in that market. We should not be going to markets overseas only to have them tell us that organizations with cloud services from the US would not be allowed.
This, I believe, would be a robust initiative on getting US AI chips and US open AI systems dominant around the globe. Why do we need to move fast? We need to move fast because if we don't, we are looking at another Huawei, another instance where the United States is behind and also saying we should tear out this system that now we don't like, for lots of reasons and backdoor policies. So I'm all for winning. That is why we passed the CHIPS and Science Act. I'm all for winning, and that is why we passed seven bills out of this committee last year that kind of got stuck in the lame duck. I think the Chairman of the committee wasn't ready to move forward in negotiations with the House and Senate on those seven bills, but those bills included a bill between myself and Senator Young on NIST standards, which I think we still need to do.
A bill with my colleague Senator Moran on education and scholarships and small business, and the bill by my colleagues here, Senators Klobuchar and Thune, which was also related to the NIST standards. So we had an opportunity a year ago to move fast. We didn't do it, so let's do this now. Let's get together and figure this out. The faster the United States moves now, the better. I like this great Paul Romer quote about how collaboration is the next phase of innovation. If we don't collaborate here, if we throw down on politics instead of getting the policy right, we won't move fast. Let's allow these people to do what they do best, and let's make sure the United States has the right policies in place so that our open AI standard wins today. Thank you, Mr. Chairman.
Sen. Ted Cruz (R-TX):
Thank you. I'd now like to introduce our witnesses for today. Each of our witnesses and their companies represent critical parts of the AI infrastructure, hardware, and software supply chain. Our first witness is Sam Altman, the co-founder and CEO of OpenAI. OpenAI is one of the world's most advanced AI companies, known best for its ChatGPT product. Our second witness is Lisa Su, the chair and CEO of Advanced Micro Devices. AMD develops high-performance processors, graphics chips, and AI accelerators that power artificial intelligence, and Dr. Su is also a Texan. Our third witness today is Michael Intrator, the CEO and co-founder of CoreWeave, an AI hyperscaler. CoreWeave is the world's largest purpose-built AI cloud platform. And our final witness is Brad Smith, the vice chair and president of Microsoft. I believe everyone is familiar with his company. Mr. Altman, you are recognized for your opening statement. If you could turn on the volume. Sorry about that, and I do enjoy telling techies how to operate the tech.
Sam Altman:
Pretty embarrassing that I couldn't figure that out. Anyway, thank you, Chairman. Thank you, Ranking Member Cantwell. Thank you, Senators and fellow panelists. It's a real honor to be here. I was here about two years ago, and at that time, ChatGPT had recently launched. It was a curiosity in the world. People weren't sure what it was going to mean, what it was going to be used for. Today, we've made significant progress. ChatGPT is used by more than 500 million people a week. I just saw yesterday that according to SimilarWeb, it's now the fifth biggest website on the internet globally, growing very quickly. But most of all, it's being used in really important ways. It's significantly increasing productivity. We hear scientists say they're two or three times more productive than they could be before. We hear people that are getting medical advice or learning in ways they couldn't before, and it's no longer this thing that was going to come in the future; it's here now and people are really using it.
We're very proud to be one of the leaders of this. We're very proud that America is leading in AI so significantly, and I think it's critical, as Senator Cruz said about the importance of innovation in America, that what happened with the internet we have happen again. I believe this will be at least as big as the internet, maybe bigger. That needs to happen, and for that to happen, investment in infrastructure is critical. I believe the next decade will be about abundant intelligence and abundant energy. Making sure that America leads in both of those, that we are able to usher in these dual revolutions that will change the world we live in, I think in incredibly positive ways, is critical. I got to go to Abilene, Texas, yesterday, where we're building out what will be the largest AI training facility in the world. It's coming along beautifully.
Super exciting to see. We need a lot more of that. There's a whole sort of AI factory, like a supply chain of energy, chips, standing up data centers, building the racks, and more. We've got to do that really well in the US so that we can continue to innovate, continue to lead, and continue to sort of shape this revolution. Speaking of that, I was very inspired by what Chairman Cruz said, so I'd like to deviate from script here and tell a story. In my prepared written testimony, I covered the basics, so if it's okay, I'd love to tell you a story.
I grew up in St. Louis, and I was a computer nerd, and it was the time of the internet boom, and I thought it was the coolest thing ever. We lived in this beautiful old brick house in this suburb of St. Louis, and I lived in the attic, and I had this computer, and I would stay up all night, and I would learn to program, and I got to kind of use the internet, and it was a crazy time of tons of innovation. All sorts of stuff was happening. It was amazing, and it was all happening here. All the internet companies were in the US. I used a Mac that was built here. I used chips that were started near where I now live, and I learned about computers. I thought it was the coolest thing ever, and I can draw a straight line from that experience to founding OpenAI and getting to work on companies like Helion: the spirit of American innovation and support of entrepreneurship.
I don't think the internet could have happened anywhere else, and if that didn't happen, I don't think the AI revolution would've happened here. I am a child of the internet revolution. I have the great honor to be one of the parents, one of the many parents, of the AI revolution, and I think it is no accident that that's happening in America again and again and again. But we need to make sure that we build our systems and that we set our policy in a way where that continues to happen. I think this is magic. I don't want to live in Europe either. I think America is just an incredible and special thing, and it will not only be the place where the AI revolution happens, but all the revolutions after. I was home visiting St. Louis recently, drove by our old house at night, and I looked up, and in that top-floor window the light was on, and I thought, hopefully there's some kid in there staying up late at night playing with ChatGPT, figuring out how he or she is going to start whatever company comes next. And whatever the next thing is after AI will happen here too.
That is to me the magic of this country. It's incredibly personally important, and I hope it keeps going. Thank you very much for having me.
Sen. Ted Cruz (R-TX):
Thank you. Dr. Su.
Dr. Lisa Su:
Chairman Cruz, Ranking Member Cantwell, members of the committee, it is a real honor to be here on such an important topic. I'm chair and CEO of AMD. We are a US-headquartered semiconductor company founded 56 years ago, and we build high-performance computing chips for the modern economy. Every day, billions of people rely on our products and services powered by our technologies, but our chips are also extremely important to support critical missions, including powering defense systems and secure communications as well as enabling breakthrough scientific research. I have to say our proudest moments, though, are when we see amazing public-private partnerships, and our work in supercomputing is an example of that. Through more than a decade of partnership with the Department of Energy, AMD now powers the two fastest supercomputers in the world, one that is housed at Oak Ridge National Labs that was put into place in 2021, and the other at Lawrence Livermore National Labs that was just recently put into commission late last year.
These systems are really critical from a national infrastructure standpoint and solve many, many large research issues as well as supporting national security and scientific leadership. Now, in terms of AI, there's so much that's been stated about AI. I really want to thank Chairman Cruz and Ranking Member Cantwell for having this hearing. I think it is a wonderful opportunity to talk about how we win. AI is truly the most transformative technology of our time. The United States leads today, but what I would like to say is it is a race. Leadership is absolutely not guaranteed. It's a global race that will shape the outcome of national security and economic prosperity for many decades to come. Now, maintaining our lead actually requires excellence at every layer of the stack, so I'm really honored to be here with my panelists as well. We have deep partnerships with Microsoft and OpenAI that demonstrate how you need silicon, you need software, you need systems, and really the application layer to be successful.
Now, in terms of what to do, I thought about what would be the most important things to say today, and I put them in five categories. I think the first and probably the foremost is we must continue to run faster. This is a race, and the race does not stand still. Nobody in the world stands still. We lead today because of the bold decisions that we've made and because of the innovation economy that we have, but we need to continue to run faster, and that means ensuring that we have computing available. I think Sam's story about Abilene is an excellent example of how, when you allow the computing infrastructure to expand at the rate and pace that the private sector wants, you actually make tremendous progress. Second, I would like to mention the importance of open ecosystems. I think open ecosystems are really a cornerstone of US leadership, and that allows, frankly, ideas to come from everywhere and every part of the innovation cycle.
It's reducing barriers to entry and strengthening security, as well as creating, frankly, a competitive marketplace for ideas. Third, we are very happy to see the focus on a robust domestic supply chain. For us in the semiconductor world, we used to not get so much attention. Now we get a lot of attention thanks to the importance of chips, and the fact is we need more manufacturing in the US. The efforts so far have made good progress, but there's a lot more that can be done, and that should be done in public-private partnership. Fourth, we must invest in talent. Frankly, the United States should be the best place to study AI, to work in AI, to really move forward all of the innovations that we need, and I think, again, this can also be done in significant public-private partnership. And then fifth, of course, is the area of export controls.
We totally understand as an industry the importance of national security, and that goes without saying as a US company, but we also want to ensure, as Chairman Cruz and Ranking Member Cantwell stated, it is important to have widespread adoption of US technologies. We lead today because we have the best technology. However, if we're not able to fully have our technology adopted in the rest of the world, there will be other technologies that will come to play. They may not be as good as we are today, but frankly, usage really spurs innovation, and this is something that we certainly need to work on in public-private partnership. And I would frankly end by saying, like Sam, I had a computer when I was growing up. I grew up in New York. I'm a little older than Sam, so my first computer was a Commodore 64, and then I graduated to the Apple II. But the fact is this is the best place to do computing innovation in the world. We want it to stay that way, with really a very rich and broad ecosystem. So thank you again for the opportunity to be here today.
Sen. Ted Cruz (R-TX):
And I had an Apple II as well, with a shoebox of floppy disks, and somehow I took a wrong turn and ended up in politics instead. Mr. Intrator.
Michael Intrator:
I started out with a VIC-20. Chairman Cruz, Ranking Member Cantwell, and distinguished members of the committee, thank you for the opportunity to testify today. I'm honored to appear alongside my industry colleagues and partners. My name is Michael Intrator. I'm the co-founder and CEO of CoreWeave. Founded seven years ago, CoreWeave started like many innovative ventures, humbly, in a garage, experimenting initially with graphics processing units, or GPUs, for cryptocurrency mining. Recognizing the transformational potential, we pivoted to support powerful AI applications, dramatically scaling the vision of the operation. Today, CoreWeave stands at the forefront of America's AI infrastructure revolution. Operating more than 30 data centers across 15 states, we manage more than 250,000 GPUs currently using 360 megawatts of power. Over two short years, our revenue has surged by 12,000%, reaching $1.9 billion in 2024. As a result of this progress, CoreWeave became a publicly traded company on March 28th, 2025.
CoreWeave's rapid growth is a testimony not only to the technology, but also to the surging global demand for advanced AI infrastructure. Our infrastructure enables American businesses to rapidly translate AI aspirations into impactful economic realities. By empowering companies to accelerate innovation, we are fueling America's competitive edge while improving productivity and prosperity. Modern AI requires specialized infrastructure, purpose-built computing capabilities that surpass traditional cloud computing in scale and performance. Today's general-purpose cloud was not built to support the scale and complexity of AI workloads, and we cannot run a 21st-century economy on the 20th century's infrastructure. AI workloads involve trillions of simultaneous calculations, demanding unprecedented computing power, advanced cooling systems, cutting-edge chip technology, ultra-high-speed networks, and accelerated storage. Since 2018, the computing power necessary for advanced AI models has multiplied approximately a hundred-thousand-fold. At CoreWeave, our facilities symbolize America's great tradition of innovation.
Our data centers, built, maintained, and staffed by skilled American workers, embody how modern technology not only stimulates economic growth and enhances national security, but also improves human lives. We are at a critical juncture in the global AI competition. The nation that leads AI infrastructure will set the global economic agenda and shape human outcomes for decades. Our largest competitor, China, recognizes the stakes and is spending significant resources to strengthen its position. I want to focus on four elements of policy that will help determine whether the US secures its leadership role in the AI race. First, strategic investment stability. AI infrastructure is deeply capital-intensive and requires a significant level of coordination across industry stakeholders. Stable, predictable policy frameworks, secure supply chains, and regulatory environments that foster innovation are crucial. Policymakers must provide clear and consistent policy and regulations across all jurisdictions that enable long-term investment and rapid scaling of AI technology.
Second, energy infrastructure development. To support the rapid deployment of AI infrastructure, America must ensure abundant and affordable supplies of energy. Careful reforms in permitting and regulatory process are necessary to accelerate infrastructure projects and facilitate more rapid construction, interconnections, and energy for data centers. Third is global market access. Maintaining America's leadership also means ensuring our technology has access to global markets. Export controls and trade agreements can be calibrated to both address national security risks and support global diffusion of American AI technology. And finally, public-private partnerships and workforce development. America's unique advantage in AI is enhanced by our powerful tradition of public-private partnership. CoreWeave is proud to co-found the New Jersey AI Hub with Microsoft, Princeton University, and the New Jersey Economic Development Authority. Initiatives like this develop critical workforce skills, foster innovation, and ensure economies and communities are prepared for the AI-driven future. America stands ready to lead the AI revolution, which will bring enormous benefits. It is a rare moment in time that we must meet. If government, industry, and all affected parties work together, the United States can win this race and seize the vast opportunity ahead of us. Thank you again for the opportunity to testify. I look forward to answering your questions.
Sen. Ted Cruz (R-TX):
Thank you, Mr. Smith.
Brad Smith:
Chairman Cruz, Ranking Member Cantwell, members of the Committee, thank you for the opportunity to be here today. Let me take from what my colleagues here said and offer a few thoughts. I want to refer to this chart here that shows the AI tech stack, which is also part of my official testimony. It makes a simple yet important point, which is that we are all in this together. If the United States wants to succeed in leading the world in AI, it needs infrastructure, it requires success at the platform level, and it requires people to build applications. Interestingly, we at Microsoft get to work with leaders from all three of these companies. Our success, each of our successes, depends on the others. And what is true of us is true of the whole country. So what do we need from the Congress and the country in order to succeed? I think it's three things. I described them in my written testimony. First, as Chairman Cruz said, we need innovation. Innovation will go faster with, as Sam said, more infrastructure, faster permitting, more electricians. We need more innovation fueled, as Ranking Member Cantwell said, by support from our universities and the federal agencies that support basic research across the country, one of this country's crown jewels. We also need, as Chairman Cruz said, faster adoption, what people refer to as AI diffusion: the ability to put AI to work across every part of the American economy, to boost productivity, to boost economic growth, to enable people to innovate in their work. And the number one ingredient for that, history shows time and time again, is skilling, investing in education. And finally, we need to export.
If America is going to lead the world, we need to connect with the world. We need to remember, I believe always that as a country, only four and a half percent of the world's people live in the United States of America. Our global leadership relies on our ability to serve the world with the right approach to export controls and always, especially in technology, in our ability to sustain the trust of the rest of the world. Ultimately, I think people who take the time, if they take the time to watch or read about this hearing, may wonder what is this all about? What are we at this table trying to do? What do these two letters AI really mean to them? Are we who are working in this industry trying to build machines that are better than people or are we trying to build machines that will help people become better?
Emphatically, it is and needs to be the latter. Are we trying to build machines that will outperform people in all the jobs that they do today, or are we trying to build machines that will help people pursue better jobs and even more interesting careers in the future? Indisputably, it needs to be the second, not the first. And I believe that is what we are and can do together. As somebody who's now spent almost 32 years in this industry, there are two things that always strike me. The first probably won't surprise you: never underestimate what technology can do, how quickly it can move, what it can accomplish. But the second is one that I think is too seldom discussed, even though every day it stares us in the face. Never underestimate what people can do. Never underestimate human ambition. Never underestimate what a person can do if given a better technology tool and the ability to learn how to put it to use.
That's the story of this industry. It's the story of the country. It is, as you heard, the story of Sam Altman. Not everybody becomes a Sam Altman or a Satya Nadella or a Bill Gates, but everybody deserves the opportunity to try. Tonight across America, whether it's the attic of a house or the basement or just an everyday bedroom, there are kids with computers, with phones, with access to the internet, and now the ability to put AI to work. Let's invest in their education. Let's invest in the skills that the American public needs. Let's then invest in creating the future that the American public deserves. Thank you.
Sen. Tim Sheehy (R-MT):
Well, thank you, witnesses, for your testimony. And I'll start off with the first round of questions and move down the dais to our Ranking Member here. Thank you for your testimony. It certainly makes me sleep better at night, worried about Terminator and Skynet coming after us, knowing that you guys are behind the wheel. But in five words or less, starting with you, Mr. Smith: what are the five words you need to see from our government to make sure we win this AI race?
Brad Smith:
More electricians. That's two words. Broader AI education.
Sen. Tim Sheehy (R-MT):
And no using ChatGPT as a friend.
Michael Intrator:
Thank you. I would say that we need to focus on streamlining the ability to build large things,
Dr. Lisa Su:
Policies to help us run faster in the innovation race.
Sam Altman:
Allow supply chain-sensitive policy.
Sen. Tim Sheehy (R-MT):
That was good. So what I hear there is something pretty similar to the races we've won before. Nuclear energy, for example: the Germans and Austrians really led the innovation around that, but we won the race because we put a massive government effort, collaborating with our universities and others, into winning that race. Space: the Soviets put the first satellite up, put the first man in space, but we won the space race because we adopted a framework to ensure that we won, aviation, automobiles, et cetera. So what I hear from you is you do need support from our government, but you also need the government to stay out of your way so you can innovate and win this race. How do we incentivize companies to do business here in America to make sure we win this race in America, and America leads not just China, but other non-state actors too? I think that's the scariest thing about AI from a capability standpoint: it doesn't have to be a state actor to win this race. It's not like nuclear energy, it's not like space technology. A non-state actor could just as easily win this race and wield more power than anyone else. So how do we encourage innovators' investment to happen here in America to ensure we win this race? Mr. Altman, you want to start?
Sam Altman:
We were honored to announce back in January Project Stargate, a $500 billion investment in United States infrastructure that is now well underway. As I mentioned, getting to see the first site yesterday in Abilene was incredible. We need a lot more of that. We need certainty on the ability to build out this entire supply chain: build the data centers, permit the electricity. We'd love to bring chip production here, network production here, server rack production here, and I think the world does want to invest; we have a lot of global investment flowing into the US to do this. We also want to make sure that other countries are able to build with our technology, use our models, and sort of be in our orbit, and use US diffusion of technology here. So that's really important. We need to make sure that the highest-skilled researchers that want to come work at US companies can come here and do that.
We need to make sure that companies like OpenAI and others have legal clarity on how we're going to operate. Of course there will be rules. Of course there need to be some guardrails. This is a very impactful technology, but we need to be able to be competitive globally. We need to be able to train, we need to be able to understand how we're going to offer services, and sort of where the rules of the road are going to be. So clarity there, and I think an approach like the internet, which did lead to a flourishing of this country in a very big way. We need that again.

Sen. Tim Sheehy (R-MT):
Dr. Su?
Dr. Lisa Su:
I would add, I think computing is a foundation to all of this. We want to have more compute built in the US by US companies, and to ensure that we have a great environment for that. We want to ensure that our technology around the world is also used broadly and in the right ways. So I think the conversation about export controls and rules should just be simple: easy to follow, easy to enforce, and enabling US AI platforms to be the foundation. And then certainly the comments around bringing manufacturing back home and ensuring that we have the right talent base are all extremely important elements of that.
Sen. Tim Sheehy (R-MT):
Are companies weighing doing business in AI in America versus China? Are the companies making that side-by-side comparison?
Dr. Lisa Su:
I think if you look across the world, there are countries and companies that will ask those questions. If it's hard to obtain US technology, although US technology is the best, if it's hard to obtain, then there's a hunger for AI and they will choose what is available. And if China is available, that will certainly be an outcome that we would not like to see.
Sen. Tim Sheehy (R-MT):
Well, I think I hear the words infrastructure, electricians, universities, regulatory framework, and I think those are things we can help with. I hear words like innovation and talent, and I say, I hear, Dr. Su, "run faster." Those aren't things government can do: we can't manufacture talent, we can't make you run faster, but we can give you the tools to do that. And I think it's time that we create a framework so that you have the tools you need to win this race, because you're going to be the ones that win it, not us. Thank you for your testimony, and I recognize Senator Cantwell.
Sen. Maria Cantwell (D-WA):
Thank you, Mr. Chairman. I'd like to continue that same theme, generally about competitiveness. Do we need NIST to set standards? If you could, just yes or no, and just go down the line.
Sam Altman:
I don't think we need it. It can be helpful.
Brad Smith:
Yes.
Michael Intrator:
Yes, yes.
Sen. Maria Cantwell (D-WA):
Okay. So in the context of what we're talking about here, we're really just talking, I don't know, Mr. Smith or Mr. Intrator or Dr. Su. The issue here is, if we want to move fast, we want to create, just like with electricity, the standards by which we want to move fast here. I would just call it code, for code is what we want. We want NIST to do something in the standard-setting that will allow us to move much faster. Is that right? Either Mr. Smith or Mr. Intrator.
Brad Smith:
What I would say is this, first of all, NIST is where standards go to be adopted, but it's not necessarily where they first go to be created. So we've
Sen. Maria Cantwell (D-WA):
Got it. Thank you for that clarity. We're talking about an industry, IEEE, lots of different organizations, industry input, and then they're adopted. So yes, let's clarify that. Let's clarify that.
Brad Smith:
I think that's the way it works.
Sen. Maria Cantwell (D-WA):
Yes, but you think we need to do that, particularly if the United States wants to lead?
Brad Smith:
We will need industry standards, we will need American adoption of standards, and you are right. We will need US efforts to really ensure that the world buys into these standards.
Sen. Maria Cantwell (D-WA):
Okay, Mr. Intrator.
Michael Intrator:
I think it's important that when you're working with standards, what that allows for is a common vocabulary which allows for acceleration. And so to the extent that we can step into that role and establish touch points where everyone can agree on specific things that will lead to an acceleration both domestically and abroad.
Sen. Maria Cantwell (D-WA):
I don't know if drilling down more on what you think those are, but in general, when I think about the internet and HTTP or HTML or any of the TCP/IP protocols, we're talking about things that allowed us to move faster, and getting those standards established helped us do that. On the export issue, Mr. Intrator, the issue of cloud sources shouldn't be left out. If we say, let's go with Malaysia, and Malaysia is going to tell us that they can certify that there's no diversion of these chips to China, and we basically have a way that we can make sure that this is understood and monitored, then we also want access, right? We want access by US companies.
Michael Intrator:
Yeah, I think Lisa's point was excellent, right? At the end of the day, the world wants to be able to build and deploy artificial intelligence in a very broad way. And nature abhors a vacuum: if we do not step into that role, other technology will step into that role. If it is suboptimal, so be it; it's better to have something that is suboptimal than to have nothing. And so that is
Sen. Maria Cantwell (D-WA):
Well, we don't want a recurrence of a Huawei that develops faster and then has a government backdoor, and then we all have to raise opposition. I'm for a tech NATO: I'm for the five most sophisticated democracies and tech nations setting the rules of the road and saying, this is who you should buy from; don't buy from anybody else who has a government backdoor. Not a good idea. So that's how we get leverage. I'm not so hot on the president's tariff agenda for this very reason, because we're not building the alliances, we're creating the enemies. And what I want to do is get the supply chain here, get the semiconductor flow here, lower the cost, and go as fast as we can.
Michael Intrator:
Yeah, I agree with that. I don't think anybody's not going to agree with that, right? I think that's an excellent objective. I just think that what will happen beyond the five tech-NATO countries is that there will be a demand for artificial intelligence, and they will proceed with what they can proceed with.
Sen. Maria Cantwell (D-WA):
Dr. Su, what is your view of this? About how we win, how we protect our objectives, but we're more aggressive on the export strategy?
Dr. Lisa Su:
Well, I think there is a clear recognition that we need an export strategy. And so having this conversation is very important. And from our perspective, the idea is to ensure that our allies, and frankly I use allies in the very broadest sense, get access to the great American technology that we have with the appropriate controls in place. And I think you can do both to your earlier comment Ranking Member Cantwell about the need to have US technologies in those countries. I think those countries are actually very interested in doing that because we do have the best technology today. And using that to really build this broad AI ecosystem is really our opportunity.
Sen. Maria Cantwell (D-WA):
I agree. Thank you so much.
Sen. Tim Sheehy (R-MT):
The senior senator from Ohio.
Sen. Bernie Moreno (R-OH):
Thank you, Chairman Sheehy. Make sure Senator Cruz heard that one. So first of all, thank you for being here and taking the time. If I could just real quickly confirm that I've heard what you said pretty unanimously, which is we need dramatically more power generation in this country. Is that correct? Alright. So Dr. Su, you just recently did a partnership with TSMC to manufacture your chips here in America. Thank you. I think it's a little bit long overdue; I wish we had done more of that earlier. Are those semiconductor fabs high-energy users?
Dr. Lisa Su:
Thank you, Senator. We are very pleased with our efforts together with the government on bringing more manufacturing back to the United States. To your question, certainly semiconductor manufacturing plants are high energy users and we do need more power for both manufacturing as well as for data centers as you mentioned.
Sen. Bernie Moreno (R-OH):
And without chips, this just doesn't work. If we don't have the highest-performance chips made here in the United States, this is not going to happen here. Correct?
Dr. Lisa Su:
We absolutely need the highest performing chips and we also need the entire ecosystem for chip manufacturing. So wafers are one piece, but there are many other pieces as well.
Sen. Bernie Moreno (R-OH):
And are those chips powered by solar power and windmills?
Dr. Lisa Su:
Today, they are not, but I think there are opportunities to certainly do that.
Sen. Bernie Moreno (R-OH):
So do you think it's outrageous that last year, because of the policies of the Biden administration, 90% of new power generation in this country was windmills and solar panels, and we absolutely kneecapped American energy? We have a thousand years of natural gas sitting in Pennsylvania, Ohio, and West Virginia, and yet 90% of power generation in this country last year was solar panels and windmills. Does that make this country more competitive or less competitive? Anybody that wants to answer can jump into that one.
Brad Smith:
Let me say two things. One, you're right, we need more electricity. Our industry, it's worth remembering, is only going to account for 15% of the total additional electricity the country is going to need. We are going to need electricity from a variety of sources. Today in the United States, 56% of our electricity comes from carbon; 44% comes from carbon-free energy, meaning nuclear, wind, or solar. We need a broad-based approach and we need a diversity of solutions.
Sen. Bernie Moreno (R-OH):
But again, 90% was energy that's not affordable, it's not abundant, and it's not reliable. Let me just shift gears. Mr. Altman, thank you, first of all, for creating your platform on an open basis and agreeing to stick to the principles of nonprofit status. I think that's very important. Do you think that the internet age did a good job, from the beginning of the nineties through the 2000s, of protecting children?
Sam Altman:
I would say not particularly.
Sen. Bernie Moreno (R-OH):
Yeah. And you're a new father, correct?
Sam Altman:
Yes.
Sen. Bernie Moreno (R-OH):
Congratulations.
Sam Altman:
Thank you very much. He's doing well. He is. It's the most amazing thing ever.
Sen. Bernie Moreno (R-OH):
Yeah. I don't think you want your child's best friend to be an AI bot.
Sam Altman:
I do not.
Sen. Bernie Moreno (R-OH):
So what can we do? How can we work together to protect children?
Sam Altman:
We've talked a lot about some of the things we're doing here. We're trying to learn the lessons of previous generations, and that's kind of the way it goes: people make mistakes and you do it better next time. One thing we say a lot internally is we want to treat our adult users like adults. We want to give them a lot of flexibility; we want to let them use the service with a lot of freedom. And for children there needs to be a much higher level of protection, which means the service won't do things that they might want. Now, we're still early, so sometimes people say, oh, you're being too strict on the rules, and it's just that we can't perfectly tell. But if we could draw a line, if we knew for sure when a user was a child or an adult, we would allow adults to be much more permissive and we'd have tighter rules for children.
Sen. Bernie Moreno (R-OH):
So I think what I would ask is if you could commit to having your teams work with our teams to make certain that we put together the right framework early on; I think that's the best way we can move forward. Because we don't want to overregulate, but we can't repeat the mistakes of the internet and social media era, where children got harmed.
Sam Altman:
We'd be delighted to work with you on that.
Sen. Bernie Moreno (R-OH):
Great. Thank you.
Sam Altman:
It's super important.
Sen. Bernie Moreno (R-OH):
Thank you.
Sam Altman:
Can I say one more thing about what you said? This idea of AI and social relationships. I think this is a new thing that we need to pay a lot of attention to. People are relying on AI more and more for life advice, sort of emotional support, that kind of thing. It's a newer thing in recent months and I don't think it's all bad, but I think we have to understand it and watch it very carefully.
Sen. Bernie Moreno (R-OH):
Alright, thank you. And thank you for that commitment. It's very much appreciated. I've talked to your team already. Good people.
Sam Altman:
Great.
Sen. Bernie Moreno (R-OH):
Mr. Intrator, quickly, can you talk about the intersection between the importance of a robust stablecoin ecosystem here in America and how that has a future with payments, and how AI will factor into that? Because I don't think people see how this fits into the broader puzzle.
Michael Intrator:
So, thank you for the question. And we did start out as a crypto-based company, a hobby that kind of got away from us a little bit. Look, I think that stablecoins, crypto, and AI share certain DNA in common, which is that they are attempts to build toward a future where new technology will make things better for society. And there is a huge potential for us to use stablecoins, crypto, and AI in combination for better outcomes.
Sen. Bernie Moreno (R-OH):
Alright, thank you. And that was the quickest coup since 1959.
Sen. Ted Cruz (R-TX):
Senator Klobuchar.
Sen. Amy Klobuchar (D-MN):
Thank you very much, Senator Cruz. A lot of exciting things with AI, especially from a state like mine that's home to the Mayo Clinic, with the potential to unleash scientific research now that we've mapped the human genome and we have rare diseases that can be solved. So there's a lot of positive. But we all know, as you've all expressed, there are challenges that we need to get at, with permitting reform, I'm a big believer in that, and energy development. Thank you, Mr. Smith, for mentioning this, with wind and solar, and the potential for more fusion and nuclear, but wind and solar, the price going down dramatically in the last few years. And to get there, we're going to have to do a lot better. I think David Brooks put it best when he said, "I've found it incredibly hard to write about AI because it is literally unknowable whether this technology is leading us to heaven or hell." We want it to lead us to heaven. And I think we do that by making sure we have some rules of the road in place so it doesn't get stymied or set backwards because of scams or because of use by people who want to do us harm. As mentioned by Senator Cantwell, Senator Thune and I have teamed up on legislation to set up basic guardrails for the riskiest non-defense applications of AI. Mr. Altman, do you agree that a risk-based approach to regulation is the best way to place necessary guardrails for AI without stifling innovation?
Sam Altman:
I do. That makes a lot of sense to me.
Sen. Amy Klobuchar (D-MN):
Okay, thanks. And did you figure that out in your attic?
Sam Altman:
No, that was a more recent discovery.
Sen. Amy Klobuchar (D-MN):
Thank you. Very good. Just want to make sure. Our bill directs, Mr. Smith, the Commerce Department to develop ways of educating consumers on how to safely use AI systems. Do you agree that consumers need to be more educated? This was one of your answers in your five words, so I assume you do.
Brad Smith:
Yes, and I think it's incumbent upon us as companies and across the business community to contribute to that education as well.
Sen. Amy Klobuchar (D-MN):
Okay, very good. Back to you, Mr. Altman. Americans, as we know, increasingly rely on AI for some high-impact problems, and for them to be able to trust it, we need to make sure that we can trust the model outputs. The New York Times reported earlier this week that AI hallucinations, a term newer to me, where models generate incorrect or misleading results, are getting worse. That's their words. What standards or metrics does OpenAI use to evaluate the quality of its training data and model outputs for correctness?
Sam Altman:
On the whole, AI hallucinations are getting much better. We have not solved the problem entirely yet, but we've made pretty remarkable progress over the last few years. When we first launched ChatGPT, it would hallucinate things all the time. This idea of robustness, being sure you can trust the information, we've made huge progress there. We cite sources. The models have gotten much smarter. A lot of people use these systems all the time, and we were worried that if it was not 100.0% accurate, which is still a challenge with these systems, it would cause a bunch of problems. But users are smart. People understand what these systems are good at, when to use them and when not to. And as that robustness increases, which it will continue to do, people will use them for more and more things. As an industry, we've made pretty remarkable progress in that direction over the last couple of years.
Sen. Amy Klobuchar (D-MN):
Well, I know we'll be watching that. Another challenge that we've seen, and Senator Cruz and I worked on a bill together for quite a while, and that's the Take It Down Act. And that is that we are increasingly seeing internet activity where kids looking for a boyfriend or girlfriend, maybe they put out a real picture of themselves, it ends up being distributed at their school, or somehow someone tries to scam them for financial gain, or it's AI, as we've increasingly seen, where it's not even someone's photos, but someone puts a fake body on there. And we've had over 20 suicides in one year of young people because they felt like their life was ruined, because they were going to be exposed in this way. So this bill we passed through the Senate and the House, the first lady supported it, and it's headed to the president's desk. Could you talk about how we can build models that can better detect harmful deepfakes, Mr. Smith?
Brad Smith:
Yeah, I mean, we're doing it, OpenAI is doing that, a number of us are. And I think the goal is first to identify content that is generated by AI, and then often it is to identify what kind of content is harmful. And I think we've made a lot of strides in our ability to do both of those things. There's a lot of work that's going on across the private sector, and in partnership with groups like NCMEC, to then collaboratively identify that kind of content so it can be taken down. We've been doing this in some ways for 25 years, since the internet, and we're going to need to do more of it.
Sen. Amy Klobuchar (D-MN):
And on this issue, the last question, Mr. Chair, since the last one was about your bill, I figured it's okay: the newspapers. You testified before the Senate Judiciary Committee, Mr. Smith, about the bill Senator Kennedy and I had. I still think that there's an issue here about negotiating content rates. We've seen some action recently in Canada and other places. Can you talk about those evolving dynamics with AI developers and what's happening here to make sure that content providers and journalists get paid for their work?
Brad Smith:
Yeah, it's a complicated topic, but I'll just say a couple of things. First, I think we should all want to see newspapers in some form flourish across the country, including, say, in rural counties that increasingly have become news deserts as newspapers have disappeared. Second, and it's been the issue that we discussed in the Judiciary Committee, there should be an opportunity for newspapers to get together and negotiate collectively. We've supported that; that will enable them to basically do better. Third, every time there's new technology, there is a new generation of the copyright debate that takes place. Now, some of it will probably be decided by Congress, some by the courts. A lot of it is also being addressed through collaborative action, and we should hope for all of these things to, I'll just say, strike a balance. We want people to make a living creating content, and we want AI to advance by having access to data.
Sen. Amy Klobuchar (D-MN):
Okay, thanks. I'll ask other questions on the record. Thank you, Mr. Chair.
Sen. Ted Cruz (R-TX):
Thank you. Senator Klobuchar asked whether AI will lead us to heaven or hell. It reminded me of a famous observation by Yale law professor Grant Gilmore: that in heaven there is no law, and the lion will lie down with the lamb; in hell, there is nothing but law, and due process is meticulously observed. Let me ask you this, and this is to each of the four witnesses: in the race for AI, who's winning, America or China? If the answer is America, how close is China to us, and what do we do to make sure the answer remains that America will win? Mr. Altman, we'll start with you.
Sam Altman:
It is our belief that the American models, including some models from our company, OpenAI, and from Google and others, are the best models in the world. It's very hard to say how far ahead we are, but I would say not a huge amount of time. And I think to continue that leadership position, and the influence that comes with that, and all of the incredible benefits of the world using American technology products and services, we need the things that my colleagues have spoken about here: the need to win on infrastructure, sensible regulation that does not slow us down, the sort of spirit of innovation and entrepreneurship that I think is a uniquely American thing in the world. None of this is rocket science. We just need to keep doing the things that have worked for so long and not make a silly mistake.
Sen. Ted Cruz (R-TX):
Dr. Su?
Dr. Lisa Su:
I'll answer in the realm of chips. I would say America is ahead in chips today. We have the best AI accelerators in the world. As for China, although they have restrictions on their ability to use advanced technologies, the one thing that's very important for us all to remember is there are multiple ways to do things. Having the best chips is great, but even if you don't have the best chips, you can get a lot done. So in this conversation about how far behind China is: they are certainly catching up, because there are many ways to do things. As to what we can do, I will continue to say: really ensure that our spirit of innovation is allowed to work, and that means having very supportive government policies, having very consistent policies, and allowing us to do what we do best, which is innovate at every layer of the stack.
Sen. Ted Cruz (R-TX):
Mr. Intrator?
Michael Intrator:
So I'll speak to it from the physical infrastructure and the software stack to deliver it. America is ahead, but it is the Achilles heel, from the perspective of the ability, as I said when I started, to build. Sorry about that. The ability to build very large solutions, the computing infrastructure component of this, is an area that we're going to struggle with, from permitting and building large projects to being able to deliver the power, to allow those building artificial intelligence to continue to move as fast as they can in the race that we're in.
Sen. Ted Cruz (R-TX):
Mr. Smith?
Brad Smith:
I think the United States has a lead today in what is a close race, and a race that will likely remain close. The number one factor that will define whether the United States or China wins this race is whose technology is most broadly adopted in the rest of the world. This is a global market, and it will be defined, as technology markets typically are, by network effects. 18% of the people in the world live in China, 4% live in the United States, 78% live somewhere else. The lesson from Huawei and 5G is whoever gets there first will be difficult to supplant. We need to export with the right kinds of controls. We need to win the trust of the rest of the world. We need to have the financial architecture that gets not only to the countries that are industrialized, but to the nations, say, across Africa, where typically China and Huawei have done so well.
Sen. Ted Cruz (R-TX):
So some of my colleagues have made reference to standards as something that is desirable. And I will say standards is often a code word for regulations. And indeed, the EU's stifling standards concerning the internet are what killed tech in Europe. We are seeing now state legislatures mimicking the EU, such as with California's SB 1047, which thankfully was overwhelmingly defeated, but would've created essentially a California DMV for AI model registration. How harmful would it be to winning the race for AI if America goes down the road of the EU and creates a heavy-handed, prior-approval government regulatory process for AI?
Sam Altman:
I think that would be disastrous. To give a more specific answer to your previous question, which I think touches on why it would be so bad: there are three key inputs to these AI systems. There's compute, all the infrastructure we're talking about; there's algorithms, which we all do research on; and there's data. If you don't have any one of those, you cannot succeed in making the best models. And as Brad said, the way for America to influence the world here is to have the technology that people most want to use and most adopt. The world uses iPhones and Google and Microsoft products, and that's wonderful. That's how we have our influence. We don't want that to stop happening. So systems that stop us in any of these areas, if we have rules about what data we can train on that are not competitive with the rest of the world, then things can fall apart.
If we are not able to build the infrastructure, and particularly if we're not able to manufacture the chips in this country, things can fall apart. If we can't build the products that people want, the products that naturally win in the market, and I think people do want to use American products; we can make them the best, but if we're prevented from doing that, people will use a better product made by somebody else that is not stymied in the same way. So I am nervous about standards being set too early. I'm totally fine with the position some of my colleagues took, that the industry figures out what the standards should be; it's fine for them to be adopted by a government body and sort of made more official. But I believe the industry is moving quickly toward figuring out the right protocols and standards here, and we need the space to innovate and to move quickly.
Sen. Ted Cruz (R-TX):
So if each of you could briefly answer that question; my time's expired, so I want to be respectful of that.
Dr. Lisa Su:
I agree with the comments that Sam made.
Brad Smith:
I agree. And I would just say, I think the point you're making is we have to be very careful not to have these pre-approval requirements, including at state levels, because that would really slow innovation in the country.

Sen. Ted Cruz (R-TX):
Great.
Michael Intrator:
I think that a patchwork of regulatory overlays will cause friction in the ability to build and extend what we're doing.
Sen. Ted Cruz (R-TX):
Thank you, Senator Schatz. Senator, apologies.
Sen. Brian Schatz (D-HI):
No problem, Chairman. Thank you for being here. I just want to follow up on the Chairman's question and on what may be an emerging consensus on the committee. Okay. I don't think there's anybody, even on this side of the dais, that's proposing a sort of EU, European-style pre-approval. I think there are some people who would like to do nothing at all in the regulatory space, but I think most people understand that some guardrails, those are the words that you used, Mr. Altman, rules and guardrails, are necessary. Are you saying that self-regulation is sufficient at the current moment?
Sam Altman:
No, I think some policy is good. I think it is easy for it to go too far, and as I've learned more about how the world works, I'm more afraid that it could go too far and have really bad consequences. But people want to use products that are generally safe. When you get on an airplane, you kind of don't think about doing the safety testing yourself. You're like, well, maybe this is a bad time to use the airplane example, but you kind of want to just trust that you can get on it.
Sen. Brian Schatz (D-HI):
It's an excellent time to use the airplane example, but I think your point is exactly right: look, there is a race, but we need to understand what we're racing for, and it also has to do with American values. It's not just a sort of commercial race so we can edge out our near-peer competitor, both in the public sector and the private sector. We're trying to win a race so that American values prevail internationally. Mr. Smith, I want to move on to another topic. It seems to me, on the consumer side, that one of the most basic rights of a user on the internet is to understand what they're looking at or listening to, and whether or not it was created solely by a person, by a person using AI, or automatically generated using AI. Do you think a labeling regime, not a prohibition on the use of AI, but just disclosure, especially as it relates to images, music, creativity, do you think a label would be helpful for consumers generally?
Brad Smith:
Yes, and I think that's what we in the industry have been working to create. I think you're right to make the distinction and focus especially on, say, images, video, audio files. There's a standard called C2PA that we and a number of companies have been advancing. It has content credentials. It enables people to know where something was created, who created it. And I think you're right that people should know whether it was created by a person, by AI, or by a person with the help of, say, AI.
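The content credentials Smith describes work by binding provenance claims (who made a file, and whether AI was involved) to the file itself so that tampering is detectable. The following is a toy Python sketch of that underlying idea only, hash-binding plus a signature; it is not the actual C2PA format, which uses certificate-based signatures inside an embedded manifest, and the key and field names here are purely illustrative.

```python
import hashlib
import hmac
import json

# Illustrative stand-in for a signer's private key; real C2PA uses
# X.509 certificate chains, not a shared-secret HMAC.
SIGNING_KEY = b"demo-key"

def attach_credentials(media_bytes: bytes, claims: dict) -> dict:
    """Bind provenance claims to a media file's hash and sign the result."""
    manifest = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "claims": claims,  # e.g. creator, tool, whether AI was involved
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_credentials(media_bytes: bytes, manifest: dict) -> bool:
    """Check the signature, then check the media was not altered after signing."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(manifest.get("signature", ""), expected):
        return False  # claims were forged or edited
    return unsigned["content_sha256"] == hashlib.sha256(media_bytes).hexdigest()

image = b"\x89PNG...fake image bytes"
m = attach_credentials(image, {"creator": "Jane Doe", "generator": "AI-assisted"})
print(verify_credentials(image, m))         # True: intact and signed
print(verify_credentials(image + b"x", m))  # False: media altered after signing
```

The design point this illustrates is the one Smith makes: the credential travels with the content, so any consumer, not just the original platform, can check where a file came from and how it was made.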
Brian Schatz (D-HI):
I just want to use sort of a common language, not the language that all of you use or that we've all learned to use, when you talk about the data as one of the three elements that makes a model work. Data really is intellectual property. It is human innovation, human creativity. And I do think we may have a disagreement, and I agree with Senator Klobuchar about the need to understand that these models have been trained on data, but what we're really talking about is human achievement all the way up to now. And I have a deep worry. Look, I'm actually an optimist in the energy space and the public service space, and certainly in health innovation; there are a lot of really exciting opportunities here. But we've got to pay people for their knowledge, and I am concerned that these models are going to be so successful in spitting out what appears to be knowledge that on the back end we're not going to pay people for all of the inputs, and we will have a sort of stalling out of these models. And you talked about a tension, but I'm trying to figure out what the tension really is, other than that you'd like to pay as little as possible for these inputs. Go ahead, Mr. Smith.
Brad Smith:
Well you had me until the last sentence.
Brian Schatz (D-HI):
I know.
Brad Smith:
Hey look, we create intellectual property, we respect intellectual property. So we're emphatically of the view that intellectual property and the creation of it should be rewarded. Ultimately, intellectual property laws are always about drawing the line. It's really the line that you referred to in copyright. There is expression that is protected. If you write a book and somebody copies it, then you are entitled to be paid. But there are ideas. If someone reads your book, if someone remembers that Shakespeare wrote a story about two teenagers who fell in love, sure, then that's fair use. That's why this country was created.
Brian Schatz (D-HI):
That's what we need to focus on. With your permission Chairman, I want to ask one final question.
Sen. Ted Cruz (R-TX):
Proceed.
Brian Schatz (D-HI):
Thank you. I am actually quite excited about the prospect that in 20 years people are going to say, remember when you had to wait on the phone to talk to Kaiser Permanente or the VA? So maybe Mr. Altman and Mr. Smith, I want you to, as a buddy of mine used to say, paint a picture and paint me in it, for the government actually delivering services. I want you to describe what an AI agent, or AI, can do to reduce those pain points that we accept as a fact of life in interacting with the government. It seems to me so much of what makes us irritated with the government is the lack of sorting of data that exists somewhere, but we can't get access to it. So just very quickly, you have 15 seconds each for some cheerleading.
Sam Altman:
I can imagine a future where the US government offers an AI-powered service that makes it really easy to use all government services, to get great healthcare, to get great education. You have this thing in your pocket, and if you have any medical problem, you get an answer. If you need to appeal something on some process you're having with the government, or file your taxes, or whatever, you just do it instantly. You have an agent in your pocket, fully integrated with the United States government, and life is easy.
Brad Smith:
Something like that. Remember when you had to stand in line to renew your driver's license? Remember when you didn't know how to report a pothole that needed to be repaired on your street? Remember when you had a fender bender in a car and you had to fill out all these forms and talk to all these people to get insurance coverage? Now you can do it all with one AI system. You can use your phone. And by the way, you can do this today in Abu Dhabi. We need to bring it to America. Thank you.
Sen. Ted Cruz (R-TX):
Senator Budd.
Sen. Ted Budd (R-NC):
Thank you, Chairman. Again, thank you all for being here. I've enjoyed various conversations with each of you. The ability for the US to deploy new energy generation capacity and upgrade its grid is in so many ways the key to the race against China. Energy is how we can win, and it's also how we can lose. Permitting in this country takes too long. China's command and control system means that they will not fail to deploy the energy needed to achieve the scale necessary to develop the most advanced models, which will drive all the benefits of AI. So I'm glad to be working with Senator Lummis on the FREE Act, which would set up a permit-by-rule structure, which would let large projects meet comprehensive standards at the front end instead of being dragged out in a case-by-case process. We all want to protect the environment and we all want to maintain US economic and technological leadership. So Mr. Intrator, what has CoreWeave's experience been in contracting power, and are you concerned that the current permitting system can make it hard for the US to achieve the capital investment and the scale needed to win this AI race?
Michael Intrator:
So as you said, access to power, access to power at scale, is certainly one of the keys to our ability to win this race. There are others, but it is one that I spend a lot of time thinking about. I separated the comment into access to power and access to power at scale, because I do think that we are moving towards a period of this race where the size, the magnitude, of the infrastructure that is required to move artificial intelligence, the labs that are building it, the companies that are building it, forward at the velocity that's necessary is going to be a specific challenge that really requires a lot of thought. We have a huge part of our organization focused on not just getting access to power but getting access to the size and scale of power that's going to be able to build the infrastructure at the scale of Abilene, or close to it, in order to allow this to move forward. It's tough, right? And it will get harder as we move through time, because the existing infrastructure that does have opportunities, that has some level of elasticity, is going to be consumed. And once that is consumed, you're going to get down to kind of a first principle: how do we get power online now? And that's really going to be challenging within the regulatory environment as it currently is configured.
Sen. Ted Budd (R-NC):
Thank you. Mr. Smith. A similar question. How is Microsoft trying to secure power for its data centers? I mean we read about that in the news recently, but what does federal policy need to focus on to make sure that we don't lose this race because we can't get enough energy?
Brad Smith:
Well, we invest to bring more electricity generation onto the grid and then to bring it through the grid to our data centers. We probably have more permitting applications in more countries than quite possibly any company on the planet. Last time I looked at it, it was 872 applications in more than 40 countries. The number one challenge in the United States when it comes to permitting, interestingly enough, is not local, it's not state. It is the federal wetlands permit that is administered by the Army Corps of Engineers. We can typically get our local and state permits done in about six to nine months. The national one, the wetlands permit, is taking 18 to 24 months. Both the outgoing Biden administration and the incoming Trump administration have focused on this. But if we could just solve that, we could accelerate a lot here in this country.
Sen. Ted Budd (R-NC):
Very helpful. Thank you. Mr. Altman. Much has been made about Chinese open source models like DeepSeek. We spoke about that a month or two ago. A concern that I have is that accessible Chinese models promoted by the Chinese Communist Party might be attractive for AI application developers to build on top of, particularly in developing-world economies. So how important is US leadership in either open source or closed AI models?
Sam Altman:
I think it's quite important to lead in both. We realize that we, OpenAI, can do more to help here, so we're going to release an open source model that we believe will be the leading model this summer, because we want people to build on the US stack. In terms of closed-source models, a lot of the world uses our technology and the technology of our colleagues. We think we're in good shape there.
Sen. Ted Budd (R-NC):
So how could federal policy further help encourage the AI ecosystem to be developed right here in the US?
Sam Altman:
Well, you touched on a great point with energy. I think it's hard to overstate how important energy is to the future here. Eventually, chips and network gear will be made by robots, and that will make them very efficient and cheaper and cheaper, but an electron is an electron. Eventually the cost of intelligence, the cost of AI, will converge to the cost of energy, and how much you can have, the abundance of it, will be limited by the abundance of energy. So in terms of long-term strategic investments for the US to make, I can't think of anything more important than energy, chips, and all the other infrastructure also. But energy is where I think this ends up. Thank you, Chairman.
Sen. Ted Cruz (R-TX):
Senator Kim?
Sen. Andy Kim (D-NJ):
Yeah, thank you. Mr. Smith, I think I'd like to start with you. I appreciated your point about what exactly the race is. We keep talking about the race, and you framed it in a particular way, saying that it's about adoption in the rest of the world, the 78%. I guess I just wanted to ask you to tease that out some more, in terms of understanding what role we could play in Congress, in government, in trying to accelerate and champion that AI adoption internationally.
Brad Smith:
I think there's two things. The first is it just shines a light on the importance of getting it right for export controls, which is the AI diffusion rule that's being discussed right now. And I think what it shows is we want to have, I believe as a country, the kinds of national security controls that ensure that, say, chips don't get diverted to China or get accessed by the wrong users, say in China, for the wrong reasons. And that is something that people have drafted in the Department of Commerce. At the same time, I believe we need to get rid of the quantitative caps that were created for all of these tier-two countries, because what they did was send a message to 120 nations that they couldn't necessarily count on us to provide the AI they want and need. And just think about it. I mean, if this is a critical part of your country's infrastructure, how can you make a bet on suppliers if you're not confident that they'll be able to fulfill your needs? So I think you in Congress and the Senate can help the White House and the Department of Commerce get this right.
Sen. Andy Kim (D-NJ):
Mr. Altman, I wanted your thoughts on this. Is that the right framing of the race? Is it about the adoption internationally, in terms of other countries? I guess I'm trying to think through it. Part of what you just said in your previous response was that we want other nations to be able to build upon the US AI stack. Is that the right framework? Is that what we're thinking about? Or is it more about the consumer? Is it more about getting the rest of the world, the 78% of the population, to adopt AI applications that are US-made? Or is it interrelated?
Sam Altman:
I think it's heavily interrelated. To me, the stack is from the chips at the bottom to the applications on the top, and we want the whole world on the US stack. We want them to use US chips, we want them to use services like ChatGPT.
Sen. Andy Kim (D-NJ):
Does having other nations building on the infrastructure component of the stack more or less then guarantee, or at least make it highly likely, that the consumers in that country will be using our products and applications? Is that the sort of theory of the case?
Sam Altman:
It probably does make it marginally more likely, but I also think if someone's using a stack that we don't trust to train models, like who knows what it's going to do, who knows what sort of back doors would be possible, who knows what sort of data corruption issues could be possible. I think the AI stack is increasingly going to be a jointly designed system from the chip all the way up to the end consumer product and lots of stuff in between. I think separating that won't work that well in practice and we shouldn't want to. Again, I think this point, this is a very critical point. The leverage and the power the US gets from having iPhones be the mobile device people most want, and Google being the search engine that people most want around the world is huge. We talk maybe less about how much people want to use chips and other infrastructure developed here, but I think it's no less important and we should aim to have the entire US stack be adopted by as much of the world as possible.
Sen. Andy Kim (D-NJ):
Yeah, I mean, when we are looking at, you're talking about our investment into models and building of that nature. How are we doing in terms of development of the applications, the AI tools and applications that are trying to embed in people's lives? Not necessarily just the overarching models. Do you feel like we're putting in the level of intensity that we need in terms of that type of development?
Sam Altman:
ChatGPT is the most adopted AI service in the world, not just in the United States, but in the world by a quite significant margin. We're very proud that people like it and we need to keep pushing on that. I think it's important for all the reasons you just discussed, there are many other US companies building incredible products and services that are also getting globally adopted. This is what the US does best.
Sen. Andy Kim (D-NJ):
Yeah. Dr. Su, I wanted to just ask one last point to you. Over and over again, each of you has talked about talent as this incredible power, but it also could be a bottleneck for us. How are we doing when it comes to development of talent in this country? If you were to give us a grade, what would you grade us at in terms of our development right now?
Dr. Lisa Su:
Thank you, Senator, for the question. Look, I think the smartest engineers are in the United States. We have a great base of talent, but what I will say is we need more hardware developers, software developers, application developers.
Sen. Andy Kim (D-NJ):
How wide is that delta? If we're talking about this as a race, as you did, is that a space where we have a larger delta, or is that a place where it's closing rapidly too?
Dr. Lisa Su:
Well, I think we do have a very talented overall talent base, but we also have the desire to have the best. And that includes not only US nationals, but also having the best international students.
Sen. Andy Kim (D-NJ):
Drawing the talent from.
Dr. Lisa Su:
That's right. I think high-skilled immigration is one of those areas where we want the best people in the world to be doing their work in the United States. And Senator, if I can just add something to your previous point about the cycle and what race we're trying to win: technology is one of those things where you can have a very virtuous positive cycle. So in other words, when we lead and more people adopt, that means more developers, that makes our technology better, and that increases our lead. What we want is to have our leadership just increase over time.
Sen. Ted Cruz (R-TX):
Senator Schmitt?
Sen. Eric Schmitt (R-MO):
Thank you, Mr. Chairman. Mr. Altman, I'll start with you. I really enjoyed and was inspired by your story, with a light on in the home you grew up in in St. Louis, and you talked about the spirit of innovation that is the spirit of St. Louis. As a fellow St. Louis native, that's a good story to hear, and we just look forward to more investment in St. Louis from your company. That'd be great too. So I'll put a plug in for that. But I do want to ask you specifically: there's a lot made of sort of the comparison between the United States and the regulatory environment and what exists in Europe. What specifically, and I'll open this up to you, what specifically has gone wrong in Europe that we can draw some conclusions from?
Sam Altman:
First of all, we'd love to figure out how to invest more in St. Louis. I'd love an excuse to get to go home more often. I'll point out one example that I think is just very painful to users. When we launch a new feature or a major new model, we have what is now considered a little bit of an in-joke, where we say we have this great new thing, not available in the EU and a handful of other countries, because they have this long process before a model can go out. And there will be, I believe, great models and services that are quite safe and robust that we will be unable to offer under other regulatory regimes. And if you are trying to be competitive in this new world, and if you are consistently some number of months behind what people in other countries get access to, that's an example that's extremely painful to users.
Sen. Eric Schmitt (R-MO):
And you mentioned your observation that the AI stack may get more vertically integrated. So how does that work then? Because right now the best estimate, I suppose, is that, I don't know, China's two months to six months behind, maybe, on large language models. Hopefully with some of the advances we're seeing in the US, maybe there's a degree of separation. It's hard to know exactly, right, with DeepSeek. But then you get down to the chips, and that advantage is more like a couple of years, probably, something like that. So if that's where we're headed, does that increase the US' advantage in your view? Or does that sort of allow China to catch up quicker as we get more vertically integrated?
Sam Altman:
I think there's a lot of things that can increase US leadership, but we touched on this earlier and I think it's so important: there will be great chips made around the world, there will be great models trained around the world. If United States companies can win on products, and on all of the positive feedback loops that come from how you can improve them once real users are using your products in their daily lives for their hardest tasks, that is something special that is not so easy to catch up with just by doing good chips and good models. So making sure that the US can win at the product level here, obviously I'm talking my book a little bit, but I really do believe it is quite important, and that's in addition to all of the chips, the infrastructure, the algorithms, and the data. I think this is a new area where the US is really winning and has a very strong compounding effect.
Sen. Eric Schmitt (R-MO):
Thanks. Mr. Intrator. Did I pronounce that correctly by the way?
Michael Intrator:
Yep.
Sen. Eric Schmitt (R-MO):
Okay, thank you. I want to turn a little bit sort of staying on this regulatory environment. One of the things I think that's most concerning that's coming out of Europe is this sort of censorship regime that exists not just online but in real life. But certainly it's happening online. I mean people are being arrested for things that they say online. And one of the concerns I have with AI, I suppose is that if we end up with a place where it's somehow policing misinformation, and I think even in NIST's most recent voluntary standards, one of the risks to be on the lookout for was the spread of misinformation. So the point of the question is how do we make sure that, I think part of what's going wrong in Europe is it's a funneling of information. And in my view, whether I agree with the point of view or not, it ought to be out there. People can make their own decisions. You combat speech you don't agree with, not by censoring it, but with more speech. What are some lessons to be learned there and make sure that does not happen here?
Michael Intrator:
So Europe is moving forward with its regulatory regime in a European way, and from our seat, where we have to make these enormous capital investments, one of the things about the approach that Europe is taking that we are deeply concerned about every day is the balkanization, to use a word, of how they go about allowing information to flow, how they go about regulating, and how each component of their union has its own set of rules, which will be tremendously challenging in Europe as time goes on, because it is really hard to make the magnitude of investments that we,
Sen. Eric Schmitt (R-MO):
Beyond that, though, jurisdictionally, I'm talking about content now.
Michael Intrator:
So the role of our company is really kind of below that. Sam and Microsoft are going to get a lot more attention paid to the content level because of the role that they play in the stack. It's not really where we are primarily focused; we're really focused on the investment side of it.
Sen. Eric Schmitt (R-MO):
If any of you would like to, Sam, or Mr. Altman, if you would like to respond to that, I'd like to get some answer.
Sam Altman:
I think, well, first of all, I strongly agree that people getting put in jail for stuff they say online is very not American, not what we should be doing. AI is quite different from social media, at least in its current evolution. People are using these tools in a sort of one-on-one way instead of this massive thing online. So I think it's easy to make too many analogies, but it's a little bit dangerous to try to talk about AI and the things we're going to face here in the same way that we did for social media. But our stance is that we need to give adult users a lot of freedom to use AI in the way that they want to use it and to trust them to be responsible with the tool. I know there's increasing pressure in other places around the world, and some in the US, to not do that, but I think this is a tool and we need to make it a powerful and capable tool. We will of course put some guardrails in, at very wide bounds, but I think we need to give a lot of freedom.
Sen. Eric Schmitt (R-MO):
Yeah, I'm out of time, but there are a lot more questions that we'll follow up with. Thank you, Madam Chair.
Sen. Maria Cantwell (D-WA):
Thank you. Senator Hickenlooper.
Sen. John Hickenlooper (D-CO):
I appreciate that line of questioning. I was ready for you to continue as well. I could have given you a minute or two. Mr. Smith, Microsoft has a long and deep history in transforming workplaces all over the world through software, from the Windows operating system to its office applications like PowerPoint and Excel, and now the AI-powered Copilot application. Software development lifecycles seem to be becoming increasingly shorter, and updates more frequent. What are the internal processes that Microsoft follows to evaluate Copilot's accuracy and performance before it is released? And what kind of independent review teams, other than Microsoft's own product developers, are involved in that? Who do you bring in to help with that?
Brad Smith:
Well, first of all, since most of what we're talking about here, when you're talking about our copilots, starts with models that are developed at OpenAI, I would say OpenAI has its internal process. There is then a joint body, what's called the DSB, a deployment safety board, where we decide together whether something is safe to deploy, as the name implies. Then, at the applications level, we have our own internal deployment safety board. We have a variety of engineering tools that we use to assess these features. We test these features. We have red teams, meaning sort of competing teams that often go to work to attack the features. And then ultimately the product is released when those tests are completed and the results are satisfactory.
Sen. John Hickenlooper (D-CO):
Good. I like that. Well, lemme go over to Mr. Altman. Obviously you all have a natural incentive to ensure that the products are high quality and safe, but the field is so competitive. And in applied research, with rigorous testing, these constant improvements really are fundamental steps to the performance of a model. So risk assessments are a key tool, and I'm a big believer in evidence-based technical standards. I've been accused of being the only real scientist who's published peer-reviewed papers in the Senate. So Mr. Altman, do you believe that under appropriate circumstances, independent evaluations based on standards, performed by qualified evaluators and done voluntarily, could help validate the testing that you're performing internally and in conjunction with peer companies?
Sam Altman:
Thank you, Senator. And I think it's awesome that you have published peer-reviewed papers and would love to see more of that.
Sen. John Hickenlooper (D-CO):
Well, wait, I was on the Maslow's triangle of science. I was near the bottom. I was a geologist, so that's not high up in that. Geology is great.
Sam Altman:
Yes. I think what you say is very important. It's an important part of our process today. External testing helps us find things that we may have missed internally and we're very proud of our safety record on the whole, not that we've not been perfect and we're continuing to learn new things, but I think we do have a process that is leading towards models that the public generally thinks are safe and robust to use. And we've developed a lot of techniques to be able to continue to deliver that. But external testers and red teamers are a critical part of that process and I think they've helped us find many things in the models to improve.
Sen. John Hickenlooper (D-CO):
Mr. Smith, would you add anything to that?
Brad Smith:
No.
Sen. John Hickenlooper (D-CO):
Okay. Got it. Someone giving testimony who doesn't have something to add? It's a moment of scientific reflection. Dr. Su, the bipartisan CHIPS and Science Act was a historic effort to try and maintain US leadership in emerging technologies like semiconductors, but others as well, as a technology arms race continues globally. And you were talking about this. AMD plays a key role in delivering state-of-the-art designs for the new chips that are going to power the electronics and devices that are going to allow AI to become global. As scientists work around the clock to develop new breakthroughs, to try and increase and improve performance, but at the same time shorten R&D timelines, what do you see as the next frontier of chip technology in terms of energy efficiency? And that's not just based on the Chinese competitors, but how can we work together to improve direct-to-chip cooling for high-performance computing?
Dr. Lisa Su:
Well, thank you for the question, Senator. I would say, look, there is a tremendous amount of innovation that's going on in the semiconductor sector today. The CHIPS and Science Act was certainly helpful in raising the profile of chips in the United States. Relative to what we are doing to go faster and build better and more power-efficient chips: frankly, we're using AI extensively through our chip development cycles, and it does allow us to augment what are typically very long cycles, many years, several years, for us to develop chips. We can shorten those and also improve the efficiency. And there are lots and lots of great new technologies in terms of cooling technologies that are super important for us to build the large-scale systems that we talked about earlier today. So thank you for the question.
Sen. John Hickenlooper (D-CO):
You bet. Alright, I'm out of time. I'll yield back to the chair. Thank you all.
Sen. Ted Cruz (R-TX):
And Senator Hickenlooper, I will say as a Texan whose parents were in the oil and gas business, I think geologists are awesome.
Sen. John Curtis (R-UT):
We have a consensus.
Sen. Ted Cruz (R-TX):
Senator Curtis.
Sen. John Curtis (R-UT):
Thank you, Mr. Chairman. It's a delight to be here. Mr. Altman, you started kind of a one-upmanship on computers, and I will just tell you: in 1985, the month you were born, I was attending a class at Brigham Young University and carried in a laptop, and I was almost kicked out.
Sam Altman:
What laptop?
Sen. John Curtis (R-UT):
It was a TRS-80.

Sam Altman:
Oh, awesome.

Sen. John Curtis (R-UT):
Made by Radio Shack. I upgraded the memory from 40K to 80K; it ran on four AA batteries.

Sam Altman:
That's incredible.

Sen. John Curtis (R-UT):
Yeah. So I'm very envious of your generation. Let me start with you, if I would. I think Utah would aspire to lead out with data centers and advanced technologies. Could you just address, for states, and Utah specifically, what it is that makes them attractive to projects like Stargate?
Sam Altman:
Yeah, and I know that we're having productive discussions about some potential sites in Utah. Power, cooling, a fast permitting process, a labor force that can build these things, the electricians, the construction workers, the entire stack, a state that wants to partner with us to move quickly. Texas really has been unbelievable on this. I think that would be a good thing for other states to study, but we'd be excited to try to figure something out.
Sen. John Curtis (R-UT):
Thank you. I think I could speak for our state leaders. We would be excited as well. But as you know, this also brings challenges and one of those challenges, the demands for energy. What's your thoughts on how we protect rate payers and put a little bit of a firewall between them?
Sam Altman:
I think the best way is just much more supply, more generation. I think if you make it easy to reasonably profitably create a lot of additional generation capacity, the market will do that. That will not only not drive up rates because of the AI workload; hopefully it'll drive them down for everything. And we've talked a lot about the importance of energy to AI. Energy is just really important to quality of life. One of the things that seems to me the most consistent throughout history is every time the cost of energy falls, the quality of life goes up. And so doing a lot to make energy cheaper: in the short term, I think this probably looks like more natural gas, although there are some applications where I think solar can really help. In the medium term, I hope it's advanced nuclear, fission and fusion. More energy is important well beyond AI. In some sense, we have these dual revolutions going, of AI and energy: the ability to have new ideas and the ability to get them done, to make them happen in the physical world where we all live. These are kind of the limiting reagents of prosperity, and let's have a lot more.
Sen. John Curtis (R-UT):
Thank you. Mr. Smith. We've talked about how significant power is to the success here. What role do you think Microsoft and other tech leaders have in developing energy and particularly the right type of energy?
Brad Smith:
I think we have a tremendous responsibility to contribute to the solution, and I think Sam helped with his list. I would highlight two things, and I just would, I guess, illustrate it with what we do everywhere, but most recently with a major site in southeastern Wisconsin. We went from zero, basically, to becoming the largest industrial user of electricity in the state, roughly 400 megawatts. And so we worked with the local utility; we made the investment to help, and really enable them, to expand their electricity generation. Now, that electricity then needed to be delivered from their power plant through the grid to our data center. We went to the Public Utilities Commission and we proposed a rate increase on ourselves, because we thought it was important that we pay for that improvement to the grid so that the neighbors, so to speak, would not have to. And I think what it really illustrates is the collaborative partnerships that are needed to provide the capital, to do the construction, to improve the grid, and to be, I think, very sensitive to the community as a whole.
Sen. John Curtis (R-UT):
Thank you. Mr. Altman, let me come back to you. I was a small business owner, so I have a special spot in my heart for small business owners. Can we talk a little bit about ChatGPT and how it might assist small business owners? And let me paint a little broader picture. We've heard a lot about other tools, perhaps out of favor, particularly with the US government, that are very helpful for small businesses. But I don't know if small businesses fully understand the platform that you have and how they might use it for marketing, for data research, and in other ways to help their small business be successful.
Sam Altman:
There were all these moments as ChatGPT was beginning to take off where we would be like, oh, we may have a hit on our hands: someone's using it for this and this, strangers are talking about it, you see someone using it in a coffee shop. But one of the ones that really sticks out for me came pretty quickly after ChatGPT launched, in the first six months, say. I was in an Uber and the driver was making conversation. He's like, have you heard of this thing called ChatGPT? It's amazing. And I was like, yeah, what do you think about it? And he was using it to run basically his entire small business. He was like, I run a laundromat, and I had all these problems: couldn't find good people to write my ads, couldn't get legal documents reviewed, couldn't answer customer support emails. He was a mega early adopter, but he was one of these people that was using AI to make a small business work. And we talked about that story a lot at the time, but it's nice to reflect on it again now. We've now heard that at scale from a lot of people, but that was one of those moments early on where we were like, oh, this is maybe going to work.
Sen. John Curtis (R-UT):
And I'm out of time. But just to remark: this is more than just something that helps proofread emails. And you don't need to comment, because I'm out of time, but I think we would all agree with that.
Sam Altman:
It is.
Sen. John Curtis (R-UT):
And look forward to seeing these applications move forward. Mr. Chairman, I yield my time.
Sen. Ted Cruz (R-TX):
Senator Duckworth.
Sen. Tammy Duckworth (D-IL):
Thank you, Mr. Chairman, and thank you to the panel, all of you, for being here today. I want to begin by talking about the importance of partnerships between the private sector and our national laboratories in maintaining United States leadership in AI. Illinois is the proud home of two crown jewels of the national laboratory system: Fermilab, America's premier particle physics and accelerator laboratory, and Argonne National Laboratory, home to the Aurora supercomputer that will accelerate breakthroughs in AI, cancer research and fundamental physics. There is nothing more important than sustaining and amplifying investments in our nation's incredible network of national labs. Yet Donald Trump and Elon Musk, with the support of some Republicans in Congress, are plotting to take a chainsaw to the vital research initiatives being carried out across our country. This is a self-sabotaging attack, plain and simple. And if allowed to proceed, Trump and Musk will inflict lasting harm on our innovative capabilities and capacity that our enemies could only dream of achieving. Does anyone truly have confidence that, had DOGE been around decades ago, they would not have cut the project that created the internet as an example of wasteful publicly funded research and development? So my question to any member of the panel is the following. Can you explain the importance of the national lab system to maintaining our research edge and discuss any partnerships you've established or are currently pursuing, especially those threatened by massive cuts to the national labs' research?
Sam Altman:
We partner with the national labs. So maybe I could take a first cut at this also, Senator, I would love to get to visit Fermilab someday. That would be like a…
Sen. Tammy Duckworth (D-IL):
That was my next question.
Sam Altman:
That would be a real highlight. That'd be very cool. There's many wonderful things that AI is going to do for the world, but the one that I am personally most excited about is the impact AI will have on scientific discovery. I believe that new scientific discovery is the most important input to the world getting better and people's quality of lives getting better over time. It is hard to overstate how much of where we are is because of scientific advancement, and where we would be without it. So we're thrilled to get to partner with the national labs on this. I think science has not been as efficient as it can be, and we're also thrilled to hear from scientists that they're multiples more effective than they used to be. And I think that AI tools will mean we can accomplish, at some point, a decade's worth of scientific progress in a year for the same cost or even less. This will be one of the most important contributions, in my opinion, that AI makes to the world. And it's no longer theoretical. The national labs are a great example: that's the only partnership where we've given a copy of our model weights to another organization. It's a very deep and important partnership to us, and I expect that it will really bear fruit.
Sen. Tammy Duckworth (D-IL):
Thank you. Anybody else on the panel?
Brad Smith:
Yeah, I think you highlight a very important issue. This country has 17 national labs administered by the Department of Energy and about 85 to 90 research universities, and together they are the fabric of much of scientific discovery, and have been since the Manhattan Project in World War II. We in the tech sector, we at Microsoft, work with almost all of them. And there's a particular cycle of innovation that the United States has mastered: you have curiosity-driven research in these institutions, and then the advances move out of those institutions into startups and into larger companies. And what I always find interesting is, as I meet with officials around the world, they have studied this and they seek to emulate it. And I always worry that in the United States we run the risk of taking it for granted. We should never take this for granted. It is the foundation for the country's technological leadership.
Dr. Lisa Su:
Very much so. I just wanted to add to that: we are also very large supporters of the public-private partnerships with the national labs. I think the national labs have, in a way, always tried to look ahead of the curve, and that's a great place for us to invest. We think they are a key piece. We have partnered with all of the national labs as well over the last decade, and that continues to be a place where I think there can be significant public-private partnership.
Sen. Tammy Duckworth (D-IL):
Thank you. Mr. Intrator.
Michael Intrator:
I just think it would be really interesting to come to these AI factories and walk or travel through these institutions and identify all the different pieces of the science that lead back to, and were ultimately driven and founded on, something that came out of those institutions, because it's amazing, actually.
Sen. Tammy Duckworth (D-IL):
Thank you. And would any of the remaining three of you like to come to a lab in Illinois, either Fermi or Argonne? I will give you personal tours. Thank you. All right, all four of you. It's done. Thank you, Mr. Chairman.
Sen. Ted Cruz (R-TX):
Thank you. Senator Young.
Sen. Todd Young (R-IN):
Thank you, Mr. Chairman, for holding this important hearing on winning the AI race. It's good to see our panelists here. One of the things that I like to underscore whenever I talk about this issue is that we are not just discussing a race to create jobs, not just discussing a race to figure out how to eke out more growth from our economy, although that's important, and not just trying to identify how humans, especially Americans, can flourish more through the application of AI solutions to our daily lives in various ways. This is an issue of national and economic security. I want folks at home to get that. I know all our panelists are highly conversant and knowledgeable about that.
In my discussions with you and many others, I've heard we need to work with like-minded partners and allies to win this race, and that it's only going to be done collectively. I've heard here today from a number of you that this race is in part about getting market share, diffusion of our AI models and solutions into other countries. It is through that means, and perhaps you can elaborate on your thoughts here, that we can see that our own values are advanced. These models presumably will be embedded with our values related to privacy and transparency and property rights and freedom of speech and religion, not the values of the Chinese Communist Party on each of those various fronts. And then if we can establish digital trade rules, cross-border agreements on digital trade with these other countries, we could conceivably erect higher barriers to entry for models that don't come embedded with our standards, models that, say, the Chinese Communist Party has given sanction to.
So there's a geopolitical, national security overlay to this entire conversation, which is why I think the Chairman's emphasis on not overly constraining innovation or deployment is very important. But it's also why I think it's important that we be thinking about how to work with other countries on their standards development. And so that's where I want to begin asking questions, and I'll start with Mr. Smith. If the United States doesn't adopt some standards through some entity, whether it's NIST or another federal entity or federally sanctioned entity, then won't other nations go ahead and feel the need to adopt their standards without any consultation with the United States?
Brad Smith:
I think it's a really important point you make, and it is the lesson from the evolution of privacy law. The United States didn't adopt a national privacy law. Europe did, twice. And most American companies of any size today apply across the United States practices that comply with European privacy law. It's just more efficient. So I think the United States needs to be in the game internationally to influence the rest of the world. And you cannot be in the game if you do nothing. You must do something. So you take Senator Cruz's idea, a lightweight approach, and then you build support around it.
Sen. Ted Cruz (R-TX):
So just to unpack that, and I'll stick with Mr. Smith with apologies to everyone else because my time's limited. Would it be easier to shape the standards of other large economy countries that share most of our values if we already have a set of standards adopted?
Brad Smith:
Generally, yes. I think we always have to be careful because if you go too soon, you go before the standards have really come together, but you've got to have some kind of model that you can show the rest of the world and win support for.
Sen. Ted Cruz (R-TX):
And then presumably standards could be harmonized, right? They're not set and chiseled onto a tablet, so to speak, right?
Brad Smith:
No, that's indispensable. I mean, if our technology is going to go around the world, we need a set of laws or regulations that in effect create that basis for reciprocity and interoperability.
Sen. Todd Young (R-IN):
Okay. I only have 25 seconds left. Are there any violent objections to Mr. Smith's position? Because that seems eminently reasonable to me. It seems consistent with a light touch approach, but it also shows a certain sense of urgency that the United States needs to act. The last thing I'll say in my remaining 10 seconds is that I am planning on introducing legislation today called the AI Public Awareness and Education Campaign Act with several of my colleagues. And our aim is to have a whole of government approach to foster a greater awareness of AI literacy and grow STEM opportunities to create the next generation of our workforce. And looking forward to moving that forward. So it will be available for public review, critique, even accolades. And Mr. Chairman, I yield back. Thank you.
Sen. Ted Cruz (R-TX):
Senator Blunt Rochester.
Sen. Lisa Blunt Rochester (D-DE):
Thank you, Chairman Cruz. And thank you so much to the witnesses. This is such an important hearing. Five minutes will not suffice for me; I'll be submitting some questions for the record. I noticed that for Mr. Altman and Mr. Smith, when the question of "paint me a picture of the future" came up, you actually leaned up in your chairs. There was a level of excitement, and that's how I am about the future. When I came into the House of Representatives in 2017, I started a bipartisan future of work caucus because I had a concern that, number one, there were certain groups of people that were going to be left behind, but also that, as a country, we could be left behind. And I had an event where we had everyone walk into the room and use a word cloud to tell me what you think of when you hear "the future of work."
The biggest word coming in the door was fear. The biggest word walking out the door was opportunity. And so to me, this conversation is so vital to think about the opportunities, but also making sure that we are watching out for ethics, watching out for scams, watching out so that technology does not take over the human. And so I am just grateful for the conversation. And Mr. Altman, I listened to an interview that you gave with Lester Holt maybe a year or so ago, and you talked in that interview about how OpenAI wasn't initially even about making a product; it wasn't about the money. And so I know you were incorporated in Delaware, and I understand you've been working with our attorney general during the previously proposed plan, not legislation, to transition to a for-profit. And this Monday, OpenAI decided to apply to become a public benefit corporation instead, and to have the PBC governed alongside your nonprofit arm. What went into this decision, and what considerations influenced the timing of the organizational change?
Sam Altman:
Thank you for the question, Senator, and the chance to explain this. It's a complicated thing that I think has gotten misrepresented, so this is a wonderful forum to talk about it. We never planned to have the nonprofit convert into anything. The nonprofit was always going to be the nonprofit, and we also planned for a PBC from the very beginning. There were a bunch of other considerations, about whether the PBC board would control the nonprofit somehow, or how our capital structure was going to work, that there was a lot of speculation on, most of it inaccurate, in the press. But our plan has always been to have a robust nonprofit. We hope our nonprofit will be one of the best, maybe someday the best, resourced nonprofits in the world, and a PBC with the same mission that would make it possible for us to raise the capital needed to deliver these tools and services at the quality level and availability level that people want to use them at, but still stick to our mission, which we've been proud of our progress towards over the last almost-decade. So we had a lot of productive conversations with a lot of stakeholders and a lot of lawyers and a lot of regulators about the best way to do this. It took longer than we thought it was going to; I would've guessed that we would've been talking about this last year. But now we have a proposal that people seem pretty excited about, and we're trying to now advance it.
Sen. Lisa Blunt Rochester (D-DE):
And Dr. Su, your company primarily operates in the physical hardware portion of the AI stack. I have a bill with Senators Cantwell and Blackburn called the Promoting Resilient Supply Chains Act, which authorizes the Department of Commerce to strengthen American supply chains for critical industries and emerging technologies. Dr. Su and others, semiconductor and chip manufacturing is critical to the advancement of AI, but we're facing these global supply chain constraints. What specific policies, and I know you mentioned policies as well for supply chains, would we need to adopt to help American companies overcome the supply chain issues and compete internationally with our rivals?
Dr. Lisa Su:
Thank you, Senator, for the question. There's no question the semiconductor supply chain, and overall supply chains, are really critical for us to win the AI race. I think from a semiconductor standpoint, the efforts that have been made to move manufacturing back to the United States have been positive. I think they are a start; there's a lot more that we can do. And one of the most important aspects of it is really to think about it end-to-end. There are so many steps to go from beginning to end in a semiconductor supply chain, including advanced wafers, including packaging, including the back end and system test. All of those avenues need to have a footprint in the United States. And then we have many allies around the world, which are excellent partners as part of the global resiliency in the supply chain, and we would like to see those partnerships continue to flourish.
Sen. Lisa Blunt Rochester (D-DE):
And last question if I can. Mr. Smith, how do you see the interdependence between the AI stack sections creating either vulnerabilities or opportunities in the AI supply chain?
Brad Smith:
I think they create more opportunities than vulnerabilities, because it enables companies to do what they do best and then work together. And the world today has an integrated supply chain for anything that you buy; we just don't think about it when we go to the grocery store. I think one of the strengths of the tech sector is that we have, I'll call it, a string of pearls: great companies in every layer of the stack. And we're going to need, frankly, more great companies, especially at the applications layer, and that is how we work together.
Sen. Lisa Blunt Rochester (D-DE):
Thank you so much. I am out of time, but we will be following up with questions for the record as well as individually. Thank you. And I yield back.
Sen. Ted Cruz (R-TX):
Mr. Moran.
Sen. Jerry Moran (R-KS):
Mr. Chairman, thank you very much. Mr. Smith mentioned data privacy, which has been a topic of mine for a long time, and we've been unsuccessful in getting legislation adopted. But I still have the goal of making certain that consumers have control over their own data. And I wanted to ask you, Mr. Altman: how can we provide consumers with more control over how their data is used by AI companies while preserving the utility of the AI systems? So how do you get more privacy and still get the benefits?
Sam Altman:
So there are all of the standard privacy controls that companies like ours and others build, and should. But there's a new area that I'd love to flag for your consideration, which is that people are sharing more information with AI systems than I think they have with previous generations of technology. And the maximum utility of these systems happens when the model can get very personalized to you. So this is a wonderful thing, and we should find a way to enable it. But the fact that these AI systems will get to know you so well over the course of your life, I think, presents a new challenge and level of importance for how we think about privacy in the world of AI: how we're going to think about guaranteeing people privacy when they talk to an AI system about whatever's happening in their lives; how we make sure that when one system connects to another, it shares the appropriate information and doesn't share other information; and that users are in control of that. I believe this will become one of the most important issues with AI in the coming years as people come to integrate this technology more into their lives. And I think it is a great area for you all to think about and take quite seriously.
Sen. Jerry Moran (R-KS):
We do. We just don't have any success in reaching conclusions. But thank you for the encouragement. I chair the Commerce, Justice, Science appropriations subcommittee that funds the Department of Justice, and it plays a significant role in the cybersecurity of our country. I just came from a budget hearing with FBI Director Patel, in which we covered cybersecurity threats. AI, I think this is true, can be used on both sides of a cybersecurity attack: it can be used to automate phishing and malware creation, but machine learning can also increase our ability to detect and respond to cyber threats. What should Congress think about in the allocation of federal resources for cybersecurity, and what should we consider when it comes to AI?
Brad Smith:
I would say that AI, as you said, is both an offensive weapon and a defensive shield when it comes to cybersecurity. And as with many other things, the front line of this for the last few years has been in Ukraine, because Russia has such a sophisticated cyber attack capability. And what we've found as a company that's been involved in supporting Ukraine since literally the moment that war began is that AI is a game changer. We have intercepted attacks against Ukraine faster than a human could detect them, and we block those attacks from taking place. So you deploy AI into, call it, the front line of the products themselves. We also have to recognize that it's ultimately the people who defend not just countries, but companies and governments: the chief information security officers, or CISOs. So we've created what's called a cybersecurity copilot that basically automates for those individuals much of the workflow that takes their time, so that they can be more effective and efficient.
When it comes to federal appropriations, I think that, to put it simply, the United States government must remain at the forefront of having for itself the cybersecurity capabilities that it needs to defend the government. And every day, I mean, we are in government agencies today, during this hearing, pushing the Chinese out of agencies and the like, and this will be happening every day of every year from now to probably eternity. So we must keep the US government well-funded in this space. And I think we also need our intelligence agencies, and especially the NSA, to be well-funded so they can remain at the forefront when it comes to global leadership in this field.
Sen. Jerry Moran (R-KS):
Thank you for your observations and encouragement. My final question: rural areas, the places I come from, often lack high-speed broadband, and since many AI tools rely upon connectivity, I'm concerned that many parts of the country, and many parts of Kansas, may not be able to access the benefits that AI will bring to businesses, schools, healthcare, et cetera. What can the federal government do to be supportive of the development and availability of on-device or low-bandwidth AI systems that do not rely on constant connectivity?
Sam Altman:
I'm generally pretty excited about what AI will do here, because you can offload so much of the processing to the cloud and then ship a relatively small amount of data. If you think about ChatGPT, text comes in, there's a brain that thinks about it really hard, and some text comes back. We can support people in low-connectivity areas quite well with the same quality of service. Separately to that, I think getting great connectivity everywhere is important, but in the specific area of AI, I think we can actually address that gap quite well.
Sen. Jerry Moran (R-KS):
That's good to know. Thank you very much.
Sen. Maria Cantwell (D-WA):
Mr. Luján.
Sen. Ben Ray Luján (D-NM):
Thank you, Madam Chair. And first I want to begin by recognizing and thanking Mr. Altman and Mr. Smith for your organizations' ongoing involvement in the NIST US AI Safety Institute, as well as Dr. Su and Mr. Altman for your ongoing partnerships with our national laboratories. Now, Dr. Su, Mr. Altman, can you explain how your partnerships with the national labs support scientific research? You explained this in response to a question from Senator Duckworth as well, but if you could just touch on it quickly.
Sam Altman:
Our latest models, like o3, are good at scientific reasoning, and so scientists are able to use these to help them review literature, come up with new ideas, propose experiments, and analyze data in a way that the previous generations of models just couldn't. We've had the national labs and other scientists spend time with previous models, and they'd say, oh, this is kind of cool, it's interesting, it's not transforming things. These new models are the first time we're hearing from scientists at the national labs and elsewhere that this is a legitimate game changer for their research output.
Sen. Ben Ray Luján (D-NM):
Appreciate that. Dr. Su?
Dr. Lisa Su:
Yeah, I would add the same. I think our partnerships with the national labs have seen just tremendous opportunity. We have large scale compute across the national labs and the ability to really develop new applications that take advantage of, let's call it traditional high performance computing together with the new AI model capability that we just talked about, is I think a great opportunity to substantially move forward the ability for scientific discovery.
Sen. Ben Ray Luján (D-NM):
To both of you again: can you explain why federal investments in foundational research and standards bodies are crucial to your companies?
Sam Altman:
I think standards can help increase the rate of innovation, but it's important that the industry figure out what they should be first. I think a bad standard can really set things back, and we've seen many examples of that in history. I do think there's a new protocol to discover here at the level of importance of HTTP; this is just one example, there are many other things too. I believe the industry will figure that out through some fits and starts, and then I think officially adopting that can be helpful.
Sen. Ben Ray Luján (D-NM):
Dr. Su.
Dr. Lisa Su:
I believe public-private partnerships really enable us to think, let's call it, ahead of the curve. There are lots of things that we do in industry, and we do them very, very well. However, the beauty of the national labs and federal research is that it does allow, let's call it, a bit more blue-sky research, and I think that's a very positive add. So I think the key is how we can make sure that one federal dollar goes much, much further than that with private investment on top of it.
Sen. Ben Ray Luján (D-NM):
Yesterday I reintroduced a piece of legislation called the TEST AI Act, which has bipartisan support and which would simply improve the federal government's capacity to test and evaluate in this area as well. So I very much appreciate both your responses, but this is just one of many steps, I would argue, that is needed to ensure that the United States stays ahead. Now, despite strong support across the country, including from industry leaders here today, President Trump is annihilating budgets for basic research, and questions abound. Many, I'll argue, believe this will destroy our nation's competitive advantage. I simply call on all my colleagues to look at the investments in the National Science Foundation, the National Institutes of Health, and the Department of Energy Office of Science. Let's work together; if there are questions that we have, let's find ways to address those, but let's ensure that these investments are making a positive difference so that we have more successes and more hearings celebrating what we're celebrating today. Now, beyond your partnership with the federal government, I would like to know more about how you partner with local communities when building out data centers. Data centers put a strain on energy and water resources; however, unlike other businesses, they do not necessarily introduce many long-term jobs and economic benefits. So Mr. Smith, how many engineers do you have dedicated to model or hardware optimization to reduce energy use? And when you build a center, what initiatives do you have in place to reduce water use?
Brad Smith:
I don't know off the top of my head the number of engineers working on optimization, but I'd be happy to track down an answer and get it to you. Water use is a huge priority, especially for data centers in, for example, the southwestern United States and other parts of the world where water is in short supply. If you look at our data centers today, they run on liquid cooling. It's a closed-loop system; the liquid is a combination of, frankly, water and other chemicals, but basically once it starts running, almost all of the water is recycled. So the amount of water that we consume is typically far, far smaller than what most people would estimate. We also have a commitment to water replenishment. Our goal is to be water positive, meaning that we're providing more water to the community than we are consuming. So, for example, across the United States today we have more than 90 water replenishment projects, including one that focuses on the San Juan River in your state of New Mexico, focusing on water security for the river. So I think it's a good example of how we can play a responsible role in addressing an issue of growing importance.
Sen. Ben Ray Luján (D-NM):
Appreciate it. Mr. Intrator, same question.
Michael Intrator:
Yeah, I can't answer the question of how many engineers we have focused on it, but I will say that the ability to extract more computational power out of a given megawatt is of paramount importance to my company and to all of us in this room. And we spend an enormous amount of time integrating the most bleeding-edge technology, which is a step function more efficient in terms of its computational output than the legacy technology has been. Moving to liquid cooling has just been an incredible improvement in efficiency. And ultimately, we face this problem from within a given data center, within a given power envelope: how much can we move the computational resources forward? And that's really an important part of what we do.
Sen. Ben Ray Luján (D-NM):
I appreciate it, Mr. Chairman. I have other questions I'll submit for the record. Mr. Moran did ask one question, Mr. Altman, and you responded to it, but can you all just answer yes or no: is it important to ensure, in order for AI to reach its full promise, that people across the country be able to connect to fast, affordable internet? Dr. Su?
Dr. Lisa Su:
Yes.
Sen. Ben Ray Luján (D-NM):
Yes. Thank you. Appreciate it. I yield back. Thank you.
Sen. Ted Cruz (R-TX):
Thank you. Senator Lummis.
Sen. Cynthia Lummis (R-WY):
Thank you, Mr. Chairman, and thank you all for coming today. I really have been amazed at the outstanding progress that continues to be made in this field, and I'm already seeing people in Wyoming that are using ChatGPT or Claude to improve their businesses, whether it's healthcare or mining or oil and gas or education, even ranching. I'm just really excited about what this opportunity brings to America. Now, as I see it, the world has presented us with two paths. On one hand, the EU has chosen to regulate first and ask questions later. The GDPR is already limiting European access to the most capable AI models. On the other hand, China appears to be fast-tracking AI development, standing up large amounts of energy very quickly in an attempt to outcompete America. So I'd like to ask a few questions about how we can make sure we get the full benefit of this technology and accelerate its development. First question: over the past year, we've seen many states, including California and Texas, consider their own AI frameworks, each one significantly burdensome in its own right. At the same time, our lead against China is shrinking to only about six months. So first of all, Mr. Altman, could you please sketch out what the world could look like if the US were to have a patchwork regulatory framework, and how that could impact our competitiveness?
Sam Altman:
I think it would be quite bad. I think it is very difficult to imagine us figuring out how to comply with 50 different sets of regulation. And in many of these states, there have been dozens of different bills proposed, several of which, I understand, could be passed, and that will slow us down at a time when I don't think it's in anyone's interest for us to slow down. One federal framework that is light touch, that we can understand, and that lets us move with the speed that this moment calls for seems important and fine. But the sort of every-state-takes-a-different-approach here, I think, would be quite burdensome and significantly impair our ability to do what we need to do, and what hopefully you all want us to do too.
Sen. Cynthia Lummis (R-WY):
Does anyone disagree with Mr. Altman's assessment of a patchwork? Thank you. I have some questions about the infrastructure that is going to be necessary to lead and compete in AI. So my next questions are for our infrastructure providers, Mr. Smith and Mr. Intrator?
Michael Intrator:
That's correct.
Sen. Cynthia Lummis (R-WY):
Intrator. Thank you. Could you elaborate on how current permitting processes have impacted your ability to rapidly deploy AI infrastructure? The more specific you can be, the better.
Michael Intrator:
So, a quick comment on the patchwork, and then I'll dive in. The investment that we're making on the infrastructure side is enormous. And the idea that you could make an investment that could then become trapped in a jurisdiction that has a particular type of regulation that would not allow you to make full use of it is really very, very suboptimal, and it makes the decision-making around infrastructure challenging. As far as the permitting goes, whenever this topic comes up, the discussion around permitting is excruciating, and it's excruciating in terms of the ability to quickly build and to build large. And that is kind of from the data center forward, without even beginning the discussion from the data center back through the energy infrastructure that is necessary to power these large investments at the scale that makes them relevant to moving artificial intelligence forward. Happy to spend more time digging into the details; probably best to do that directly.
Sen. Cynthia Lummis (R-WY):
Okay. And I'll look forward to that conversation, because I'm worried about Wyoming's very clean natural gas being something your industry's concerned about, because President Trump likes natural gas but President Biden didn't. And if you build huge data centers and another president comes along who's anti-gas, that's a concern for you as you're deciding how to deploy capital. Mr. Smith, do you agree?
Brad Smith:
Generally, I do. I mean, I would say we need consistency across administrations in this country. We need to find more opportunities for bipartisan agreement. And I'll just highlight that in Cheyenne, where we've long had a data center complex, we do have backup generators that run on natural gas. So there are a variety of ways for us to put different energy supplies to good use.
Sen. Cynthia Lummis (R-WY):
Are you exploring small modular nuclear?
Brad Smith:
Yes.
Sen. Cynthia Lummis (R-WY):
Including with people in Wyoming?
Brad Smith:
Yeah.
Sen. Cynthia Lummis (R-WY):
Thank you Mr. Altman. I'm pleased to hear you are releasing an open… Oh my time's up. Excuse me. It goes so fast.
Sam Altman:
I'd love to talk to you about it another time. We're very excited about it too.
Sen. Cynthia Lummis (R-WY):
Yeah, thank you. I yield back.
Sen. Ted Cruz (R-TX):
Thank you. Senator Rosen.
Sen. Jacky Rosen (D-NV):
Thank you, Chairman Cruz. I was ready to push the button. And anyway, time does go by very fast. Thank you for having this hearing. I really believe in the promise of AI, it's so exciting, and we have to ask the right questions in order to promote its growth on one hand, how we can explore and create these new possibilities and pathways, and also how do we protect ourselves from bad actors or outcomes, as best as we can know at the time. And Mr. Altman, thank you for spending some time with me yesterday. I look forward to continuing to work with you on this. So I want to start a little bit today with DeepSeek and adversarial AI, because in February I introduced bipartisan legislation with Senator Husted to prohibit the use of DeepSeek on government devices. And earlier this week, Senator Cassidy and I introduced a bill that would expand those prohibitions to include federal contractors. So Mr. Smith, what should our approach be to AI models that are developed in or by adversarial countries like the PRC? Should we be concerned about our adversaries co-opting AI to promote a particular ideology or collect sensitive US data, and how are you combating this threat?
Brad Smith:
Well, I think you can take the DeepSeek example, and it illustrates it well. I think it's just worth thinking about the fact that DeepSeek produced two things: they have a model that is an open-source model, and they have an application, the DeepSeek app. At Microsoft, we don't allow our employees to use the DeepSeek app. We did not put the DeepSeek app in our app store because of the kinds of concerns that you mentioned, namely data going back to China and the app creating the kinds of content that I think people would say were associated with Chinese propaganda. At the same time, because the model itself is an open-source model, it was possible for us to go into it, analyze it, and change the code in the model, which we and other people have the permission to do, to remove the harmful side effects. And so I think we have to always think about the different aspects of the technology. I will say: put security first and then go forward from there.
Sen. Jacky Rosen (D-NV):
Thank you. I think we all know that data is the real power in our current world. He or she or whoever owns the data really can control a lot of what we do. But I want to move on and speak with you, Mr. Altman, about AI and antisemitism a little bit, because earlier this year the ADL released a report showing that several major generative AI models have perpetuated dangerous antisemitic stereotypes and, sadly, conspiracy theories. So Mr. Altman, what steps is industry taking to ensure that AI models don't perpetuate antisemitism? Will you consider collaborating with civil society to create a kind of standard benchmark for AI related to antisemitism, use it as a form of evaluation, and then maybe we could use those for other forms of hate as well?
Sam Altman:
Of course, we do collaborate with civil society on this topic and we are excited to continue to do so. We want our users to have freedom to use models in the way they want, but we also don't want them to be damaging to the fabric of society or particular groups. There will always be some debate and the question of free speech in the context of AI is novel and I think it's different than what we face before. We really do view these as tools for users, but of course we're not here to make horrible anti-Semitic products.
Sen. Jacky Rosen (D-NV):
Thank you. I want to move on. Senator Luján talked about data center energy use and water use, something we're all really concerned about. I want to put on top of that a little bit about data center security, add that to the mix. So last Congress, I actually got a bill passed into law, my bipartisan Federal Data Center Enhancement Act. It establishes cybersecurity and resiliency standards for federal data centers. And so to Mr. Smith, or I'm sorry, Dr. Su, thank you, Dr. Su, I want to ask you a little bit about hardware. Are there ways the hardware, like the chips AMD designs, the new chips that we're hoping to think about... I know from my career as a software developer, we just know things have gotten smarter and faster, and the cooler they can be, the better we can compute. So how can we make our chips cooler? How can we make our data centers and our computing power more secure? And I know interoperability is sometimes a factor, but can you talk about this a little bit?
Dr. Lisa Su:
Sure. Thank you for the question, Senator. Look, I think all of those things are extremely important, as you said. On our part, given the energy efficiency and power constraints that we have, from a chip standpoint our job is to continue to make our chips more and more efficient every year. We've seen a 30 times improvement over the last few years, and we will continue to focus in that area. And then to your comments about security, ensuring that our chips are secure and people are not somehow breaking into them, those are also very high priorities in our overall development cycle for future generations of chips as well.
Sen. Jacky Rosen (D-NV):
Oh, thank you. I look forward to working with all of you again on these important issues. Mr. Chairman.
Sen. Ted Cruz (R-TX):
Thank you. Senator Sullivan.
Sen. Dan Sullivan (R-AK):
Thank you, Mr. Chairman. I want to thank the witnesses for the testimony today. I appreciate the Chairman calling this hearing, and I agree with Senator Cruz's opening statement that this is a matter of national and economic security in terms of our race, however you want to call it, our competition with China. So I know this topic has been pressed, but I want to dig down a little bit deeper. Do you all agree with that? I'm just going to ask some quick questions. This is a huge issue of national security and economic security relative to China, and we as America need to win in that regard. Very important. Everybody's nodding their head. And then, I know this has been touched on, but is the consensus among the witnesses that we are ahead right now, but it's a kind of tentative lead? Very quickly, we'll start with you, Mr. Altman: what's your assessment on that? I know we've already talked about it; I just want to set the context for some of the questions.
Sam Altman:
Yeah, I believe we are leading the world right now. I believe we'll continue to do so. We want to make AI in the United States and we want the whole world to get to benefit from that. I think that is the strongest thing for the United States. I think it's also the right thing to do for all the people of the world, and I really appreciate you all being with us here today because I think we'll need your help and everything you're saying or almost everything you're saying sounds great.
Sen. Dan Sullivan (R-AK):
So as I ask this question, I'll ask if you guys think we're ahead, but then the key thing, when you say we need your help: very succinctly, sometimes we're not so smart up here, what would the key things be that you would need from the US government to help us maintain that lead and dominate this space? Which is what I think we need to do. Mr. Altman, again to you real quick on that.
Sam Altman:
We've talked a little bit about infrastructure, but I think we cannot overstate how important that is, and the ability to have that whole supply chain, or as much of it as possible, in the United States. The previous technological revolutions have also been about infrastructure and the supply chain, but AI is different in terms of the magnitude of resources that we need. So projects like Stargate that we're doing in the US, things like bringing chip manufacturing, certainly chip design, to the US, permitting power quickly, these are critical. If we don't get this right, I don't think anything else we do can help. On the model creation side, we've talked about the need for certainty on our ability to train and to have fair footing with the rest of the world to make sure we can remain competitive. The ability to offer products under a reasonable, fair, light-touch regulatory framework where we can go win in the market, because the products will be so key to the sort of feedback loops that make them better and better, and the ability to deploy them quickly and win at the product level, in addition to the model and infrastructure and data areas, is really quite important.
The ability to bring the most talented people in the world here, the most talented researchers, we have a ton in the United States. There's more out in the world. We should try to get them all here, improving models here. I think those are some of the specifics.
Sen. Dan Sullivan (R-AK):
Good. That's very helpful. Let me ask Mr. Smith two other ones that I want to touch on. I agree fully with Senator Lummis; I'm sure Senator Cruz has the same view. One of our comparative advantages over China, in my view, has to be energy. All-of-the-above energy. Hopefully you've seen that in Alaska we have a very large-scale LNG project that I think we're going to get off the ground here; we've been working on it for a long time. We will have a hundred years' supply of natural gas, so we want you guys all to come up to Alaska with your data centers. We got cold weather, we've got a lot of cold weather, we got gas, we got land, we got water, we got it all.
Sam Altman:
That's very compelling.
Sen. Dan Sullivan (R-AK):
So yeah, come on up when this project's done. A hundred-year gas supply, a little colder than Texas. So, two questions that relate to our comparative advantage, Mr. Smith, and then any others who want to jump in. Energy: do we think that is one? I think it is. And then second, what I think is somewhat of a disadvantage, and it frustrates me, maybe you guys don't see this: we've had American finance companies, venture capital firms, banks, others that, remarkably, with all the opportunities we have in America, are helping fund some of these projects in China. I've been a real staunch opponent of Americans who have opportunities to invest in other places investing in Chinese AI, Chinese quantum, because we all know they're going to use that to help make their military more lethal. I mean, that's what they do. I was reading recently about this Benchmark Capital. I don't know these guys, but they evidently did a $75 million round for an AI company in China. Is that another problem as well? Mr. Smith: the advantage, energy; the problem, American companies financing our competition.
Brad Smith:
I would connect three things: energy, people, and access to capital. The US has huge resources in energy, but never underestimate the ability of China to build a lot of electrical power plants, maybe more and faster than any other country. So we are better off going into that with the mindset that we have to keep up and not take anything for granted. But then I would say the number one comparative advantage of the United States throughout the 50 years that have defined digital technology has been bringing the world's best people to our country and giving them access to venture capital. And we should continue to burnish both of those. And I think you're right to ask where else venture capital is going. I'll just say this: if we can keep bringing the best people to the United States, and if we can keep educating the best people in the United States, I believe the money will be here to enable them to succeed. But let's make sure we're continuing to bring the best people in the world and giving them the opportunity to build great companies here in the United States.
Sen. Dan Sullivan (R-AK):
American venture capital funds Chinese AI. Is that in our national interest?
Brad Smith:
I think there's a really good question about whether it is, and I recognize that you all are quite rightly focused on that. I'll just keep saying bring the people here. They will have access to the money and we will outcompete the world.
Sen. Dan Sullivan (R-AK):
Great. Thank you. Thank you, Mr. Chairman.
Sen. Ted Cruz (R-TX):
Thank you. Senator Markey.
Sen. Ed Markey (D-MA):
Thank you, Mr. Chairman, very much. I'd like to talk about the environmental impact of artificial intelligence. Artificial intelligence can help us combat climate change by improving weather forecasts and enabling us to better predict power supply and demand. But designing, training, and deploying AI models also poses real risks for our environment. The massive data centers that are critical for AI development require substantial amounts of electricity, putting stress on the grid and potentially raising costs for consumers. These data centers also generate significant heat. Cooling them requires huge volumes of water, often in regions already facing droughts because of climate change. And some data centers have onsite backup diesel generators, which can cause respiratory and cardiovascular issues and can increase the risk of cancer for the surrounding community. The truth is, we know too little about both the environmental costs and benefits of AI. Mr. Smith, do you agree that it would be helpful for the government to conduct a comprehensive study on the environmental impact of artificial intelligence generally?
Brad Smith:
Yes. One study was just completed last December, and I think it's worth updating periodically.
Sen. Ed Markey (D-MA):
Do you think it would be helpful for the government to convene stakeholders including from industry and academia to help better measure AI's environmental impact?
Brad Smith:
I think, as with many other things that need to be measured, yes. I think there's a role to be played.
Sen. Ed Markey (D-MA):
Mr. Altman, do you agree that the federal government should help with studying and measuring the environmental impact of AI?
Sam Altman:
I think studying and measuring is usually a good thing. I do think that the conversation about the environmental impact of AI and the relative challenges and benefits has gotten somewhat out of whack. I am hopeful that AI can help; we've been trying to address climate and environmental challenges, unsuccessfully or not successfully enough, for a long time. I think we need help. I think AI can help us do that. We've proposed, or we're in the process of building, a 10-gigawatt facility, and we've got another kind of...
Sen. Ed Markey (D-MA):
My question is, should the federal government, on an ongoing basis, be studying the impact of AI?
Sam Altman:
Sure, and I think you should use AI to help.
Sen. Ed Markey (D-MA):
So that's why, this Congress, I introduced the Artificial Intelligence Environmental Impact Act, to study both the positive and negative consequences of AI as the technology continues to develop, as models become more efficient, and as we build out the infrastructure. We need to do it. Yes, AI may find a cure for cancer, it may, but AI also could help contribute to a climate disaster; that's also equally true. So we need to keep both of those things right on the table, especially as the Trump administration is ignoring the fact that last year 94% of all new installed electrical generation capacity in the United States was wind, solar, and battery, and Trump has said he's going to destroy all incentives for the continuation of that. That's something you have to weigh in on; make sure he does not do that. So I look forward to working with you on that. Now I want to turn to AI's impact on disadvantaged communities. After all, we're not just talking about using artificial intelligence to write emails or plan grocery lists. We're talking about technology used to calculate a family's mortgage, screen an individual's job application, and determine a senior's medical care. When used in these situations, it is absolutely essential that AI-powered algorithms are free from bias and discrimination. So let's start with a simple question, Mr. Smith. Can algorithms be biased and cause discrimination?
Brad Smith:
They can, which is why we test to avoid that outcome.
Sen. Ed Markey (D-MA):
Okay, same question, Mr. Altman. Can algorithms be biased and cause discrimination? Of course, of course. Of course. Mr. Altman, does OpenAI work to guard against such bias and discrimination in ChatGPT? Of course. Of course. So I'm glad to hear that, because you recently stated that the government should not implement privacy regulations on AI, but instead respond very quickly as the problems emerge. And I am very deeply worried about that approach. We don't need to wait and see. Poorly tested and trained algorithms will harm marginalized communities. Artificial intelligence is already supercharging the bias and discrimination prevalent in our society. Biased and discriminatory algorithms mean Black and brown families are less likely to obtain a mortgage. It means people with disabilities are less likely to be recommended for a job opening, and it means women are less likely to receive scholarships for higher education.
These are real harms that are happening right now. It is Congress's job to address these existing problems that come with the rapid development and deployment of AI, and it's why I'm the proud author of the AI Civil Rights Act, which would ensure that companies review and eliminate bias and discrimination in their algorithms before developing and deploying them. It has to happen simultaneously, and it will hold companies accountable when their algorithms cause harms against marginalized populations. I will be fighting to ensure AI does not stand for accelerating inequality in our nation. All of the protections we have in the real world should be moved to the virtual world, because the same discrimination against women, Black and brown communities, people with disabilities, and the LGBTQ community is going to move online, and we have to build in the protections against that bias right upfront, because otherwise those same discriminatory practices will just migrate immediately, and the responsibility of the industry will be to work with Congress to make sure we put those protections on the books. Thank you, Mr. Chairman.
Sen. Ted Cruz (R-TX):
Thank you. Senator Peters.
Sen. Gary Peters (D-MI):
Thank you, Mr. Chairman, and thanks to all of our witnesses. Thank you for being here. It's an incredibly important topic, and we appreciate your expertise as we're looking at making sure that the United States is the world leader in AI. Certainly we've been talking about supply chains and infrastructure and all of those aspects, but one area that I want to particularly focus on is workforce and people, to make sure that we have the talent there. That's why I authored the AI Scholarship for Service Act and the AI Training Act. Both of those were signed into law in 2022. Earlier this year, I introduced my AI and Critical Technology Workforce Framework Act to continue the effort along those lines, and I'd love to work with each of you as we look at other legislation necessary to make sure we've got the workforce trained to take advantage of this amazing technology.
I do want to do a shout-out to the University of Michigan, which actually became the first university in the world to provide generative AI tools for their entire student body to prepare them for the workforce of tomorrow. So I want to talk a little bit about the workforce. Mr. Altman, when we met last year in my office and had a great conversation, you said that upwards of 70% of jobs could be eliminated by AI, and you acknowledged the possible social disruption of this. If that's happening, we have to prepare for it. We're not going to stand in the way of the incredible opportunities here, but if this is indeed going to occur, we've got to be thinking pretty deeply about how that will be managed and make sure that everybody can benefit from AI, not just a select few. So talk to me about how you believe leaders in your industry can help mitigate job losses or deal with what could be, as you described it last year, a major social disruption.
Sam Altman:
The thing that I think is different this time than previous technological revolutions is the potential speed. Technological revolutions have impacted jobs and the economy for a long time. Some jobs go away, some new jobs get created. Many jobs just get more efficient and people are able to do more and earn more money and create more. And that's great. Over some period of time, society can adapt to a huge amount of job change. And you can look at the last couple of centuries and see how much that's happened. I don't know, I don't think anyone knows exactly how fast this is going to go, but it feels like it could be pretty fast.
The most important thing, or one of the most important things, I think we can do is to put tools in the hands of people early. We have a principle that we call iterative deployment. We want people to be getting used to this technology as it's developed. We've been doing this now for almost five years, since our first product launch, as society and this technology co-evolve. Putting great, capable tools in the hands of a lot of people and letting them figure out the new things that they're going to do and create for each other, and provide value back to the world on top of this new building block and the sort of scaffolding of society, that is, I think, the best thing we can do as OpenAI and as our industry to help smooth this transition.
Sen. Gary Peters (D-MI):
The idea is we want to get to the point where AI isn't displacing work but actually enhancing work, so that people are more productive and doing things that we probably can't even imagine. If we look a hundred years ago, we have jobs today that no one dreamed of.
Sam Altman:
And I don't think we can imagine the jobs on the other side of this. But even if you look today at what's happening with programming, which I'll pick because it's sort of my background and near and dear to my heart: what it means to be a programmer, and an effective programmer, in May of 2025 is very different than what it meant the last time I was here in May of 2023. These tools have really changed what a programmer is capable of, and the amount of code and software that the world is going to get. And it's not like people don't hire software engineers anymore. They work in a different way, and they're way more productive.
Sen. Gary Peters (D-MI):
Right. Dr. Su, we certainly talk a lot about open-source AI, but most of the conversation has been about software. However, making technology open and able to work together matters at every level, as you know, from chips to the power of the devices to the servers that are running behind the scenes. So a question for you: what are the benefits of open standards and system interoperability at the hardware level, not the software level, and what are the implications for innovation, national security, as well as resilience in the supply chain?
Dr. Lisa Su:
Yeah, thank you for the question, Senator. I think there are an incredible number of advantages to having an open ecosystem at the hardware and the software and the application level. The idea is there's no one organization or one group that has all the good ideas. And so enabling the ecosystem to work together, so that you can choose the best solution at every level, and then also optimization across a broad set of constituents, is a good thing. I think it's also very good from a security standpoint to ensure that, again, there are many choices so that we're not dependent on a single ecosystem. So we continue to be very forward-thinking in open standards as well as open ecosystems.
Sen. Gary Peters (D-MI):
So your model is an open model, and I understand Nvidia's is a closed model. What are the advantages and disadvantages? What should we be thinking about?
Dr. Lisa Su:
I think the major advantage in an open model, and that is something that we very much support, is the idea that we can have innovation come from many different parties and whether that is hardware innovation, so on the different chips or that is system innovation on putting all of these things together and our goal is to make sure that we always have the best of the best and there are many different ways, many different parties that can contribute to that and that's why we are very forward-leaning in terms of open ecosystems.
Sen. Gary Peters (D-MI):
Great. Thank you. Thank you, Mr. Chairman.
Sen. Ted Cruz (R-TX):
Thank you. Senator Fetterman.
Sen. John Fetterman (D-PA):
Thank you, Mr. Chairman. Hello, Mr. Smith. I'm a big supporter of energy. For me, energy security is national security, and of course renewables are part of that, but of course other things as well: fossil, but also nuclear. Of course nuclear is important, and then there's the energy transition. My focus is also that I want to make sure that ratepayers in Pennsylvania really aren't hit too hard throughout all of this. Now, the Washington Post reported that increasing electricity demand from data centers is going to raise residential power bills by perhaps as much as 20%. Now that's really a concern for me and certainly for Pennsylvania families. Now, data centers bring important jobs during construction, and that's a great thing of course, but those jobs aren't, I guess, long-term, while those rates might last a lot longer.
And I've been tracking the plan to reopen TMI. I mean, I have my own personal story: I had to grab my hamster and evacuate during the meltdown in 1979. You might assume that I was anti-nuclear, and that is not it. I actually am very supportive of nuclear because that's an important part of the stack. If you really want to address climate change, you can't turn your back on nuclear, in my opinion. And I know that's going to power Microsoft's data center, and I really appreciate that. But what I'm asking now is, are you able to commit that the power purchase agreement is not going to raise electricity prices for Pennsylvania families?
Brad Smith:
No, I think you raise a critical point. We have two principles that we follow when we're constructing these data centers. Number one, we will invest to bring onto the grid an amount of electricity that equals the amount of electricity that we will use, so that we're not tapping a constricted supply. Number two, we will manage all of this in a way that ensures that our activity does not raise the price of electricity to the community. And so I was describing earlier how, if there are improvements that need to be made to the grid, as there often are, we'll go to the utility commission and we will propose a change in the rate that we are charged so that we can pay for that improvement. I just think it's a fact of life, because I think you highlight something critical. There are a lot of jobs when the construction takes place. There are jobs afterwards, but not as many. One will wear out the welcome quickly if we, in effect, tax the neighborhood by asking everyone to pay more for their electricity because we have arrived. We get it. We know we have to be a good and responsible member of the neighborhood.
Sen. John Fetterman (D-PA):
Thank you. Well, now, one of the perks of being a senator, for me anyway, is I get an opportunity to meet people that have much more impressive kinds of jobs or careers than I've led. And now, Mr. Altman, I'm going to count this as a highlight. I know the work that you've done, and you're really one of the people that are moving AI, and now it's an opportunity. I was excited to meet you, and people ask me, if you're going to talk about AI... and now I get to ask you, I mean, the literal expert. Some people are worried about AI or whatever, and I'm like, what about the singularity? For the people like that, if you would address that, please.
Sam Altman:
Thank you, Senator, for the kind words and for normalizing hoodies in more spaces. I love to see that. I am incredibly excited about the rate of progress, but I also am cautious, and I would say, I don't know, I feel small next to it or something. I think this is beyond something that we all yet fully understand where it's going to go. This is, I believe, among the biggest, maybe it will turn out to be the biggest, technological revolutions humanity will have ever produced. And I feel privileged to be here. I feel curious and interested in what's going to happen, but I do think things are going to change quite substantially. I think humans have a wonderful ability to adapt, and things that seem amazing will become the new normal very quickly. We'll figure out how to use these tools to just do things we could never do before, and I think it will be quite extraordinary. But these are going to be tools that are capable of things that we can't quite wrap our heads around. As these tools start helping us to create the next future iterations, some people call that the singularity, some people call that the takeoff. Whatever it is, it feels like a sort of new era of human history, and I think it's tremendously exciting that we get to live through that, and we can make it a wonderful thing, but we've got to approach it with humility and some caution.
Sen. John Fetterman (D-PA):
For me, it's been that I get a chance to ask questions of a lot of Edisons as well. The kinds of things that you are all collectively involved in are going to transform our society, and people will look back 50, 60 years from now and see what's happened. So with that, back over to the Chairman. Thank you.
Sen. Ted Cruz (R-TX):
Thank you, Senator Fetterman. Senator Klobuchar.
Sen. Amy Klobuchar (D-MN):
Thank you. Good thought, Senator Fetterman. Thank you. So you guys have been sitting here so long that the Pope has been chosen. Wow. We don't know who.
Sen. Ted Cruz (R-TX):
Congratulations Amy.
Sen. Amy Klobuchar (D-MN):
The white smoke has come up.
Sen. Ted Cruz (R-TX):
Congratulations.
Sen. Amy Klobuchar (D-MN):
You're welcome. In any case, I left for some other things and came back, and I had one more question that I wanted to ask. It's related to the whole deepfake issue, because Senator Blackburn, Senator Coons, Senator Tillis, and I have worked on this really hard, and Blackburn and Coons are in the lead on the bill. We have recently seen deepfake videos of Al Roker promoting a cure for high blood pressure, a deepfake of Brad Pitt asking for money from a hospital bed. Sony Music has worked with platforms to remove more than 75,000 songs with unauthorized deepfakes, including voices of Harry Styles and Beyoncé. And it's not just famous people. There is a Grammy-nominated artist from Minnesota I recently met, and I talked to him about what's going on with digital replicas. So there's a real concern, and it kind of gets at what Senator Schatz and I were talking about earlier with the news bill, but I just wanted to make you all aware of this legislation.
There were some differences on this, and now we have gotten a coalition supporting it, including YouTube, as well as the Recording Industry Association, the Motion Picture Association, and SAG-AFTRA. So it's a big deal, and I'm hoping it's something that you'll all look at. But could you just comment, and I would go to you first, Mr. Smith, about protecting people from having their likenesses replicated through AI without permission. And even if you all pledge to do it, our obvious concern is that there will maybe be other companies that wouldn't. And that's why I think, as we look at what these guardrails are, the protection of people's digital rights should be part of this. Mr. Smith?
Brad Smith:
Yeah, no, I think you're right to point to it. It has become a growing area of concern. During the presidential election last year, both campaigns, both political parties, were concerned about the potential for deepfakes to be created. We worked with both campaigns in both parties to address that. We see it being used in ways that I would really call abusive, including of celebrities and the like. I think it starts with an ability to identify when something has been created by AI and is not a genuine, say, photographic or video image. And we do find that AI is much more capable at doing that than, say, the human eye and human judgment. I think it's right that there be certain guardrails, and some of these we can apply voluntarily. We've been doing that across the industry; OpenAI and Microsoft were both part of that last year. And there are certain uses that probably should be considered across the line and therefore should be unlawful. And I think that's where the kinds of initiatives that you're describing have a particularly important role to play.
Sen. Amy Klobuchar (D-MN):
And could you look at that legislation?
Brad Smith:
Absolutely.
Sen. Amy Klobuchar (D-MN):
Appreciate it. Mr. Altman, just the same question, same thing.
Sam Altman:
Sorry. Of course we'd be happy to look at the legislation. I think this is a big issue, and it's one coming quickly. I think there are a few areas to attack it: AI that generates content, platforms that distribute it, how takedowns work, how we educate society, and how we build in robustness to expect this is going to happen. I do not believe it will be possible to stop the generation of the content. I think open-source, open-weight models are a great thing on the whole and something we need to pursue, but it does mean that people will be able to do this. On the mass distribution side, I think it's possible to put some more guardrails in place, and that seems important, but I don't want to neglect the sort of societal education piece. I think with every new technology there's almost always some sort of new scams that come.
The sooner we can get people to understand these, be on the lookout for them, talk about this as a thing that's coming, the better. And I think that's happening. I think people are very quickly understanding that content can be AI-generated and building new kinds of defenses in their own minds about it. But still, if you get a call and it sounds exactly like someone and they're panicked and they need help, or if you see a video like the videos you talked about, this gets at us in a very deep psychological way, and I think we need to build societal resilience because this is coming.
Sen. Amy Klobuchar (D-MN):
It's coming, but there's got to be some ways to protect people. We should do everything we can to protect privacy.
Sam Altman:
For sure.
Sen. Amy Klobuchar (D-MN):
And you've got to have some way to either enforce it, damages, whatever, or there's just not going to be any consequences.
Sam Altman:
Absolutely. We should have all of that. Bad actors still don't always follow the laws, and so I think we need additional shields wherever we can have them. But yes, we should absolutely protect—
Sen. Amy Klobuchar (D-MN):
That. All right, I look forward to working with you on it. Thank you.
Sen. Ted Cruz (R-TX):
So I have to say, Senator Klobuchar's question about fakes and AI fakes made me feel guilty, because I did in fact tweet out an AI-generated picture of Senator Fetterman as the Pope of Greenland. So I am guilty of doing so. Although it may not be a fake, it may be a real thing.
Sen. Amy Klobuchar (D-MN):
Okay. Oh, whoa. Parody is allowed under the law; parody is allowed. That is different than what I'm talking about. But Senator Fetterman should respond.
Sen. Ted Cruz (R-TX):
Yeah, it may be. It's a good shot actually. All right, I have a few more questions and then we will wrap up. Mr. Altman, what has been the most surprising use for ChatGPT you've seen? What are applications that you're seeing that are surprising?
Sam Altman:
People message ChatGPT billions of times per day, so they use it for all sorts of incredibly creative things. I will tell one personal story, which I mentioned earlier: I recently had a newborn. Clearly people did it, but I don't know how people figured out how to take care of newborns without ChatGPT. That has been a real lifesaver.
Sen. Ted Cruz (R-TX):
So I will tell you a story that I've told you before. My teenage daughter several months ago sent me this long, detailed text, and it was emotional and it was really well-written, and I actually commented, I'm like, wow, this is really well-written. She said, oh, I used ChatGPT to write it. I'm like, wait, you're texting your dad? It is something about the new generation, that it is so seamlessly integrated into life that whether she's sending an email or doing whatever, she doesn't even hesitate to think about going to ChatGPT to capture her thoughts.
Sam Altman:
I have complicated feelings about that.
Sen. Ted Cruz (R-TX):
Well, use the app and then tell me what your thoughts are. Okay, Google just revealed that their search traffic on Safari declined for the first time ever. They didn't send me a Christmas card. Will ChatGPT replace Google as the primary search engine? And if so, when?
Sam Altman:
Probably not. I mean, I think some use cases that people use search engines for today are definitely better done on a service like ChatGPT, but Google is like a ferocious competitor. They have a very strong AI team, a lot of infrastructure, a very well-protected business, and they're making great progress putting AI into their search.
Sen. Ted Cruz (R-TX):
Alright, so a question. I have spent a lot of time talking to business leaders, CEOs in the tech space, in AI, and one question that I've asked that I get different answers on, and I'm curious what the four of you say: how big a deal was DeepSeek? Is it a major, seismic, shocking development from China? Is it not that big a deal? Is it somewhere in between? And what's coming next? Let's hear from each of the four of you.
Sam Altman:
Not a huge deal. There are two things about DeepSeek. One is that they made a good open-source model, and the other is that they made a consumer app that for the first time briefly surpassed ChatGPT as the most downloaded AI tool, maybe the most downloaded app overall. There are going to be a lot of good open-source models, and clearly there are incredibly talented people working at DeepSeek doing great research, so I'd expect more great models to come. Hopefully also us and some of our colleagues will put out great models too. On the consumer app, I think if the DeepSeek consumer app looked like it was going to beat ChatGPT and our American colleagues' apps as the default AI systems that people use, that would be bad. But that does not currently look to us like what's happening.
Dr. Lisa Su:
I would say it's somewhere in between, Chairman Cruz. When you think about what we learned, what we learned is there are different ways of doing things. We have lots of incredibly innovative people in the United States; American models are clearly the best by far. However, when you have constraints that are placed, there are other ways of doing things, and I think we learned a few things in the process. I think the open-source nature of DeepSeek was one of the things that probably was most impactful, just in terms of how much can be in an open-source type of model and an open ecosystem. But clearly the United States is leading, and we need to continue, as we said, to accelerate innovation and adoption, as you started this hearing with.
Michael Intrator:
I think DeepSeek did a lot of things. One of the things that it did was it sort of raised the specter of China's AI capability to a much broader audience than was perhaps focused on it prior to that, right? And so you saw that kind of reverberate through the financial markets. You saw broad-based reaction, and suddenly everyone knows what DeepSeek is, and the fact that China is not theoretically in the race for AI dominance but actually is very much a formidable competitor. And so it was a starting gun in some ways for the broader population and maybe the broader consciousness of the fact that this is not a fait accompli, and that we're going to have to work as America together to kind of propel our solutions forward. And so I think that was one of the lasting impacts that we will see from that.
Brad Smith:
I would say, like Lisa, that it's somewhere in between. It wasn't shocking. I mean, it was one of a number of startups that we were following in China that we saw as having the potential to be innovative in this space. I do think there's a really interesting and important point that constraints encourage innovation in other ways, and I just think one of the interesting facts about DeepSeek is that of their, say, 200 or more employees, that was their size when they released these models, almost all of their employees by design were four years or less out of university. They wanted to hire people that would not bring to their work traditional ways of doing things.
Sen. Ted Cruz (R-TX):
So the kids are taking over the world too.
Brad Smith:
Every generation.
Sen. Ted Cruz (R-TX):
Related to that. Were you finished with that, Mr. Smith? Related to that, we talked at the outset about the AI diffusion rule being rescinded, which I'm glad about. I think it was a bad rule. I think it was overly complex. I think it put unfair restrictions on a number of our trading partners, and so I'm glad that the president is rescinding it. That doesn't necessarily mean there should be no restrictions, and there are a variety of views on what the rules should be concerning AI diffusion. Nvidia has argued that we want American chips everywhere, even in China. Others have argued that we want to restrict at least the most advanced processors. I'm curious, each of the four of you: what do you think the rules should be, if anything is to replace the AI diffusion rule? Mr. Altman, we'll start with you.
Sam Altman:
I also was glad to see that rescinded. I agree there will need to be some constraints, but I think if the sort of mental model is winning diffusion instead of stopping diffusion, that directionally seems right. That doesn't mean there's no guardrails. It doesn't mean we say we're going to go build a bigger data center in some other country than the US. Our intention is to build our biggest and best data centers in the US, do training in the US, build the models here, have our core research here, but then we do want to build inference centers with our partners around the world. And we've been working with the US government on that. I think that'll be good. To this point, influence comes from people adopting US products and services up and down the stack, maybe most obviously if they're using ChatGPT versus DeepSeek, but also if they're using US chips and US data center technology and all of the amazing stuff Microsoft does. That's a win for us, and I think we should embrace that, but make sure that the most critical stuff, the creation of these models that will be so impactful, should still happen here.
Sen. Ted Cruz (R-TX):
Dr. Su?
Dr. Lisa Su:
I think we would totally agree with the concept that some restrictions are necessary. This is a matter of national security as much as it is about AI diffusion. That being the case, we were happy to see the rescinding as well, and we view this as an opportunity to really simplify, right? At the end of the day, we've talked about the need to drive widespread adoption of our technology and our ecosystem. Simple rules that can be easily applied, that really allow our allies to protect our technology while still utilizing the best that the United States has to offer, I think is a good start in terms of where we're going. And again, this is an area where I think the devil's in the details, and it requires a lot of balance. And so from an industry standpoint, it's our job to put on the broader hat and work hand in hand with the administration and Congress to make our best recommendations, so that it is a policy that has some stability as we go forward as well.
Sen. Ted Cruz (R-TX):
Mr. Intrator.
Michael Intrator:
So I'll echo what Sam and Lisa said, but national security is paramount. And then once you've addressed the limitations around national security, the opportunity to work with regulators to put together a regulatory framework beyond that makes a lot of sense. And the diffusion rule didn't allow us that opportunity to participate fully enough to feel like we're going to come away with what would be an optimal outcome at this point.
Sen. Ted Cruz (R-TX):
Mr. Smith.
Brad Smith:
I think we've all discussed the right recipe: simplify, eliminate these tier-two quantitative restrictions that undermine confidence in access to American technology, but enable even the most advanced GPUs the country has to be exported to data centers that are run by a trusted provider and that meet certain security standards. That means both physical and cybersecurity standards, that there is protection against diversion of the chips, and that there are precautions against certain uses. And that means two things. One is that there are controls in place to ensure that, say, the PLA, the Chinese military, isn't accessing and using these advanced models or advanced chips in a data center, regardless of the country that it's in. And there are certain harmful uses that one should want to prohibit and preclude, like using a model to create the next pandemic, a biological weapon, a nuclear weapon. And I think that there is an approach that is coming together that can be retained and can move forward, and that strikes the right balance.
Sen. Ted Cruz (R-TX):
Okay. Final question for each of you. Would you support a 10 year learning period on states issuing comprehensive AI regulation or some form of federal preemption to create an even playing field for AI developers and deployers?
Sam Altman:
I'm not sure what a 10 year learning period means, but I think having one federal approach focused on light touch and even playing field sounds great to me.
Dr. Lisa Su:
An aligned federal approach with really thoughtful regulation would be very, very much appreciated.
Michael Intrator:
I agree with both of my colleagues.
Brad Smith:
Yeah, I think that builds, obviously, on the op-ed that you and Senator Graham published last year, and on giving the country time; your analogy, your example, was how this worked for the internet. There's a lot of details that need to be hammered out, but giving the federal government the ability to lead, especially in the areas around product safety and pre-release reviews and the like, would help this industry grow.
Sen. Ted Cruz (R-TX):
Well, I want to thank each of the witnesses. This was a very interesting hearing. It was informative. These issues matter. You saw a great deal of interest on both sides of the aisle in this topic. And so I appreciate each of you are very busy and doing a lot of things, and I appreciate your being here today. Senators will have until the close of business on Thursday, May 15th to submit questions for the record and the witnesses will have until the end of the day on Thursday, May 29th to respond to those questions. And with that, that concludes today's hearing. The committee stands adjourned.