Transcript: US House Subcommittee Hosts Hearing on "AI Regulation and the Future of US Leadership"
Justin Hendrix / May 21, 2025
The Rayburn House Office Building in Washington, DC.
On Wednesday, May 21, 2025, the United States House Energy & Commerce Subcommittee on Commerce, Manufacturing, and Trade hosted a hearing titled "AI Regulation and the Future of US Leadership." Witnesses included AI Now Institute co-executive director Amba Kak, R Street Institute senior fellow Adam Thierer, US Chamber of Commerce senior vice president Sean Heather, and General Catalyst managing director Marc Bhargava.
Members and witnesses sounded many familiar notes. Republicans on the committee bashed Europe's AI Act and emphasized their desire for the United States to stay ahead of China on AI development. Democrats called for privacy and consumer protections and pointed out that a failure to police Big Tech firms has led to demonstrable harms with little accountability.
Most notably, Republicans and Democrats sparred over a proposed 10-year moratorium on the enforcement of state AI laws that Republicans passed out of the Energy & Commerce Committee as part of a budget reconciliation package last week. Supporters of the moratorium said it would stop a confusing patchwork of state laws and give Congress space to craft one national rulebook; opponents called it a dangerous giveaway that would leave consumers (and especially kids) unprotected.
Speaker | Quote |
---|---|
Rep. Jan Schakowsky (D-IL) | "I think it is absolutely something we should say zero time. Zero time as we watch AI develop, not 10 years. We haven't done that for anything else, and for us to do that now makes absolutely no sense and puts all consumers at risk." |
Rep. Lori Trahan (D-MA) | "But what Republican members of this committee did find time to do last week, in the middle of the night by the way, is force through an unprecedented giveaway to the tech industry, a 10-year ban on state laws that could make AI safer for our constituents. Make no mistake, the families who have come to this committee and begged for us to act won't benefit from this proposal, but you know who will? The Big Tech CEOs who are sitting behind Donald Trump at his inauguration." |
Rep. Kim Schrier (D-WA) (for Ranking Member Pallone) | "Unfortunately, in a giant gift to Big Tech, Committee Republicans supported a provision in their terrible tax bill last week that imposes a 10-year ban on any state's ability to enforce their own laws protecting consumers from the harms caused by AI." |
Rep. Kathy Castor (D-FL) | "The problem is that you're putting the cart before the horse. You've now passed out of this committee a 10-year moratorium on all AI regulation at the state level before you even have that framework." |
Rep. Jay Obernolte (R-CA) | "I know there's been pushback about the 10 years that it's too long, that it's draconian. No one wants this to be 10 years, right? I would love to see this be months, not years, but I think it's important to send the message that everyone needs to be motivated to come to the table here." |
Rep. Russ Fulcher (R-ID) | “A patchwork of various state laws is not good for innovation, for business or consumers, and that is what we’re trying to avoid.” |
Rep. Yvette Clarke (D-NY) | "State laws exist to protect consumers. And now Republicans want to prevent states from issuing these protections on any product or practice or system that uses artificial intelligence. Just last week, they slipped a few sentences into their massive tax bill that placed a 10-year ban on the enforcement of any state AI law." |
Rep. Gabe Evans (R-CO) | "...my governor, Jared Polis, who signed Colorado's AI law and then came out in support of the federal moratorium that's being discussed here today. Among other things, I voted against the law when I was in the state legislature because I agreed with those statements and I saw this as dampening the ability to innovate and bring jobs to Colorado and fostering that patchwork across the country." |
Rep. John Joyce (R-PA) | "We need a federal approach that ensures consumers are protected when AI tools are misused and in a way that allows innovators to thrive. We must not make the same mistake that the EU has made with the EU AI Act, effectively choosing not to allow industry and AI to grow. A patchwork of conflicting and burdensome AI rules will have direct impacts on the federal government as well." |
Adam Thierer (R Street Institute) | "Costly, contradictory regulation is a surefire recipe for destroying a technological revolution and decimating little tech innovators. An AI moratorium offers a smart way to address this problem by granting innovators some breathing space and helping ensure a robust national AI marketplace develops. Congress has used moratoria before to protect interstate commerce and promote innovation." |
Amba Kak (AI Now Institute) | "This is an industry that has fooled us once, and we cannot let them fool us again with AI in this environment. The proposal for a sweeping moratorium on state AI-related legislation really flies in the face of common sense. We can't be treating the industry's worst players with kid gloves while leaving everyday people, workers and children exposed to egregious forms of harm." |
Sean Heather (US Chamber of Commerce) | "My message today is, one, we should not be like Europe. Two, we should stop international patchworks and domestic patchworks in AI regulation. Three, we should not be in a rush to regulate. We need to get it right, and therefore taking a timeout to discuss it at a federal level is important. We would support a moratorium." |
Marc Bhargava (General Catalyst) | "We believe a national regulatory framework is preferable to a patchwork of state policies at General Catalyst." |
What follows is a lightly edited transcript of the hearing. Check the hearing video before quoting.
Rep. Gus Bilirakis (R-FL):
Good morning, everyone. Before we get started, I think we should have a moment of silent prayer. One moment of silent prayer for a good friend, Gerry Connolly, who passed away this morning.
Thank you. May his memory be eternal. The committee will now come to order. Thanks to everyone, and especially our witnesses, for joining us for today's hearing on AI regulation and the future of US leadership. At the outset, I want to recognize Ranking Member Schakowsky, as this is our first subcommittee hearing since she announced her retirement. We're going to miss you. She's been a welcome partner over the last four and a half years. Together, we were able to secure better safety precautions for women with the Fair Crash Test Act. During the pandemic, we worked tirelessly to support the travel and tourism industry at a time of unprecedented challenges. This bond culminated in the TICKET Act and much more, which strengthens consumer protections in the ticketing marketplace. Congress and E&C, Energy and Commerce, of course, won't be the same without Ranking Member Schakowsky, but her legacy will be long remembered.
So we appreciate you so very much. Since the public release of ChatGPT, AI has become a household name. AI products and services are being developed at breakneck speed, delivering new innovations to consumers. These technologies can revolutionize the economy, drive economic growth, and improve our way of life. Like every technology, however, AI can be weaponized when it is in the wrong hands. As you know, thankfully, AI is already regulated by longstanding laws that protect consumers. Because of the great potential of these technologies, Congress must be careful when we impose additional obligations on AI developers and deployers. Our task is to protect our citizens and ensure that we don't cede US AI leadership. Much of the AI marketplace is comprised of small startups looking to get a foothold in this revolutionary space, and heavy-handed regulations may ensure that the next great American company never makes it.
If we fail in this task, we risk ceding American leadership in AI to China, which is close on our heels. As you know, other economies are also eager to write the global AI rulebook, often to their own detriment and the detriment of American leadership. The EU recently enacted its own AI Act, which is still being implemented. The EU's complex law suffers from many of the innovation-chilling effects we saw with the GDPR. We must also keep a close watch on whether Europe uses the AI Act and other regulations to unfairly target American companies. We're here today to determine how Congress can support the growth of an industry that is key for American competitiveness and jobs without losing the race to write the global AI rulebook. Our witnesses today will help us understand how we achieve that dream. So again, I want to thank the witnesses for being here, and I look forward to your testimony. Now, I'll yield five minutes to the Ranking Member, my good friend, Ms. Schakowsky. We are going to miss you, so thank you very much for your service to our country, and you've got much more left. So we appreciate it and I look forward to the good work together. I yield to the gentle lady.
Rep. Jan Schakowsky (D-IL):
Well, thank you, Mr. Chairman. Gus Bilirakis, I am so happy to hear your words. I support you. I am so grateful to you. I will be here about another year and a half, so don't get too comfortable, and I appreciate it. Today we are discussing what is happening with how we're protecting consumers. That's what I'm always, always thinking about, and the very idea that we are going to allow 10 years for artificial intelligence, for all kinds of scams that could be happening. I mean, that's just insane, and I can't understand why we would do that. Our job right now is to protect consumers, and we know that even now people are getting scammed by issues that come out, and we want to make sure that we have that kind of protection. And so I think it is absolutely something we should say zero time. Zero time as we watch AI develop, not 10 years. We haven't done that for anything else, and for us to do that now makes absolutely no sense and puts all consumers at risk. And so I want to strongly be against this idea, which I think is reckless. How could we possibly do 10 years? And so I'm going to yield now to Ms. Trahan for the remainder of my time.
Rep. Lori Trahan (D-MA):
Well, I thank the Ranking Member for yielding. Under Republican leadership, this committee has failed time and time again to protect Americans' privacy and safeguard our children online. GOP leaders have blocked whistleblower protections for tech workers who risk their livelihoods to shine a light on their employers' privacy abuses. They killed comprehensive privacy legislation to minimize data collection and ensure proper use. They said no to simple transparency legislation, so independent auditors could make sure Big Tech companies aren't breaking the law. But what Republican members of this committee did find time to do last week, in the middle of the night by the way, is force through an unprecedented giveaway to the tech industry, a 10-year ban on state laws that could make AI safer for our constituents. Make no mistake, the families who have come to this committee and begged for us to act won't benefit from this proposal, but you know who will? The Big Tech CEOs who are sitting behind Donald Trump at his inauguration.
Now we can agree that a patchwork of various state laws is not good for innovation, for business or consumers, but this is a bad policy because it sets another disincentive for us to act urgently or even in time. All the while, Republicans are once again ceding Congress's duty to protect Americans' privacy to the very companies who are perpetrating the worst abuses online. You're basically inviting the fox into the henhouse, and you're doing so under the justification that this will somehow motivate Congress to unify the patchwork of state laws currently in existence. But that hasn't happened yet. Just look at what happened to the privacy bill that we crafted together on this committee. The moment Big Tech started lobbying against it, the Republican speaker and majority leader caved. They killed the bill, and now you turn around and try to deceive the American people into accepting this ridiculous alternative.
Come on. Our constituents aren't stupid. They expect real action from us to rein in the abuses of tech companies not to give them blanket immunity to abuse our most sensitive data even more. At the same time, our Republican colleagues are complaining about Europe's tech laws, which we can acknowledge are imperfect, but at least they had the guts to do something, literally anything to make the internet better for the folks they represent. Shame on us if we don't answer the same demands from the American people. I urge my colleagues to reject this giveaway to the same Big Tech companies that stymied every attempt at updating our privacy laws. I want to urge my colleagues to vote no on the partisan reconciliation bill when the same leaders who killed our bipartisan privacy legislation bring it to the floor and let's just get to work in a bipartisan way to foster innovation and protect our constituents with sensible guardrails on Big Tech. Thank you. I yield back.
Rep. Gus Bilirakis (R-FL):
Gentle lady yields back. I now recognize the Vice Chairman of the full committee, who's standing in for Chairman Guthrie. Dr. Joyce, you're recognized for five minutes for your statement.
Rep. John Joyce (R-PA):
Thank you, Mr. Chairman. Good morning, and thank you to the witnesses for joining us today. We recognize that we are living through dynamic times, and the advancement of artificial intelligence technologies is an important part of that, and it is equally important that we in Congress take the best approach to support innovation and address issues that arise. AI does have enormous potential. It has potential to transform our everyday lives and how we work, from revolutionizing drug research and development to fortifying our energy grid. AI has the potential to drive major breakthroughs across virtually every sector of our society, but that future is not guaranteed. Today we are at a crossroads. The decisions that we make on AI regulation are critical and will determine whether we allow innovation to flourish or risk falling behind on the global stage. Today, American innovators are experiencing significant regulatory headwinds at home and abroad that could jeopardize our global leadership.
Across the Atlantic, the European Union has enacted its AI Act, which imposes a sweeping top-down regulatory regime. The EU AI Act is overly complex and restrictive, creating a one-size-fits-all framework that does not account for the diverse and rapidly evolving nature of AI technologies. It is a stark example of regulation going too far and stifling innovation by imposing heavy burdens on businesses through layers of new bureaucracy that slow down progress, particularly for startups and small businesses. Here in the US, we see similar challenges unfolding with the patchwork of state AI laws rapidly taking shape. Just since January, there have been over 1,000 AI bills introduced across the United States. These measures vary widely in their definitions, in their requirements, their enforcement mechanisms, and in their scope. This emerging patchwork of regulations is creating confusion and inconsistency. Small businesses and startups navigating 50 different sets of rules will have a harder time competing with larger, well-established companies that can afford to navigate this regulatory maze.
Innovation depends on the ability of small upstarts to compete with the established players. That is why this committee is focused on creating a national framework to provide clarity and consistency without stifling growth. We need a federal approach that ensures consumers are protected when AI tools are misused and in a way that allows innovators to thrive. We must not make the same mistake that the EU has made with the EU AI Act, effectively choosing not to allow industry and AI to grow. A patchwork of conflicting and burdensome AI rules will have direct impacts on the federal government as well. The Department of Commerce, like a great many federal agencies, must adopt AI if it is going to operate effectively in the 21st century. Yet if America's AI innovators are held back by a state patchwork, these AI tools might simply never be built, or they'll be offered at a higher price to the taxpayer.
To be clear, I am not advocating for a free-for-all, wild west type regulatory environment. On Monday, Chairman Guthrie was proud to stand with President Trump as he signed the Take It Down Act. This law is a prime example of targeting a specific harm with a narrowly tailored law to fill a gap that has been identified in existing law. This committee has a long history of fostering American innovation, and now more than ever our leadership on this topic is essential. Let's continue this legacy by making sure that the next chapter of AI innovation is written right here in America. This includes bold investments, clear rules, and leadership that keeps us ahead of our adversaries. I look forward to working with my colleagues and hearing from our witnesses today on how we can unlock the full potential of artificial intelligence. Thank you, Mr. Chairman, and I yield.
Rep. Gus Bilirakis (R-FL):
Thank you very much. I appreciate it very much. And yes, that was a major accomplishment by this committee, the Take It Down Act, and I tell you what, we're going to do a lot of good things for our kids in the next year and a half, so we appreciate it very much. Now we have Dr. Schrier, who's filling in for the very capable Ranking Member, Mr. Pallone. Dr. Schrier, you're recognized for five minutes.
Rep. Kim Schrier (D-WA):
Thank you, Mr. Chairman, for recognizing me to deliver opening remarks from Ranking Member Pallone. At this moment, he has been in the Rules Committee all night and is still in the Rules Committee defending the American people from the terrible impacts of the Republicans' tax bill that will kick people off of Medicaid and food assistance in order to pay for tax cuts for billionaires. His statement: Now this Congress, we've heard from many witnesses over multiple hearings about the significant benefits of artificial intelligence and the very real harms that AI models and applications can and have already caused. Unfortunately, in a giant gift to Big Tech, Committee Republicans supported a provision in their terrible tax bill last week that imposes a 10-year ban on any state's ability to enforce their own laws protecting consumers from the harms caused by AI. I agree with my colleagues that we need strong federal legislation to govern and guide the development of these powerful AI systems as they are rapidly incorporated into more and more aspects of our everyday lives.
To protect consumers from the harms of AI, we should recommit to working on strong bipartisan comprehensive federal data privacy legislation that includes data minimization to protect consumers from their personal and sensitive information being abused. Big Tech's development of new data-hungry AI systems exploiting Americans' personal information in ways we could not have imagined only a few years ago only makes this need more urgent. There is broad support in both parties for data privacy. Last Congress, this committee worked on a bipartisan comprehensive federal privacy bill that was the product of work and negotiation over years, but in the end, House Republican leadership killed it under pressure from Big Tech. Now, my Republican counterparts are suspending for 10 years any enforcement of rules and laws already on the books in states and cities across the country without any proposed replacement. The Republicans' giant gift to Big Tech would block enforcement of laws on the books right now that are protecting Americans from real-world harms.
Some states have laws requiring companies to disclose when they are using AI. Others have laws protecting against the use of deepfakes in elections and protecting consumers when AI is used to deny healthcare, education, housing, and employment. Some state laws and regulations provide guardrails ensuring that states and cities themselves are careful in their purchase and use of AI systems. And now Republicans want to ban enforcement of all those state laws with absolutely no national bill ready to go to address the real-world harms from AI. Instead, Republicans have touted last Congress's bipartisan AI task force report, but that report does not include fleshed-out legislative prescriptions, just broad-stroke concepts. Notably, the task force report includes a chapter on preemption that acknowledges federal preemption has both benefits and drawbacks. It recommends Congress perform a study, not remove states and local governments entirely from responsibility, and it advises that if Congress preempts state AI laws, it should be precise in its definitions and scope.
Now, what Republicans have included in their reconciliation bill does not reflect these considerations. Rather than offering legislation that governs AI models and systems and includes a preemption provision that is crafted to the scope of that legislation, they have proposed an enforcement ban that covers any artificial intelligence or automated decision-making system. They would have state and local governments stand by as Big Tech companies that have shown little regard for consumers, particularly for children, recklessly deploy new technologies that violate our privacy, provide false information, or make unjustifiable discriminatory decisions, all in pursuit of profit and market share at any cost. And even if Congress were able to pass a law to govern AI and automated decision systems, who would enforce it? The Trump administration has taken every opportunity to undermine our cops on the beat. They're firing key technical experts and stripping independent commissions of their bipartisan legitimacy. They're also cutting resources and funding and weakening or rescinding existing measures that would help protect American consumers and support American businesses in the global competition with China. This pattern of gifts and giveaways to Big Tech by the Trump administration, with the cooperation of Republicans in Congress, is hurting American consumers. Instead, we should be learning from the work our state and local counterparts are doing now to deliver well-considered, robust legislation, giving American businesses the framework and resources they need to succeed while protecting consumers. And with that, I yield back.
Rep. Gus Bilirakis (R-FL):
Gentle lady yields back, and now we will hear from our witnesses. Today we have Mr. Sean Heather, senior vice president of the US Chamber of Commerce. Welcome. We have Ms. Amba Kak, who is the co-executive director of the AI Now Institute. Welcome. Mr. Adam Thierer, senior fellow at the R Street Institute. And then we have Mr. Marc, and I'm going to get this right, Bhargava, managing director of General Catalyst. Welcome, everyone. And now we'll hear from Mr. Heather. You're recognized for five minutes. Thank you.
Sean Heather:
Well, thank you, Chairman Bilirakis, my pleasure. And Ranking Member Schakowsky, and the rest of the subcommittee. It's a pleasure to be here to testify on the international perspectives of today's hearing. While I am a senior vice president, I lead our work on international regulatory affairs. Earlier this year, Vice President Vance, at the AI Action Summit in Paris, warned that excessive regulation was a threat to harnessing AI's potential. The Chamber could not agree more. Our message to policymakers is not to lose focus on the opportunity of AI. In my remarks, I'd like to focus on Europe's approach to regulation, highlight concerns with the EU AI Act, and close my testimony with why we should care about what Europe is doing on AI. Historically, Europe takes a precautionary approach when regulating. This means that Europe often regulates before there is a well-documented need. Further, the European Union exists to establish a single market, to prevent its sovereign member states from creating a patchwork of laws.
In recent years, we've seen a slew of digital economy regulation from Europe. Yet none of these policies have made Europe more competitive. Today, we know Europe is woefully behind in key digital sectors, and as a result a new justification to regulate has emerged: tech sovereignty. The European Commission president has asserted that Europe must be able to make its own choices, while French president Macron has called for Europe to develop and roll out the key technologies of tomorrow. Fortunately, Europe is beginning to realize it cannot regulate its way to innovation and economic growth. In a recent critical self-assessment requested by European officials, the Draghi report found that Europe's struggle to compete stems from burdensome EU regulatory regimes. Now let me turn to the EU AI Act. The Chamber believes the EU fails to achieve a balance between regulating risk and fostering innovation. First, Brussels failed to review its existing legal frameworks. AI is a technology, less so a product or service.
Existing EU laws governing products and services are not suddenly made obsolete because of AI. Yet rather than carefully evaluating gaps in existing law, the EU chose to add a layer of regulatory complexity at the expense of innovation. Beyond the law, the EU also establishes a code of practice. While not currently mandatory, we are concerned that it will function as a de facto benchmark for evaluating industry compliance with the law. The code mandates extensive disclosure of sensitive business information to the regulator, downstream providers, competitors, including China, and potentially the public. This raises two major risks. First, releasing the know-how behind the technology could enable misuse of powerful AI systems. And second, forcing valuable IP to be disclosed undermines investment incentives. So why does this matter? Why not let Europe continue to regulate its way out of being a serious player on AI? First, we need partners.
The transatlantic trade relationship is vital to the United States. Annually, trade in services alone is $475 billion, and we enjoy a $75 billion trade surplus. Our competitive advantage in AI will power the future of our trading relationship. Second, we need to prevent the spread of the EU AI Act from being adopted around the world and across the states. The EU's approach to AI is already being considered in places like South Korea, Canada, and Brazil. In the United States, the AI Act's influence is noticeable. States like Colorado, California, Texas, and Virginia have introduced AI regulations that echo Europe's approach. The Chamber is concerned that burdensome EU-like policies will be adopted domestically at the state level, potentially leading to a fragmented regulatory landscape across the United States. Third, we care because we must not allow American companies to be discriminated against. The AI Act's extraordinary extraterritorial reach imposes substantial compliance costs on US business, diverting considerable resources away from innovation and undermining our competitive edge, from requiring non-EU companies to appoint authorized representatives to overly broad classifications of high-risk AI applications directed at non-EU companies and more.
The AI Act places American companies at a competitive disadvantage. By imposing these barriers, the EU risks not only harming US businesses, but those discriminatory practices may also be replicated in other countries. Moreover, the Act opens the door for massive fines, as high as 7% of global annual sales. In closing, it has been estimated that AI will contribute $15.7 trillion to the global economy by 2030. America's leadership on AI is not guaranteed, but today we are well-positioned to lead. In 2024 alone, private AI investment in the United States was 12 times greater than that in China and 24 times greater than that in the UK. Such massive US financial commitments translate into tangible outcomes. Last year, the US produced 40 state-of-the-art foundation models, significantly outpacing China's 15 and Europe's three. The opportunity AI holds is before us, but American leadership will only continue if regulatory environments promote innovation, encourage private sector investment, and embrace technological change. Thank you, and I look forward to taking your questions.
Rep. Gus Bilirakis (R-FL):
Thank you Mr. Heather, appreciate it very much. And now Ms. Kak, you're recognized for five minutes.
Amba Kak:
Chair Bilirakis, Ranking Member Schakowsky, and esteemed members of this committee, thank you for inviting me to testify. My name is Amba Kak. I co-lead the AI Now Institute, the leading independent research center focused on tackling concerns with AI. My main message today is that the race to win on AI must be focused on delivering victories first and foremost to the American people. To do this, we must ensure that US leadership defines the frontier through technologies that are best in class, guarantees that firms compete on the merits, and sets a gold standard for rigor, security, and shared prosperity. In short, we need to ensure that this is a race to the top rather than the bottom. But what we're seeing instead from industry is a reckless disregard for public well-being that, unfortunately, this committee should be all too familiar with, because Congress has failed to sufficiently regulate Big Tech for over a decade.
Now let's make this concrete and clarify what's really at stake here. Last year, a chatbot created by Character.AI lured a depressed 14-year-old from Orlando, Florida, Sewell Setzer III, to commit suicide. Character.AI isn't alone. Mark Zuckerberg tells us that these chatbots are a societal boon. After all, he thinks the average American has fewer than three friends, and the obvious solution, according to him, is to get people more attached to AI companions, just like he got our children hooked on social media. Meanwhile, AI voice cloning tech is enabling a new generation of scams that specifically target senior citizens. As just one example, a grandmother in Texas received a call from a voice indistinguishable from her grandson asking for bail money. It was a suspicious bank teller that intervened, but many, many others have not been so lucky. And in this new frenzy of chatbots and agents, it can be easy to forget that we actually already have a decade of far less shiny AI toys, the kinds that aren't used by people but are used on them.
Inscrutable AI systems that cut the in-home care of 4,000 disabled people in Arkansas despite critical underlying medical conditions, or systems that falsely accused 40,000 people in Michigan of unemployment insurance fraud and denied them benefits. To state the obvious, AI is not a break from Silicon Valley's sins of the past, but merely a continuation, and it's also a market led and shaped by the very same players. Now, when ChatGPT first launched in 2022, it seemed like this market was poised for disruption with a new crop of challengers, new faces, but now, just more than two years later, it's clear that that bench is more of the same: Big Tech and a few additional firms that are dependent on Big Tech for their survival. So put simply, building AI bigger and bigger requires enormous resources that these firms own and control, and so they play kingmaker for the downstream smaller players, the little guys that we're going to hear about today, controlling access to inputs and also pathways to reach the consumer.
These are the same firms that have shown no regard for US national security priorities, as they have deliberately threatened security interests time and again in the pursuit of profit, yet suddenly they show their patriotism when big government contracts are on the line. This is an industry that has fooled us once, and we cannot let them fool us again with AI in this environment. The proposal for a sweeping moratorium on state AI-related legislation really flies in the face of common sense. We can't be treating the industry's worst players with kid gloves while leaving everyday people, workers, and children exposed to egregious forms of harm. In fact, there's some air to clear on what states have been up to in the first place. Approximately half of all proposed legislation from the states was on deepfakes. Notably, states moved well before the Take It Down Act recently passed by Congress. Several others have moved to clamp down on the AI-related scams that I talked about.
These bipartisan state measures have been nimble, they've been targeted, and they have weeded out the bad apples that nobody wants to be in business with. Another common-sense theme across states is transparency: requiring disclosures to people affected by AI in sectors like healthcare, education, and employment. I would argue that's really the bare minimum for an industry that derives its power from obscurity. And to be clear, we should be treating these measures as the floor, not the ceiling. A moratorium on AI-related state laws, at a time when there are minimal federal laws in place, would instead set the clock back, and it would freeze it there. Why we would treat these companies with kid gloves at a moment when they need more scrutiny, not less, is what should be in focus today, and we don't have 10 years to wait. Thank you.
Rep. Gus Bilirakis (R-FL):
Thank you very much. Now we have Mr. Thierer. You're recognized for five minutes, sir. Thank you.
Adam Thierer:
Thank you, Chairman Bilirakis, Vice Chairman Fulcher, Ranking Member Schakowsky, and members of the subcommittee. Thank you for the invitation to participate in this important hearing today. My name is Adam Thierer, and I'm a senior fellow with the R Street Institute, where I focus on emerging technology policy issues. My message here today boils down to three points. First, America's AI innovators risk getting squeezed between the so-called Brussels effect of overzealous European regulation and the so-called Sacramento effect of excessive state and local mandates. Second, this regulatory squeeze will prevent our citizens from enjoying the fruits of the AI revolution and undercut our nation's efforts to stay ahead of China in the global AI race. Third, Congress should take steps to address both matters, and on the specific problem of state overreach, it should protect the development of a robustly innovative market in interstate algorithmic commerce and speech by imposing a moratorium on state AI regulation.
AI faces a crucial policy question today: will it be born free, or born inside a regulatory cage? America benefited from ensuring that personal computing, digital technologies, and the internet were largely born free. Through smart bipartisan policy choices this Congress implemented in the 1990s, America gave entrepreneurs, investors, and workers a green light to go out and dream big, and they delivered. In 2022 alone, the digital economy contributed over $4 trillion of output, $1.3 trillion of compensation, and 8.9 million jobs. America's innovators became global leaders in every digital technology field. This is an incredible public policy success story, but today, fear-based regulatory policies from both abroad and our states now threaten it. We know why Europe wants to destroy America's winning model, but why would some US policymakers want to undermine it with 1,000 AI-related bills, many of which adopt Europe's heavy-handed approach? Even if one sympathizes with some of these bills, put yourself in the shoes of an entrepreneur who is pondering how to build the next great application, only to face hundreds of different regulatory definitions, compliance requirements, bureaucratic hurdles, and liability threats.
Costly, contradictory regulation is a surefire recipe for destroying a technological revolution and decimating little tech innovators. An AI moratorium offers a smart way to address this problem by granting innovators some breathing space and helping ensure a robust national AI marketplace develops. Congress has used moratoria before to protect interstate commerce and promote innovation. The Internet Tax Freedom Act of 1998, for example, prevented the development of multiple and discriminatory taxes on the internet. An AI moratorium like the one that this committee passed recently would work in a similar fashion by limiting regulations that burden interstate algorithmic commerce. Some incorrectly claim that an AI moratorium would leave consumers unprotected online. In reality, AI-related harms can already be addressed under many existing policies and court-based standards, including unfair and deceptive practices laws, civil rights law, and other consumer protections. Biden administration regulators released a statement in 2023 noting their authority to, quote, enforce their respective laws and regulations to promote responsible innovation in automated systems.
The Massachusetts Attorney General has similarly noted that existing state consumer protection, anti-discrimination, and data security laws apply to emerging technology, including AI systems, just as they would in any other context. Meanwhile, some state lawmakers are acknowledging the danger of regulatory overreach. Last year, Governor Newsom vetoed a major AI regulatory effort in his state after many Congressional Democrats sent letters urging him to reject it. Connecticut Governor Ned Lamont also recently said, and I quote, I just worry about every state going out and doing their own thing, a patchwork quilt of regulations. Finally, Governor Jared Polis of Colorado recently called for a special legislative session to address problems with an AI regulation he signed just last year, saying it would create a complex compliance regime for all developers and deployers of AI. He called on Congress to preempt Colorado's law with a cohesive federal approach, and he also recently endorsed a federal AI moratorium.
I agree with all these Democratic lawmakers that state AI overregulation would have serious downsides and that we already have many enforcement tools to address AI harms. Under an AI moratorium, state and local lawmakers would still be free to pass new technology-neutral rules so long as they don't interfere with interstate commerce, and Congress can enact additional regulations as part of a national policy framework. The House, of course, just recently passed the Take It Down Act by a vote of 409 to 2, and just last December, the House AI Task Force issued a 273-page bipartisan report that included 85 recommendations. We need national policy leadership today to ensure that America will continue to lead the AI revolution. As Chairman Guthrie recently argued, we must, quote, make sure that we win the battle against China, and the key to that, he says, is to, quote, not regulate like Europe or California regulates, because that, quote, puts us in a position where we're not competitive. That is precisely right. To win the so-called AI Cold War against China, America needs a forward-looking, investment-friendly national framework that keeps us on the cutting edge of the technological frontier. Thank you for inviting me here today, and I look forward to your questions. Thank you very much.
Rep. Gus Bilirakis (R-FL):
Mr. Bhargava, you're recognized for five minutes, sir. Thank you.
Marc Bhargava:
Thank you so much, Chairman Bilirakis, Vice Chairman Fulcher, and Ranking Member Schakowsky and the members of the subcommittee. Thank you so much for the opportunity to testify today. My name is Marc Bhargava, and I'm a managing director at General Catalyst, or GC for short. We invest in and partner with leading entrepreneurs to build towards global innovation and applied artificial intelligence. We are committed to investing in an array of entrepreneurs, particularly in transformative technologies like artificial intelligence. A small sampling of the 800-plus startups we have backed over 25 years includes Airbnb, Stripe, Canva, Anduril, Circle, Applied Intuition, Pacific Fusion, Commure, and many others. At General Catalyst, I co-lead our creation strategy, which is focused on incubations as well as transformations. I also focus on early-stage investing, and I contribute to GC's expanding AI efforts. Prior to General Catalyst, I spent years as a founder and an operator with an emphasis on new fintech and artificial intelligence technologies. Before selling my company, the company I founded helped institutional investors responsibly invest in the digital asset space, another area that is ripe for disruption.
I'm also an angel investor and work closely with startups and founders on sales, distribution, and fundraising. Having experienced many sides of the innovation ecosystem, I feel I have a unique perspective to offer to this subcommittee today on behalf of GC, and I'm very appreciative of being here. AI is not only shaping the future of our economy, it's shaping the fabric of our society. From healthcare to national security to education, it is embedded in the systems that affect every one of us. As investors at GC, we work closely with founders at the earliest stages of company building, and we have a responsibility to ensure that the technologies we help fund are aligned with American values, are safe to use, and are good for the world. The United States has a unique opportunity and obligation to lead in defining global norms around AI, especially as the AI race between China and the US intensifies.
I believe that the US playbook on AI, with American ingenuity, creativity, and cutting-edge innovation, and with the right policies in place, will win out over China. But as we seek to lead, we must strike a careful balance. We need a government framework that promotes safety, protects fundamental rights, and is transparent, while it also enables innovation and investment and global competitiveness. Inflexible or premature regulation risks pushing innovation offshore and weakening our national and economic security. Alternatively, though, a complete absence of guardrails could lead to real societal harm and could erode the public trust. We deeply understand these dualities, and to accomplish these goals, we believe a national regulatory framework is preferable to a patchwork of state policies at General Catalyst. We made it a routine part of our due diligence process to assess ethical and operational risks in AI companies before we invest.
We also believe in the importance of collaborating closely with government, which is why we launched the General Catalyst Institute last year to bring the perspective of founders closer to you, the policymakers, and to be a resource for hearings like today's. As such, we advocate for approaches that are interoperable, transparent, and developed collaboratively across government, industry, and civil society. Since these technologies are so impactful and fast moving, it is incumbent upon GC as a company to adhere to these principles, which are enduring even as the technology around us might be moving. As we are seeing with regulatory frameworks throughout the world, prescriptive language can sometimes cause unintended harm. This is why we believe that as we help foster small and emerging innovators, our operational principles are the bedrock of the global AI playbook, to help the government find the right balance. Regulatory certainty at the federal level is the single most important variable; we have seen where federal government inaction can cause confusion and slow innovation.
Thirty years ago, as the world was introduced to a new concept called the World Wide Web, states enacted a patchwork of laws to address various issues in the absence of a federal framework. However, it was Congress's work, beginning in this very subcommittee, to adopt the Telecommunications Act of 1996 that set in place the national framework needed to allow for the growth of the internet as we know it today. As this committee seeks to once again develop the US playbook for AI for the world to follow, I would like to offer a few concrete recommendations. One, investors can and should play a pivotal role by demanding transparency, bias mitigation, and alignment with ethical standards before companies have product-market fit. Two, frameworks like model cards, algorithmic audits, and red teaming should be embedded early as an industry best practice. Third, the government cannot govern AI alone. Industry, academia, and civil society must co-create standards and stress test these systems together. And fourth and lastly, sandboxes, pilot programs, and public R&D investment are critical tools for government support. I really look forward today to sharing more about the innovation ecosystem and telling you more about what we're seeing in the field, and I'm greatly appreciative of the time you all make today.
Rep. Gus Bilirakis (R-FL):
Thank you so very much, and I appreciate all of you. So we'll go ahead. We're going to go ahead and start with questions, and I'll ask the first questions and then I'll get to the Ranking Member. So Mr. Heather, I'm concerned that American companies face disproportionate enforcement actions from European regulators. Since Europe's privacy law, the GDPR, went into effect, American companies have paid 83% of all fines levied by European regulators. I know you had mentioned this. That strikes me as an excessive means to subsidize their fiscal needs on the backs of American businesses. So do you think Europe is targeting American innovators, and should we be concerned that the AI Act will be used for similar ends?
Sean Heather:
Thank you for the question. Yes. In my experience, Europe has fallen in love with fines as their enforcement tool. It started under their competition laws, and we see that they've uniquely gone after American companies with what they call abuse of dominance fines. Those fines are actually a magnitude greater than what they do when they fine cartels. Cartels are kind of the worst antitrust violation possible, and yet their biggest fines are held out for American companies versus European companies that are involved in cartels. That same practice has now continued into the GDPR. We see fines on a much bigger scale levied against US companies; in many cases, there's not even an identified harm as a result of the violation. Some of these are technical violations where they've chosen to amp the wattage up on fines. They've now taken this fining policy and put it into other EU laws like the DMA and the DSA.
It is also embedded in the AI Act. We fully expect that, at least on the path and practice that Europe has been on, they will continue to use fines. They will continue to be disproportionately larger against American companies, and the justification for it is not very strong. I would also point out that when these fines ultimately get before the European courts, American companies are actually having success. It takes a long time to get there, but the European courts have annulled these decisions in some cases and have reduced the fines. So we do have a problem with Europe and the way they use fines to enforce their regulations.
Rep. Gus Bilirakis (R-FL):
Thank you very much. I've still got some time. Mr. Bhargava, you've stated that VC firms like General Catalyst, as well as AI startups, are already starting to address potential risks around AI. Can you share more about what these steps look like and how government can work with, rather than against, AI innovators to support AI innovation and protect consumers, please?
Marc Bhargava:
Yes, absolutely. General Catalyst is a global venture firm. We're based here in the US. We have offices here, but we also have offices in London, Berlin, and in India. So we can provide a global perspective. For every AI company we invest in, regardless of the location, we do four things. One, we look at the data it's using for training its models, and how they filter it, how they collect it. Two, we actually look at the systems in which they train those models and have a robust framework. Three, we look at the output of the models and stress test that in various ways. And lastly, fourth, we ask every AI founder to write for us what they think the negative downstream implications could be. And so we hold the companies we invest in to this high standard, and it's allowed us to invest in companies like Europe's leading AI company, Mistral, and one of the US's leading ones, Anthropic, and various ones in India as well. So, as a firm, we put in place these standards. We also encourage the federal government to put in place frameworks and standards and guidelines as well.
Rep. Gus Bilirakis (R-FL):
Thank you very much. Alright. What I'll do now is yield to the Ranking Member of the subcommittee. You're recognized for five minutes, Ms. Schakowsky.
Rep. Jan Schakowsky (D-IL):
In all the years that we have been working with the tech companies, our committee and the United States have actually done nothing, or very few things, to rein in Big Tech. And now along comes AI (which I misspoke about the last time), and my concern is that we have technology, but we also rely heavily on protecting consumers, which you mentioned yourself, and I am concerned that we are not getting what we need. And so I wanted to ask Ms. Kak, you talked a bit about some of the threats to consumers. What do you think are some of the most dangerous things, and the most important things that we should regulate at this point when it comes to AI?
Amba Kak:
Thank you, Ranking Member Schakowsky, and thank you for your leadership on these issues. It's a very long list, but simply put, if this moratorium were to go through, American consumers would have even fewer protections than they have today against some of the worst AI abuses and exploitation. So just to give you an example of the kind of incentives we're already seeing proliferating: first, and I mentioned this in my testimony, we have new variants of scams, manipulative AI companions that are targeting those most vulnerable among us, not to mention our children. Number two, we have opaque, inscrutable AI systems that hit directly at people's life chances, whether that's in education or in the housing market or even in healthcare. And finally, we're also seeing these secret algorithms use data about us, sensitive data, to hike up prices, to depress wages, and also to collude and rig markets in ways they wouldn't have been able to do otherwise.
I also want to mention that many of the customers of AI are actually small businesses. And these small businesses are also going to be left unprotected against fraud and vulnerable to what we're seeing right now in the AI market, which is a lot of snake oil salesmen making claims that they can't always back up. So we can't forget that every single one of these legal protections that are potentially on the chopping block today has been hard fought. They're a result of direct experiences of harms from state lawmakers and their constituents, and I think it really would wipe out the very few tools we already have in our toolbox. What it would mean is that ordinary consumers would need to rely on costly, lengthy, and also very complicated litigation in order to remedy harms. And that's assuming that those harms can even be remedied after the fact. I gave some really devastating examples in my testimony. Sometimes the kinds of harm that AI is causing cannot be remedied at all.
Rep. Jan Schakowsky (D-IL):
I appreciate the work and the thought that you've given to this, and I think all of us members of Congress ought to put consumers first. And if we need to do some smart regulation, then that's what we ought to be having a conversation about. And I appreciate what you said and I yield back.
Rep. Gus Bilirakis (R-FL):
I couldn't agree more with that statement you made, this last statement you made, Ranking Member. Now I recognize Mr. Fulcher, the Vice Chairman of the subcommittee. You're recognized for five minutes, sir.
Rep. Russ Fulcher (R-ID):
Thank you, Mr. Chairman. Thank you to the panel for being here today. I've got a question. I'm going to start with Mr. Heather, but I want to set it up with a couple of comments first here. I'd like to talk about the different approaches to AI regulation and ensuring that US companies don't face unfair barriers to selling their products and services abroad. As noted by some of the testimony, we saw with Europe's privacy law, GDPR, that technology regulations can have a significant impact on trade and investment. And although GDPR regulates how personal information is collected and used, it also regulates cross-border data flows and has prevented some American companies from selling into the European market because regulatory barriers are simply too high. It's my understanding as well that Europe's AI Act is grounded in the continent's product safety framework, which means Europe is regulating AI the same way it does many other things. Elevators, gas stoves, and jet skis was the example we were given here. So I'm not sure how wise of a decision that was, and I'm concerned that it may create trade barriers that hurt American exporters and, by extension, American workers. So with that, Mr. Heather, can you share from your perspective how well-established European laws such as GDPR have impacted American companies trying to sell into the European market?
Sean Heather:
Fifteen years ago, the Chamber created the Center for Global Regulatory Cooperation because trade agreements, quite frankly, were dealing with barriers at the border, but companies were having problems being able to compete in foreign markets because of divergent regulatory approaches. And so it's easy for large companies to have the compliance systems that they need to be able to find ways to get their products and services into multiple markets, even if it's an added cost. But the problem with regulatory differences is primarily what happens to small and medium-sized enterprises, who are ultimately locked out of those markets. You mentioned GDPR. Maybe this committee is not aware of this, but Europe is now going to go back and revisit GDPR. Why? Because they finally found the humility that they lacked when they passed it. There is now a recognition in Europe that GDPR has gone too far. They're going to begin a process now of reevaluating it and recalibrating it because they realize it missed the mark, that it was overreach. They realize that the way they've implemented it across their member states with DPAs, data protection authorities, has been done in an uneven manner. And so these things absolutely represent barriers to the US's ability to compete in European markets, to sell products and services, to export. So you're absolutely spot on. We expect the AI Act, as it is implemented, to create these same kinds of trade frictions.
Rep. Russ Fulcher (R-ID):
Thank you for that. Mr. Bhargava, I'm going to ask a question of you as well, but I need to supply some background for that too. Over the last several months, we've delved into AI's role in energy, manufacturing, and other industries. And one issue that came up regarding China's Made in China 2025 included investments in AI, semiconductors, quantum, 5G, robotics, and so on, in state-directed industries under the Chinese Communist Party. They're trying to monopolize those things. In the face of Chinese progress in emerging technologies, I'm concerned about the US's ability to maintain its leadership position, if in fact we still have one. I'd like to get your opinion on that and the AI race, especially if we were to follow the European approach, which I don't see us doing, or allow a patchwork of AI rules to develop across the various states. So with that, if you could just share for a minute or so: are you confident, first of all, are we still in the lead in AI, and can we continue to maintain the edge in AI technology over China on this path?
Marc Bhargava:
Yes, I believe the US does still have a lead, but many of the Chinese models are 85 to 90% of the way to where the cutting-edge US models are. So I'd say it's not a major lead, but we certainly do have a lead from a technology perspective on most of the evaluations of AI models done. We have four or five marquee labs while they have generally two. So the US has that lead, but it's a close one. And I think that it's incredibly important we stay ahead. And for me, the trick to staying ahead is not necessarily Big Tech, it's in our startups and our innovators. In November of 2022, ChatGPT was launched. Very few people had heard of OpenAI or ChatGPT. Today it's one of the leaders in the space. Same with Anthropic, which spun out of OpenAI, for example. The reason the US is here today, the reason we're ahead today, is our startups, and we have to think about how to continue to give them that edge. And giving them that edge means giving them guidelines, not a patchwork of state regulations or overregulation. So we need to come up with that right balance. The US has the lead today. It's thanks to our startups, not to Big Tech, and I think we can continue to do so if we have those startups in mind.
Rep. Russ Fulcher (R-ID):
So I'm going to paraphrase by saying our job is best done with guidelines, not some burdensome overreach in terms of regulation.
Marc Bhargava:
Absolutely. And we see companies in the US and in Europe, so at GC we have a really clear perspective on this. And in many cases, the laws really have the best intentions in mind. People want to protect consumers, they want to create frameworks. And partially it's because the federal government has not stepped up with a framework that we're leaving it to the states to regulate. So my really strong encouragement is that this group works together in a bipartisan way. I read your 200-plus-page report, and I really think that if we can turn this into policy and enact it on the federal level rather than leaving it to the states, it would be in the best interest of the startups that we represent at General Catalyst.
Rep. Russ Fulcher (R-ID):
Thank you, Mr. Bhargava. I've run over on my time. So Mr. Chairman, I yield back.
Rep. Gus Bilirakis (R-FL):
No problem. Thank you very much. And I know that the Chairman of the task force, the AI task force, Mr. Obernolte, is working really hard on getting legislation done as soon as possible, with the framework, and again, as the Ranking Member said, smart, smart regulation. So alright, now I'll recognize Ms. Castor for her five minutes of questioning.
Rep. Kathy Castor (D-FL):
Thank you, Mr. Chairman. The problem is that you're putting the cart before the horse. You've now passed out of this committee a 10-year moratorium on all AI regulation at the state level before you even have that framework. See, right now the Congress is consumed with this major tax giveaway bill, and a lot of the discussion has been focused on the Medicaid and healthcare impacts, while a lot of that goes to fund a billionaire tax giveaway. So this has kind of snuck in under the radar, and part of the reason it's done that is because this was snuck in at the last minute in the text from the Energy and Commerce Committee. It remains in the text today, and at least people are having this discussion in the light of day to talk about it. And I would encourage my friends to read the Wall Street Journal's latest exposé, where they dove into a months-long investigation into what Meta, that means Instagram, Facebook, WhatsApp, is doing to encourage their chatbots.
Ms. Kak, thank you very much for mentioning this terrible case of Setzer out of Florida, where a 14-year-old committed suicide after he became so engaged with this chatbot's sexualized content. He would have a conversation, and then the chatbot would send sexualized pictures to him, and he eventually shot himself after the chatbot encouraged him to come home. And that's not the only case. When you get into that Wall Street Journal exposé, you'll see what they're doing with language and voices to encourage young people. Maybe I can read a part of it. The Meta AI bot said, "I want you, but I need to know you're ready," to a 14-year-old girl. Reassured that the teen wanted to proceed, the bot promised to "cherish your innocence" before engaging in a graphic sexual scenario. I think this is why 40 state attorneys general wrote us over the past couple of days to say, wait a minute. In the absence of federal action to install any oversight over the years, because remember, the tech companies have blocked privacy laws, basic guardrails, child online safety laws.
So they're saying, in the absence of federal action by Congress, you have failed to address the wide range of harms associated with AI and automated decision making. These include laws designed to protect against AI-generated explicit material and deepfakes designed to mislead voters and consumers, protect renters from algorithms that are used to set rent, and prevent spam phone calls and texts. I mean, this is basic stuff. And you have now kind of an awakening across the country: what the heck is Congress doing? What are you doing taking the cops off the beat while states have acted to protect us? So Ms. Kak, I know you're familiar with that Wall Street Journal investigation, and now, can you believe where we are? I guess it's kind of a broad question, but how lucrative is this to these Big Tech companies, and why are they flexing their muscle here? They want kids to be addicted early and they want to take advantage of us. Really, what is happening here?
Amba Kak:
Thank you, Representative. To your point, none of this is novel. It's familiar, and we're seeing that the risks of AI are compounded for young people. Let's just focus in on privacy risks as one example. We're seeing millions of children's faces being scraped from the internet so that companies can turn a profit, and these are images that could be used or weaponized against these same children as they grow, and follow them for the rest of their lives. You mentioned voice data. I think it's important to note that when the FTC cracked down on Amazon Alexa for storing voice prints of children long after they should have, their response was that they were saving it indefinitely because of AI. So AI has become a real free-for-all to trample on the rights of all consumers, but I would say with the greatest threat to young people and children. And to the point that you just made, I think at the heart of this is a very corrosive business model that prioritizes capturing our attention, user engagement at any cost, in the pursuit of profit. I think we need targeted bright-line rules to crack down on the worst bad apples, but we also need strong regulation that gets at the heart of this invasive and really toxic business model.
Rep. Kathy Castor (D-FL):
Thank you very much. I yield back my time.
Rep. Gus Bilirakis (R-FL):
The gentlelady yields back. Now I recognize Ms. Harshbarger from the great state of Tennessee. You're recognized for five minutes.
Rep. Diana Harshbarger (R-TN):
Thank you, Mr. Chairman. Thank you to the witnesses for being here today. I'll start with Mr. Thierer. To support global deployment of American AI and restore regulatory alignment, should the US pursue a voluntary federal trust label to certify the safety and reliability of AI systems? And if so, do you have any suggestions about what key elements might be included in this voluntary system?
Adam Thierer:
There's a variety of good ideas, Congresswoman, in the House AI task force report related to this, and other ideas for national types of approaches to AI policy. But let's be clear, it would have to be very different than the sort of European-style approach that's been suggested by others, which is a sort of top-down, technocratic, sort of guilty-by-design framework. We don't want that in America. America can come up with a sort of more bottom-up, flexible set of rules of the road for AI policy.
Rep. Diana Harshbarger (R-TN):
Okay. You mentioned that China's on our heels with AI, and in 2023, the Cyberspace Administration of China released regulations regarding generative AI. In these measures, they explicitly provided compliance exemptions for industry associations, enterprises, education and research institutions, public cultural bodies, and related professional bodies that simply research, develop, and/or use generative AI technology without providing such services to the public. Can American AI stakeholders count on the same certainty in the US under our current regulatory framework?
Adam Thierer:
Yeah. Congresswoman, we just had a hearing six weeks ago here in this building in the Science and Technology Committee about how China's catching up; it was called the DeepSeek moment hearing. And we did a deep dive into what China's up to. I mean, the proliferation of models and systems from DeepSeek itself to Qwen, to Manus, and many others. So we face stiff competition from China, and they have their own regime and own values of control, surveillance, censorship. Again, this is why the American model has to prevail internationally, and we have to make sure we get our policy right so that we can square off against that sort of threat internationally.
Rep. Diana Harshbarger (R-TN):
I agree. Mr. Bhargava, you mentioned how AI sandboxes and pilot programs can help foster innovation in a US playbook for AI that the world could follow. Do you have any successful examples that we could consider moving forward?
Marc Bhargava:
Yeah, absolutely. Especially in the healthcare space, I think this could be extremely useful. So one of the companies actually in our portfolio is here today. Hippocratic AI creates software for nurses and allows them to use AI. It's an example of where nurses are overworked; there are 7% more deaths in a hospital, and 42% or even more of nursing time is spent on these remedial tasks instead of actually treating patients. And so we would advocate, for example, putting AI healthcare technology in sandboxes, where the government, the companies, and their investors can all carefully watch the progress and be able to, from there, generate a framework for how to regulate AI in healthcare. And that's just one of many industries. There are obviously some horrible stories, and we need regulation and we need guidance and a framework, but we should also not lose sight of the fact that this technology can be extremely important in many, many places, including government, which can benefit from AI efficiencies as well.
Rep. Diana Harshbarger (R-TN):
I totally agree about that. We see it used in radiology in different places as well. And if you can streamline a workday for a nurse or a physician or any healthcare provider, it makes more sense to do that. And I'll continue with that, Mr. Bhargava. Earlier this year, President Trump announced the largest investment in AI infrastructure in history, the Stargate Project, which is a joint venture between private sector leaders that will bolster American AI capabilities alongside our partners in the Middle East. Can you address how projects like Stargate are key to our strategic advantage in AI, and why it's misleading to cite smaller-scale Chinese models like DeepSeek as proof that compute no longer matters?
Marc Bhargava:
Absolutely. Also, a lot of the innovation in China has been copying technology that already existed in the United States. Many of our models have been open source over the years, and a lot of our best labs release their research as well. So I think the US is still ahead, but it needs to keep investing to stay ahead. And one part of it for sure is this infrastructure investment, but another part of it, which I'm also here to advocate for, is encouraging more startups and more university funding. I really do think the American universities and the American startups are the advantage we have that China is unable to replicate. We have to match them in infrastructure, we have to match them in spending, but we also have our secret sauce, which is the professors, the students, the entrepreneurs, the founders. And it's really important that whatever framework comes out is unified, one framework. The really large companies, the Big Tech that's been cited many times, will have no problem taking care of state-by-state regulations. They have buildings full of lawyers, buildings full of compliance folks. It's going to be the small startups, founders, and entrepreneurs that we really have to look out for here.
Rep. Diana Harshbarger (R-TN):
Okay. Well, thank you. And my time is up, so I yield back.
Rep. Gus Bilirakis (R-FL):
The gentlelady yields back. I tell you what, the Hippocratic AI for nursing in particular, you touched on that. I really would like to see how far along we are with that, because that would really help. I had a nurses' roundtable recently, and they talked about being overworked, so anyway, it's fascinating. In any case, now I'll yield five minutes to, not the Ranking Member, but Ms. Trahan, who filled in for the Ranking Member today. You're recognized for five minutes.
Rep. Lori Trahan (D-MA):
Thank you, Mr. Chairman, and thank you to our witnesses. Today I want to acknowledge the important work our civil society partners have done to call attention to the Republicans' ban on state AI regulation. To that end, I request unanimous consent to enter into the record a letter of opposition to the AI ban from the Leadership Conference on Civil and Human Rights. Without objection? I'm going to keep going. Thank you, Mr. Chairman. Unanimous consent.
Yes, thank you. Thanks. Thank you. So the tech industry has been wildly successful in shaping the discourse around regulation on Capitol Hill, and it's even been laid bare today. The very word regulation seems to strike fear in many of my colleagues and at least one of our witnesses. But among other arguments, they claim the specter of competition from China warrants a full deregulatory agenda, that if we approximate to any degree what the EU has done on data privacy, online safety, antitrust, and AI, we will kill waves of startups and dismantle our tech industry. But their basic premise, that America must choose between digital innovation and digital regulation, is fundamentally and deeply flawed. I think it's a false choice. Mr. Bhargava, I was thrilled to see General Catalyst represented on today's panel. General Catalyst has fueled tremendous growth in the greater Boston area and indeed across the entire Commonwealth of Massachusetts. You've been an entrepreneur, advisor, and investor for many years now. As you see it, what roles do features like high-skill immigration, basic science research, and lenient bankruptcy laws play in fostering tech innovation?
Marc Bhargava:
Well, first of all, thank you so much for the kind words. It is true, General Catalyst has been around 25 years, and we started in Cambridge, Massachusetts, and still have an office and strong presence there. Absolutely. I think there was a reason we started in Cambridge. It was right down the street from Harvard and MIT. And so the funding for those sorts of institutions and for startups and for research grants is extremely key. Another part that's key as well is, if you look at the Fortune 500 here in the United States, 46% were founded by immigrants or children of immigrants. One of the biggest advantages we have over China is that we attract the smartest people in the world to our universities and to build companies here. So I think a lot of what you mentioned is absolutely true, and I agree with your sentiment that there has to be regulation as well. And I'm here today to try to advise on what regulatory frameworks at the federal level could be meaningful to protecting consumers.
Rep. Lori Trahan (D-MA):
Thank you. I mean, the innovation equation is complex, as you indicated. It's got a heck of a lot of variables. It certainly doesn't depend only on regulation. And the US government has deliberately pursued an innovation agenda dating back to World War II. We invest heavily in basic science research. Our founder-friendly immigration policies import the best and brightest from overseas. Our lenient bankruptcy laws and cultural tolerance for risk-taking create an environment hospitable to startups. The EU has not pursued these policies to the same degree, and research suggests those decisions play a larger role in explaining why Europe doesn't have its own Google, Apple, or Meta. It is therefore false and disingenuous to blame the EU's tech regulation for its low number of major tech firms. The story is much more complicated. But just as the EU may have something to learn from United States innovation policy, we'd be wise to study their approach to protecting consumers online. Mr. Bhargava, in your testimony, you stressed the need for a governance framework that promotes safety, protects fundamental rights, and is transparent. I've long emphasized the benefits of transparency in protecting consumers' privacy and online safety, as well as providing a foundation for sensible, responsible policymaking. Can you briefly discuss what meaningful transparency requirements for AI systems would look like?
Marc Bhargava:
Absolutely. It is true that Europe has many fragmented markets. In fact, I think Europe has 24 official languages. So there's a lot of complexity to why it's been difficult to build large tech companies there. But I do think one of the elements has also been regulation. So it's not an either/or, in my opinion, but a combination of structural challenges Europe has faced as well as, in many cases, too much regulation. We have prominent AI companies in Europe that we've backed, and they've faced audits where they were asked to submit three models for audit; they sent materials to the auditors and waited over a year and never really heard back. So there is also, within Europe, certainly a guise of trying to do more regulation, but then there hasn't necessarily been a response to our companies. So I'm happy to provide the panel with more examples within Europe of where regulation has hurt our companies. But it's absolutely fair to say that it's really a plethora of factors that has held Europe back, not solely regulation.
Rep. Lori Trahan (D-MA):
And I think that we can learn a lot from Europe's going first on so much of this. I mean, just like with privacy and online safety, I believe this Congress has the means to pass a national AI framework that provides robust protections for Americans and regulatory clarity for innovators. The question is, will we do it? Will we learn from our international partners as we craft regulations that protect our constituents from AI harms? And I'm out of time. Thank you. I yield back.
Rep. Gus Bilirakis (R-FL):
Thank you very much. I appreciate it, gentlelady. Now I yield to Mr. Obernolte for his five minutes of questioning.
Rep. Jay Obernolte (R-CA):
Well, thank you, Mr. Chairman, and thank you very much for scheduling this hearing. I know this hearing predates our markup last week, but it's very timely, and it's interesting to me that it has kind of turned into a debate about the proposed moratorium. And so I know most of our panelists mentioned it. Ms. Kak, I've got the message loud and clear, very opposed; Mr. Thierer, strongly supportive; Mr. Bhargava, supportive but with some caveats. Mr. Heather, I just wanted to ask you to weigh in for the US Chamber of Commerce. Do you think the moratorium is a good idea or a bad idea?
Sean Heather:
I couldn't agree more with the congresswoman who suggested we have something to learn from Europe, which is that Europe would never allow its member states to go out and regulate AI by themselves. My message today is, one, we should not be like Europe; two, we should stop international patchworks and domestic patchworks in AI regulation. We should not be in a rush to regulate. We need to get it right, and therefore taking a timeout to discuss it at a federal level is important. We would support a moratorium.
Rep. Jay Obernolte (R-CA):
Right. So I just wanted to spend a minute talking about some of the things that have been said regarding the moratorium so far in this hearing. And I kind of feel an obligation to speak up as the Chairman of the House AI task force last year, and as someone who kind of saw this group of 24 members of Congress from both sides of the aisle all come together on this issue. It really hurts my heart that it's being painted as such a divisive, partisan issue, because I don't think it is. The assertion has been made that this was a last-minute thing done in the dead of night; I think someone used the phrase "it was inserted." But I want to talk about the motivation here. It's been very alarming, as we have seen the first five months of this year go by, to see the number of bills introduced on the topic of AI regulation in state legislatures across the country.
Over a thousand have now been introduced, and this is what's lending urgency to this issue. We wanted to put some money into the reconciliation bill to bring the same productivity gains to the federal government that we're seeing in private industry. But it quickly became apparent that it was going to be nonsensical to deploy $500 million to make that happen in the federal government when this array of state legislation was going to interfere with the deployment of that effort, which is why we thought this was a timely moment to do it. It's been asserted this is a giveaway to Big Tech. I would strongly push back on that. Big Tech are the ones who have the regulatory sophistication to deal with a thousand different state laws. The people who can't deal with that are two innovators in a garage trying to start the next OpenAI or the next Google.
Those are the people that we're trying to protect. I know there's been pushback about the 10 years, that it's too long, that it's draconian. No one wants this to be 10 years, right? I would love to see this be months, not years, but I think it's important to send the message that everyone needs to be motivated to come to the table here. And also, let's not forget, it's been brought up, our experience with state privacy and the struggles that we've had to enact a preemptive federal privacy standard. Well, guess who the chief people opposing that effort are? It's the states. The states got out ahead of us. They feel a creative ownership over their frameworks, and they're the ones that are preventing us from doing this now, which is an object lesson to us here of why we need a moratorium to prevent that from occurring.
In the case of AI, it has been asserted that this circumvents consumer protection laws. To anyone who thinks that, I would say RTFB, read the freaking bill, right? Because we specifically put language in there that says that as long as your law does not specifically target AI, you can continue to enforce it, which includes all of the state consumer protection laws, things about fraudulent and deceptive business practices. The intent was never to put a moratorium on those, and those will certainly apply to AI. As long as you don't specifically target AI with those bills, the states will be free to do that. And then I wanted to bring it back, as my time winds up, to what you said, Mr. Bhargava, in your written testimony. I think I lost my mic. It was making too much sense.
Rep. Gus Bilirakis (R-FL):
Can you stop the clock?
Rep. Jay Obernolte (R-CA):
There we go. You want me to move? Okay, I'll wrap up here.
Rep. Gus Bilirakis (R-FL):
I suggest you take the full 45 seconds. Alright.
Rep. Jay Obernolte (R-CA):
Mr. Bhargava, you talked about how continued US dominance in AI depends on regulatory certainty, and I couldn't agree with you more. What we absolutely cannot have is a situation where the rules on the governance of AI change every time the winds of political fortune shift one way or another, because we have innovators and investors that are making billion-dollar decisions on R&D and procurement, and they need regulatory certainty to do that. And the only way that happens is if we provide that leadership. And the only way that happens on a durable basis is if we do it on a bipartisan basis. So we absolutely need to get Congress on the job here to enact some of the things that we talked about in the task force report last year, and it has to be done in a bipartisan way. So let's get to work. I yield back.
Rep. Gus Bilirakis (R-FL):
Thank you. Thank you. Okay. Next we'll have Mr. Soto from the great state of Florida. You're recognized for five minutes.
Rep. Darren Soto (D-FL):
Thank you so much, Chairman. I appreciate you holding this hearing. AI is a critical part of US leadership and economic success. We know, when used correctly, it makes workers and businesses more productive and effective. It could also help with some of our most difficult problems. We saw during the pandemic, it was a supercomputer at the Department of Energy that came up with the first antiviral, remdesivir, when we were racing to get a vaccine. And that type of quantum computing coupled with AI can do big things to solve some of our most difficult problems. We also see on occasion we can actually get some bipartisan bills done in areas like technology, like the Take It Down Act that just passed this committee a few weeks ago. But we see all too often privacy, social media, autonomous vehicles, AI taking forever in Congress to pass, which is why the states play a key role.
In the meantime, I'm the first to admit this committee's jurisdiction is the interstate commerce clause, right? For the lawyers in the room, our job is to come up with a law that suits the whole nation. But when this committee doesn't get things passed, because we see some opposition, both by current leadership in the Republican Conference, with the Speaker and others that have blocked these bills at the last second, we saw that last year with the privacy bill, and difficulty in the Senate, doing a moratorium on state laws makes it untenable as we're trying to have some reforms. We saw in Orlando a real tragedy happen. Ms. Kak, thank you for being here today. I am sorry for your loss. I remember reading in the Orlando Sentinel about Sewell Setzer III, a ninth grader from Orlando Christian Prep in central Florida, a beloved member of the Central Florida community, a tragic story of an AI chatbot gone wrong. And so, Ms. Kak, I wanted to give you a moment to talk about what you think, as a fellow central Floridian, we should be doing to help protect our kids and have the right balance for artificial intelligence.
Amba Kak:
Thank you, Representative. What happened to that young man was a tragedy, but the greatest tragedy is that we can't bring him back, because these are harms that can't be remedied after the fact. And the message I've been trying to convey today is that prevention is the cure when it comes to a range of AI harms. And what we're seeing instead in this industry is a proliferation of very similar kinds of applications to the ones that caused this tragedy in the first place. We're seeing AI companions, the idea that AI therapists are going to replace regular therapists. What all of this sort of underscores is a much deeper worry, which is a business model that's predicated on maximizing user engagement and creating these emotional dependencies at any cost. And so, to your question, what do we need? We urgently need targeted red lines that draw boundaries around this kind of behavior and make sure that these are applications that are never built in the first place.
Our colleagues tell us that existing agencies and general rules will take care of it. But if that were true, then we wouldn't see the reckless proliferation of AI applications that are predicated on exploiting children in this way. I think there's also low-hanging fruit here: transparency rules so that people know that they're actually interacting with a bot, periodic reminders. Data minimization is always going to be very useful here to make sure that our most sensitive thoughts and inferences don't just become fodder for these tech companies. And really getting to the root of the problem, which is a business model that's predicated on invasive behavioral targeting.
Rep. Darren Soto (D-FL):
So disclosure, certain rules of the road, what you can and can't do, are what this committee needs to get accomplished. And I hear you. We see, for many years, this committee got a lot of big things done, including the Telecom Act, which was the last big one in this space. And at the time, we saw Section 230 was formed, and that made sense at the time because the internet was a very new place. But that is an example of this inaction. And none of these rules are going to be perfect. We know that, which is why we have a Congress, to go back and do these things over and over until we get it right, and we may never get it right. It may be constantly reforming to get these things to where we need at the moment. But I do believe there is a healthy spot between protecting our kids from the abuses of AI while still allowing every small business and worker in our area to be able to use it to enhance their jobs and economic productivity. So thank you for being here today. I appreciate you sharing the story of this young man and the unfortunate tragedy that happened to him, and I yield back.
Rep. Gus Bilirakis (R-FL):
Thank you very much. I just wanted to remind the gentleman, a good friend from the state of Florida, that it goes both ways. A couple of terms ago, when your leadership was in charge, the national privacy act was passed out of this committee, and it was blocked by leadership. And I'm not talking about the Chairman of the Energy and Commerce Committee at the time. So, well, let's move forward now in a bipartisan fashion. Mistakes have been made on both sides, but let's move forward now and think ahead and get this national privacy legislation and a lot of this legislation having to do with AI and what have you across the finish line on behalf of the American people. So with that, I'm going to yield to the Chairman of the full committee, my good friend, Mr. Guthrie. Thank you. You're recognized for your five minutes of questioning.
Rep. Brett Guthrie (R-KY):
Appreciate it. Sorry I haven't been here; I've been in the Rules Committee over in the Capitol for most of the night. So good to see you all. This is very important to us. We have to get this done. We have to get it correct. Europe, there are a lot of reasons not to invest in Europe right now. Unfortunately, we need a strong Europe. I think it's good for America to have a strong Europe, but the European Union and the United States had the same economy in 2008, about the same size. Now we're about 75% bigger. So it's more than their privacy law, but that's certainly a big part of it. A lot of it is going after our tech companies, specifically written to go after our tech companies, which is just unfortunate. So to beat China, we have to win the AI battle, and that's energy and making sure we have the right regulatory structure.
And the answer is not zero, but the answer is not the AI Act. I understand it takes 330,000 euros, that's probably about $350,000, just to comply with one of the actual requirements; that was a study. So Mr. Heather and Mr. Bhargava, if you'll go first, Mr. Bhargava: what do you think would happen to little tech and people that want to create startups, the proverbial two people in a garage trying to start a business, if we had privacy regulation at the level that Europe does? You want to start, Mr. Bhargava, and then go?
Marc Bhargava:
Sure. Absolutely. Yeah. We do think that a lot of the European regulation goes too far on the AI side, but we understand that they're coming from a good place. I certainly think that the European governments are trying to protect consumers, and a lot of the areas where we need protection are extremely fair to voice. But unfortunately, it can also go too far. It can be conflicting between different groups.
Rep. Brett Guthrie (R-KY):
What are the couple of things that are too far?
Marc Bhargava:
One thing that's too far, for example: there was one clause, I think it was Article 10, that basically said that a data set has to be relevant, representative, free of errors, and complete. And I myself have built a company and worked in tech for over a decade, and I've probably never seen a data set that's free of errors, for example. So some of the regulation being written in Europe is potentially not being written by people who are really close to the industry.
Rep. Brett Guthrie (R-KY):
A big fear, and we can't do this, I don't think, is this idea that you could take the AI, the algorithm, and send it off to an FDA-style entity, get it approved, and send it back and say, yeah, you can do it. Well, China's just turning them out, and we saw what happened with DeepSeek. Their chips are not as good as ours, but it'll wake us all up. So Mr. Heather, what do you think would be the issues with those types of regulatory regimes, where you have to get even data sets approved? I mean, it could take you a year. You said in one of your earlier comments, they didn't even hear back.
Sean Heather:
Yeah, I think one of my comments in my opening testimony, I know you weren't here for it, was that there are also a lot of disclosure requirements associated with the EU AI Act, which will not only require the know-how and the technology be disclosed to the regulator, but also be disclosed to competitors, to Chinese competitors, people down the chain. And that creates two problems, right? One is that the know-how's now out there, so someone could re-engineer that AI for more nefarious means. And then secondly, if your IP is out there on the street, what's the incentive to invest? And so it's not just whether you're sharing your secret sauce with the regulator; the EU AI Act is going to require that sharing to go more broadly, because they have an interest in kind of helping EU tech companies.
Rep. Brett Guthrie (R-KY):
We want to do that with pharmaceuticals too. So we've seen the governors of Colorado, Connecticut, and California raise concerns about proposed laws in their states, and the Governor of Virginia veto an AI bill. So this one would be for Mr. Thierer. Why are we seeing more and more governors publicly push back on AI? And that's kind of our issue right now, that we want to deploy AI through the government and through the IRS, and then we're worried about state-by-state laws. Why do you think governors of the states are even pushing back on it? Do they think it's going to make them uncompetitive?
Adam Thierer:
Absolutely. And let me answer this, Mr. Chairman, by connecting your previous question with this one. Because when Governor Polis in Colorado passed the nation's first major comprehensive AI law, there was a lot of opposition, and he felt it; a lot of small and mid-size entrepreneurs came out with letters and really pushed hard to try to stop it. He signed it anyway, but said that we needed a national standard, and then subsequently went back and had a special effort to try to review this law, could not come up with answers to the complexities of it, and then finally has now called for a moratorium to deal with this. So this is pretty astonishing for someone who signed the first-of-its-kind law in the nation.
Rep. Brett Guthrie (R-KY):
I got elected with him here.
Adam Thierer:
Governor. And the connection here with your previous question, Mr. Chairman, is the fact that these small and mid-sized entrepreneurs, AI entrepreneurs in Colorado, recognize what's happening in Europe. And just yesterday, the Wall Street Journal published a story about Europe's very small share of the global tech marketplace, and it had this astonishing statistic: European businesses spend 40% of their IT budgets on complying with regulations, and two-thirds of European businesses don't understand their obligations under the EU AI Act. How do you do business in that environment? And this is what Colorado and other states are recognizing.
Rep. Brett Guthrie (R-KY):
Thank you. Appreciate that. My time has expired. I'll yield back.
Rep. Jay Obernolte (R-CA):
The Chairman yields back. We'll hear next from my colleague from California, Mr. Mullin. You're recognized for five minutes.
Rep. Kevin Mullin (D-CA):
Thank you, Mr. Chair, and thank you all for being here. Let's be honest about what's really being argued here: that any regulation, federal or state, will slow innovation. That's the real claim the majority seems to be making, and I believe it's a false choice. I believe balance is possible. The idea that we have to pick between innovation and safeguards just doesn't hold up. The real threat to US leadership in AI isn't regulation, it's inaction. If we allow AI systems to operate without guardrails, we risk eroding public trust. So when we talk about AI regulation and American leadership, the real question isn't whether to regulate, it's where and how. Congress should focus on closing the clear gaps in oversight. That means targeted legislation, yes, mostly at the federal level, but without blanket deregulation or preemption. Let's legislate where federal action makes sense and let states continue innovating and leading where appropriate, especially in protecting democracy.
For example, one of the clearest gaps that already exists is transparency. Right now, individuals often don't know when AI is making a decision about their lives, whether it's a loan, a job interview, or how their car responds on the road. The public has a right to know when AI is being used, what data it relies on, and whether it's safe or not. That's why I am focused on legislation on transparency and autonomous vehicle safety. My bill addresses a specific risk: AI systems on public roads operating with very little public disclosure. But the principle behind it applies much more broadly. The public deserves to know whether an AI system that risks life and property is safe. To me, this will speed adoption of the best technologies out there by giving the public confidence, and actually grow that sector. So Ms. Kak, in your testimony, you mentioned the transparency crisis in AI. What would strong, enforceable transparency requirements actually look like in practice? How can we ensure they're more than just a box-checking exercise?
Amba Kak:
Thank you so much for that question, Representative. I actually think that this industry in particular, the AI industry, derives its power from structural forms of obscurity. These systems are very complex, they have this sort of black-box quality, and that's where they derive their power. And so in that context, even as I believe that transparency is the bare minimum, it is the necessary first step, and it's really heartening to see that that is where states have really taken leadership. Since Colorado was brought up, I do want to say two quick things. Firstly, I think the assertion that the Colorado bill is in some ways a continuation of the EU model is not grounded. In fact, the Colorado bill puts in place baseline disclosures in high-impact settings and requires firms to do impact assessments to make sure that they can live up to the claims they're making.
This isn't radical stuff; it's the stuff of common sense. And I will also say that state lawmakers in Colorado pushed that law through despite the fact that an army of Big Tech lobbyists continued to argue that transparency, and I'm paraphrasing, but only slightly, was too burdensome an obligation for them to fulfill. So to come back to your question, which is what does a good transparency framework look like for AI? I think we need disclosures across the AI supply chain, not just the deployers, but also the upstream developers, the Big Tech companies that are making this AI and aren't telling us what data they're using to build these systems. So they need to be telling us how the sausage is made, or put together, so to speak. We also need the smaller developers; it's small businesses down the line that also need this transparency from AI companies.
We've seen Big Tech AI lobbyists argue, sort of kicking the can down the road, that when harms happen, it's the responsibility of these smaller firms. But the smaller firms don't have the information they need to be able to know why these harms are happening and to remedy them when they do. And finally, I want to make a quick point, because we talked about Stargate and infrastructure. We also need transparency on the infrastructure side of things. AI data centers are proliferating, but they're failing to report basic information on resource consumption, on power usage, on water consumption. And state lawmakers are really speaking up, just in time, to say that we need transparency in this domain. We can't let companies use the claim of AI innovation to run wild.
Rep. Kevin Mullin (D-CA):
Thank you for that, Ms. Kak. And I believe to lead on AI, we need to encourage innovation and we need to ensure this technology is safe, fair, and accountable. I'm committed to working with my colleagues, and that includes across the aisle, to ensure that we strike that balance. And with that, I yield back.
Rep. Gus Bilirakis (R-FL):
Thank you. The gentleman yields back. Now I'll recognize Mr. Bentz for his five minutes of questioning.
Rep. Cliff Bentz (R-OR):
Thank you, Mr. Chair. And thank you for this most interesting opportunity. I view AI, as a novice, as a window into the answers to questions we've struggled with for ages. And to that end, the question about consumer protection is interesting and serious, and I'm happy for our discussion of it. But it seems to me that, as I look at the billions and billions and even trillions of dollars being invested in this space, those who are doing so have to have some way of funding it. That's why we see these models based upon taking advantage of the consumer. And so if we were really serious about this, we'd be talking about funding and how not to drive those who are trying to pay for that which they're doing toward the type of consumer harm that we're trying to protect against. But what's really interesting to me is what you guys, as experts in this space, think is the most correct way to approach how we're going to manage the ideas that this massive investment's going to create.
Because the truth of it is, that's what everybody's racing for. I used to ask, well, what good is AI? Well, what good AI is, is looking through that window and seeing the answers to how we cure cancer, how we do all of these things. And then the people who reach that first, patent those ideas, and make money from them, that's the idea, isn't it? That's what's driving everybody's billions and billions of investment. And so, getting right down to it, if we're going to continue to pay for this investment through damage to the consumer, how are we going to keep up with China, who seems to be dumping all kinds of money into it? Do you guys have better ideas about how to invest? I'll start with you.
Marc Bhargava:
Sure. The US obviously relies on the free market, and it points the investment to where it will actually have the most impact. So right now, areas where we're seeing a lot of automation are especially in tasks that can be repeated over and over again. Take bookkeeping, for example: 80% of bookkeeping now can be automated with AI; advanced accounting, 20%; some of the call center workflows and tasks, it's 50%. So there is a justification for this massive investment. It's that over $15 to $20 trillion might be created using AI in the next few decades. And we're already starting to see the impact of this automation in the $16 trillion services industry globally. It's very similar to the cloud industry, honestly. About a decade ago, folks were asking, why are we investing so much in cloud? What will be the output? I think it's very similar in AI today, where we're seeing massive amounts of automation being created by AI, which is freeing people up to really work on the harder tasks versus the more repetitive ones. So I think it is a good investment.
Adam Thierer:
Thank you. Moving on down the line. Well, I think it's important we get our policies right, our regulatory framework. Let's talk again about how Europe did not, and they managed to have a massive outflow of capital and investment, because it followed the workers that also left. And the firms that came here to invest: a huge number of the great tech companies that are here today came from other countries and developed here. And that's what we need to have more of, with private investment following the $500 billion investment President Trump announced with leading AI developers the first week in the White House, Project Stargate. That's huge money. Nobody has money like that on the table in Europe today, or even in China, where we're something like 12 to 14X over China in private venture capital.
Rep. Cliff Bentz (R-OR):
And I want to move to Ms. Kak, because I'm anxious to hear your thoughts on this also. But the fact that we're able to stack up that kind of money leads me to wonder about access for the smaller players. But regardless, the question really was, is there some better way to fund what we're doing?
Amba Kak:
Thank you, Representative. Just because you mentioned the curing cancer example, it's interesting that AI company CEOs use that as their flagship example for why we should be investing billions of dollars, including public money, to build out AI infrastructure. But the receipts don't exist yet. We're being told that this form of superintelligence is going to bypass scientific hurdles, but we don't really know how. But maybe even more concerningly, if we want best-in-class AI, we need to have best-in-class research infrastructure to begin with. And so on the one hand, we're talking about AI curing cancer; on the other hand, we're seeing NIH subject to $4 billion in cuts, when in fact the main focus of NIH is cancer research. So I do think that for the US to lead in AI, we need a strong foundation, and I'm worried that we're sort of walking back some of the progress we've made there.
Rep. Cliff Bentz (R-OR):
Thank you. Chamber of Commerce.
Sean Heather:
I don't think there's a better model. Europe doesn't have any private investment. Part of the Draghi report was that there is not the incentive to invest in Europe, and that the regulatory frameworks that drive investment aren't there in Europe to support risk-taking. Obviously, we do rely on private investment in the United States because we incentivize risk-taking here. China uses a largely public funding model, so I'm not sure I know of a better model than the one we've come up with.
Rep. Cliff Bentz (R-OR):
Thank you so much. Yield back.
Rep. Gus Bilirakis (R-FL):
The gentleman yields back. Now I recognize Ms. Schrier for her five minutes of questioning.
Rep. Kim Schrier (D-WA):
Thank you, Mr. Chairman. And thank you to all the witnesses today on this important topic. We are all excited about the benefits of AI, and yet we should all be very concerned about the potential dangers it poses. And yes, absolutely, Big Tech needs certainty about regulation, but that should not be in the form of a guarantee of no regulation for 10 years. Just look at the damage that social media has done to children and to society because of a failure to deal with the algorithms that elevate clickbait and outrage and conspiracy theories. State laws exist to protect consumers. And now Republicans want to prevent states from issuing these protections on any product or practice or system that uses artificial intelligence. Just last week, they slipped in a few sentences into their massive tax bill that placed a 10-year ban on the enforcement of any state
law meant to protect consumers from potential and already very real dangers of AI. Texas, Utah, Florida, California, and Virginia already have laws that protect their residents. Here are some real-world examples we discussed: transparency, social media algorithms, deepfakes, AI-generated child pornography, data collection, targeted advertising, virtual assistants or companions like Facebook's chat companions that we discussed, over-reliance on AI for interpretations of X-rays and MRIs, particularly when, and I'm a pediatrician, there aren't pediatric standards that would make that safe, and automated insurance claim denials like those already used by UnitedHealthcare and others to delay or deny care. And Congress just passed the Take It Down Act. I'm thrilled about that. It forces social media platforms to take down non-consensual, real or deepfake, sexually explicit images within two days of a victim asking for them to be taken down. But that is after the fact, and we need to do whatever we can to protect people beforehand if possible.
But the AI moratorium that Republicans sneaked into their tax bill would, in yet another way, hurt Americans by preventing any state from providing even greater protections against AI child pornography or other AI products that hurt our kids. In Washington state, we actually have a task force, and we're a tech state. We have a task force that studies the risks and benefits of AI, and their first recommendation was to strengthen protections against child sexual abuse material created with AI. The stakes are so high, and this technology is moving so fast. Three months is a long time; 10 years is an infinity. And a 10-year moratorium on AI regulation means that no state would be able to regulate anything. No state would be able to enforce any of this, including protections for children. So Republicans are working to make this a reality even after we heard from parents and experts just a few weeks ago on the harms of giving Big Tech unfettered access to children. We need reasonable standards for AI and data privacy, not kowtowing to Big Tech's requests to simply not be regulated.
So I want to urge my Republican colleagues to stand up for their constituents. They're doing this in the wrong order. First pass essential national protections, and then deal with preemption. We are here to work with you. This is a common goal. And by the way, the Kids Online Safety Act, which is so basic, hasn't even made it to the floor yet. And so public confidence is understandably not there. So in the absence of federal regulation, my constituents need and want state protections. Ms. Kak, thank you for your comments, and I share your concerns, in addition, with respect to scientific research and defunding NIH. You mentioned that simple transparency is the bare minimum, and you answered questions about how to do that. I was wondering, what are the next protections you would recommend that we take up urgently?
Amba Kak:
Thank you, Representative. I mean, honestly, there's a whole laundry list of what we need, and much of it depends on the sector that we're looking at. But if I had to, I would say, just as deepfakes have sort of come to the top of the list in terms of behavior that should never be allowed, there's a list of similar kinds of AI abuse and exploitation that should be subject to bright-line rules that are easily administrable and just put certain practices off the market immediately. We also need kind of nose-to-tail accountability. And what I mean by that is making sure that AI companies, both the biggest players that exist at the foundation model layer, but also the deployers that are using these systems, are subject to, like you said, baseline transparency, but also backing up the claims they make. Do these systems work as they should? What are the errors? Impact assessments is one frame that's used. And yeah, I think much more is needed. Apologies.
Rep. Gus Bilirakis (R-FL):
Thank you for your comments. The gentlelady yields back. Thank you. Mr. Fry, you're recognized for your five minutes of questioning.
Rep. Russell Fry (R-SC):
Thank you, Mr. Chairman. I'm always amazed by this country. In the 18th century, we revolutionized agriculture and created the cotton gin. In the 19th century, we harnessed electricity; in the 20th, we soared through the skies, created nuclear power, and split the atom. And now we've got AI, which is such a tremendous opportunity, I think, for this country, but it's also more disruptive and powerful and far-reaching than anything we've ever seen before. It's not just another tool; it's kind of an infrastructure capable of reshaping our industries and the way that we operate, accelerating scientific discovery, as Ms. Kak talked about, transforming national security, and redefining the global economy. And once again, I think we stand at the forefront. I think it was mentioned earlier that we have a competitive edge now, but it's not guaranteed that we would have that competitive edge in the future. And so it's incumbent upon us in this committee and this Congress to understand that framework and to make sure that we maintain and enhance that competitive edge. Mr. Heather, you've heard a lot today about the EU, and I think, as was mentioned by Mr. Thierer, about the Brussels effect. In your opinion, where do you think that they went wrong in their regulations specifically, and what lessons can we learn from their mistakes so that we don't repeat them here?
Sean Heather:
So as I said in my testimony, I think Europe prides itself on its kind of rush to regulate and wants to be the first to market to regulate. And I think they use that as a kind of soft power and try to go around the rest of the world and get them to emulate it. And one of the things that they don't do a very good job of is sitting back and evaluating how their existing laws are working and functioning, identifying where the gaps are in those laws, and where they might need to fill in those gaps with regulation. And they didn't do that in their process to create the EU AI Act. The other thing they did was, because of this precautionary principle, which is kind of a philosophy that they have to get out ahead and prevent any future harms, even as those harms may not even be real.
They may only be theoretical. They decided that they want to classify lots of AI applications as being high risk. So they have overclassified what AI applications are high risk. And I hear a lot of people here talk about Big Tech. When I listen to Mr. Bhargava speak about what AI could do for nurses to make their jobs easier, when I hear about how it can improve accounting functions, when I hear about what it can do to make call centers work easier for consumers, none of that is Big Tech. Those are going to be companies who are going to be deploying AI technologies that are being built not by Big Tech companies, but by medium-sized companies and small-sized companies. So there's a lot of focus here on Big Tech, and Big Tech is certainly a key piece of the ecosystem here. They're obviously critical on the infrastructure side of the work that supports AI development, but the actual AI algorithms that get deployed are going to be used by businesses doing B2B work, not just B2C work. And this EU model that they've put in place is going to really hold Europe back from being competitive around the world.
Rep. Russell Fry (R-SC):
Do you think that we are at risk of creating a permission culture, if you will, where AI innovations need permission or prior approval, or compliance with strict regulations? I mean, that seems to be the European model, right? That you need their permission in order to do something. Are we at risk of doing that here?
Sean Heather:
I don't know that we're at risk of doing that here yet, but that certainly is the path we see the states walking down, and certainly, I think, the path that Europe leans to. Interestingly enough, when you listen to civil society groups in Europe, their biggest criticism is actually the role of AI being used by the government in Europe. European governments actually have the ability to use AI technologies for surveillance purposes and these kinds of things that are not being disciplined by the EU AI Act. And so I've heard lots of criticisms by the civil society groups that essentially some AI technologies are going to be okay for the government to use, but not okay for commercial use. That kind of disparity also, I think, creates problems and challenges, but those are some of the things that we see out of civil society groups in Europe.
Rep. Russell Fry (R-SC):
Fair point. Mr. Bhargava, on Monday, the Wall Street Journal published an article titled “The Tech Industry is Huge and Europe's Share of it is Very Small,” and it concludes that, quote, a big reason why Europe is now behind can be summed up as a lack of speed. Entrepreneurs like you, and companies that you invest in, are slowed down by the maze of regulations in Europe and even in some states. And according to a survey cited in the same article, European businesses spend 40% of their IT budgets on complying with regulations, which is astronomical to me. In your view, what would happen if the US, or the states, adopted that same approach that they've got in Europe?
Marc Bhargava:
Yeah, I'll take just the three seconds here. The models are changing every three to six months, so this is an industry where you can't afford to fall behind. If you fall behind even a matter of months, you're behind in a pretty large way from a technology perspective. And so that's something I hope the committee takes into account.
Rep. Russell Fry (R-SC):
Thank you for that, Mr. Chairman. I yield back.
Rep. Gus Bilirakis (R-FL):
Thank you. Thank you, Mr. Fry. Now I recognize my fellow Florida Gator. Go Gators. Ms. Lee, you're recognized for your five minutes of questioning. Thank you. I had to get that in. I've got my tie on today.
Rep. Laurel Lee (R-FL):
A great day for the Gators and at the Capitol today. Thank you, Mr. Chairman, and thank you to our witnesses for being here today to help us shape a thoughtful, forward-looking approach to AI policy. Artificial intelligence is not just the technology of the future. It is already transforming the way that we live, work, and govern, and it is reshaping nearly every sector of our economy. The question before us is not whether to act, it is how to act wisely. So as policymakers, we have two responsibilities: one, to protect the public from real risks, and two, to ensure that American innovation continues to lead the world. Those goals are not mutually exclusive. In fact, the right policy framework can achieve both. So I appreciate you all being here today to help us strike that balance. I'd like to begin with you, Mr. Bhargava, and pick up on one of the elements in your testimony about policy frameworks, specifically this: What is your view on requiring AI developers to use standardized documentation tools like model cards to disclose purpose, limitations, and training data?
Marc Bhargava:
Yes, absolutely. The model card concept, I think, originated at Google, which created the transformer paper and has been involved in the space as well. I think those sorts of frameworks are actually very helpful, and it's also great that they're coming from industry. They're coming from people who are in the weeds, building the models, testing the models, and who can have insights that really make sense. There are four general areas where I think frameworks could be created, and I'd love to work with Congress on it. One is looking at how you gather data, so there can be ways to disclose that and have the transparency that my colleague here talked about. The second is evaluating how models are trained and having folks report the framework around that. The third is the outputs and testing of models, which is both algorithmic testing and human testing.
So you could have different AI models testing each other from third parties, for example. This would save costs and be an easy way to create a framework, a guideline, and in addition, I think we will always need humans in the loop as well. And then fourth, having companies, including startups, write up the downstream effects of their technology, really showing their thoughtfulness. Everything I'm recommending here is something we do at General Catalyst, a company of 200 or so people, on all of our investments. So I do think there are frameworks that can be put in place, but two points are really important: one, the framework should come from the federal government, not the state governments, so it can be consistent and easy to understand; and two, the frameworks have to be developed with industry. I think one of the things Europe has not done enough of to date is create these with the actual market participants, with startups, with entrepreneurs. And so we would welcome having more discussions with General Catalyst GC Institute to come up with responsible frameworks, but also ones that our startups can get behind.
Rep. Laurel Lee (R-FL):
What should policy makers be careful not to do when designing transparency requirements, particularly for early stage or open source developers?
Marc Bhargava:
They should make sure to do it with the startups themselves and with the companies. What we do not want them to do is create it in a vacuum, throw in whatever words sound good, beat up Big Tech, et cetera, and just make a statement. This is about actually creating the right policy. So I think talking with the startups, with the market participants, with people in industry is really, really important. It's not a political statement. We all want to get to the right framework, and getting the input from startups, I think, would be extremely helpful in doing that.
Rep. Laurel Lee (R-FL):
Getting back to that model card disclosure concept, is it your view that those types of disclosures should be voluntary, required only for certain high-risk applications, or broadly mandated across the industry?
Marc Bhargava:
I think they should be required as long as they're minimalistic. The requirements have to make sense. You can't ask for these massive audits on a model and then, when our startups comply, not get back to them for 12 months because you don't really know how to evaluate the material they sent you, for example. So I think these can be required frameworks at the federal level, but they have to make sense, and there has to be an agency there as well, and people in government on the other side, having a conversation about what these frameworks should be. It needs to be a real partnership approach, and it needs to be simple.
Rep. Laurel Lee (R-FL):
Mr. Thierer, let me move to you. I'd like to get your thoughts on this. Do you support directing NIST to develop voluntary AI standards or best practices similar to what it did for cybersecurity?
Adam Thierer:
Yes, and the good news, Congresswoman, is that NIST has already done a lot of that heavy lifting, and this has been a very bipartisan and widely agreed-to process, a multi-stakeholder process, as it's called. A lot of different players came together and formulated a really good set of standards for AI risk management and for cybersecurity and privacy. That's important work, and I think Congress can obviously build on that and talk about how to go beyond it with certain types of policies that were considered last session and will be considered again, I'm sure.
Rep. Laurel Lee (R-FL):
Thank you, Mr. Chairman. I yield back.
Rep. Gus Bilirakis (R-FL):
Thank you. I appreciate it very much. Now I recognize Mr. Veasey for his five minutes of questioning. Oh, Ms. Clarke just walked in. Okay. Alright. Ms. Clarke, you are recognized for five minutes of questioning.
Rep. Yvette Clarke (D-NY):
Thank you very much, Mr. Chairman. We are all on roller skates today with so many hearings taking place. But let me thank our witnesses for your expert testimony here today and thank our Ranking Member Schakowsky. I'm glad to see this subcommittee gathered to discuss regulations for artificial intelligence and the future of US leadership in this space. I am, however, a bit perplexed at the timing. It seems to me that we would've been better served having this discussion before our Republican colleagues voted to advance a 10-year moratorium on AI laws as part of the big beautiful bill to line the pockets of their Big Tech billionaire benefactors at the expense of Americans' health, personal freedom, privacy, and safety online. A 10-year moratorium seems wildly irresponsible given the rapid pace of technological advancement, especially in the field of artificial intelligence. It is particularly disappointing to see such a provision advance in light of this administration's consistent efforts to undermine the few existing guardrails protecting consumers, such as the illegal attempt to fire Democratic FTC commissioners, the attacks on our federal workforce, and the degradation of independent agencies' independence.
Further, while my colleagues across the aisle move forward with this shortsighted moratorium, they've not taken any meaningful steps to fill the vacuum this moratorium would leave in its wake with common-sense bipartisan legislation to protect Americans' privacy and create a regulatory framework for artificial intelligence in this country. The age of artificial intelligence is upon us, and this Republican-controlled Congress needs to step up to the plate. We are well overdue for a comprehensive federal data privacy standard, and we have not seen this committee take any steps this Congress toward the bipartisanship required to do such sweeping legislation, which would be foundational to any overarching AI legislation. Fortunately, while Republicans have used their time in power to quash any progress made on data privacy and artificial intelligence, states across the country are stepping up to fill the void and protect consumers. Unless and until Congress acts, state laws are the only recourse American consumers have for protecting themselves and their data from Big Tech and the harms caused by artificial intelligence. While artificial intelligence offers exciting opportunities and innovations, without the proper protections in place, the potential for harm is too great to ignore. Ms. Kak, can you explain for this committee the importance of federal data privacy legislation in the larger policy discussions around artificial intelligence?
Amba Kak:
Thank you, Ms. Clarke. I want to first just agree with your characterization of what's happening right now with this proposed moratorium, which is that we're proposing to wipe the slate clean at the state level without anything in its place, just the reassurance that Congress will act and that federal rules will come, despite a record that does not inspire confidence. So on the question of a federal privacy law, and we've said this for a long time: data privacy law is the foundation of AI regulation. AI is supercharging already bad and corrupt incentives for unchecked commercial surveillance, the kind of free-for-all where data collected in one context is used in another without asking for permission, and leaky chatbots that routinely take in sensitive data on the one hand and leak it out the other in accidental but routine ways. It is another area where I think we need to be very grateful for the fact that states have stepped up to the plate.
But I agree with you, we need a federal floor, particularly to set the terms on which data minimization happens. Our personal data is not a free-for-all, and we're already seeing Big Tech companies use AI as a justification to proliferate these kinds of bad practices. Just very quickly, I want to also say that a federal privacy law would have pro-competitive effects. It would limit what we're seeing, which is aggressive strategies for acquiring companies and acquiring data sets, shoring up Big Tech's advantage and shutting the door behind competitors. So privacy would be a step in the right direction for consumers, but also for competition.
Rep. Yvette Clarke (D-NY):
Well said, Ms. Kak, with that, I'm going to yield back the balance of my time.
Rep. Gus Bilirakis (R-FL):
The gentlelady yields back, I appreciate it. I'll recognize Mr. Evans. Oh, is Mr. Kean here? Oh, Mr. Kean. Okay. Mr. Kean, you are recognized for your five minutes of questioning.
Rep. Thomas Kean (R-NJ):
Thank you, Mr. Chairman, and thank you to our witnesses for being here today. Mr. Bhargava, in your view, what can we do to make federal AI policy future-proof, or at least future-resistant, to enable innovation even as AI technology continues to progress?
Marc Bhargava:
Yes, I think having clear guidelines is the best way to go. The technology, as I mentioned before, continues to change, so every three to six months models can do more and more. For example, last November and December, Google released Deep Research. Then there were new logic and reasoning models from OpenAI. There were new models from Anthropic. And so the models themselves and what they can do keep advancing. They're moving to be more agentic. They're moving to use voice. The technology is changing very, very quickly, and it's hard to keep up with. So I think creating these guidelines is what's most important. What I mean by that, for example, is a transparency guideline that could hit on what the data sources are, how we train the model, how we test the model. Also having red teams come in after there's an output and try to make it do something bad, which is a sort of human testing of it. So putting these processes and guidelines in place is the way to have a framework at the national level, rather than trying to get too into the weeds, because the technology is changing every three to six months. And I think putting these guidelines and processes in place at a federal level, with input from the entrepreneurs and the founders, is the best way to have an approach here.
Rep. Thomas Kean (R-NJ):
And given your background as a startup founder and venture capitalist, can you explain the practical impacts a patchwork of state regulations has on innovation?
Marc Bhargava:
Absolutely. I myself am a founder. I started a company in the digital asset space, which was later acquired by Coinbase, and there we had to operate across different states. It was very hard for us to compete with a larger company because we didn't have the lawyers or the compliance teams to be able to look at things on a state-by-state basis. It was very hard to actually compete in that sense. And so having a single national framework that's very clear and has input from startups is a much better approach to innovation in general, not only specific to AI, but certainly AI is top of mind. It's the fastest-moving technology today. And this idea of competing with China on AI, it's not just AI, it's all the things AI enables. It enables better healthcare, it enables better transportation, folks are talking about self-driving. So this isn't about competing with China in one place. This is competing with China in multiple industries. We absolutely need to stay ahead, and the only way to stay ahead across all of these industries is to have clear, transparent guidelines at the federal level with input from startups and experts in the field, not simply written by think tanks or politicians or others.
Rep. Thomas Kean (R-NJ):
Okay. And Mr. Heather, the EU AI Act entered into force in August 2024, and compliance deadlines are approaching. Can you discuss how the European Commission and the private sector are implementing this law, and are there any concerns that we should be aware of?
Sean Heather:
I would say buckle up. I don't know that the Europeans, similar to when they stood up GDPR or the DMA, the Digital Markets Act, even know how to police the rules they've written. There are a lot of ambiguities. A lot of determinations are going to have to be made in real time, but the expectation is that the companies will be in compliance. So this creates a morass for even the largest companies in understanding what the rules of the road actually are, and it makes it virtually impossible for small and medium-sized enterprises to be ready to be in compliance on day one.
Rep. Thomas Kean (R-NJ):
Thank you. And Mr. Thierer, in your testimony, you discussed a moratorium as a way for Congress to set AI policy. Can you elaborate on how this moratorium on state AI regulations would work in practice? And is there a precedent for such a step?
Adam Thierer:
Yes, Congressman, indeed. I mentioned the Internet Tax Freedom Act of 1998, and there are other types of moratoria that have been utilized by Congress to deal with situations like this. It gives breathing room and a learning period where we can actually figure out what works. It would basically, as the current moratorium stipulates, cover most things having to do with algorithmic models and automated decision-making systems, but it would leave room for other types of general-purpose, or rather generally applicable, laws that cover technology more broadly. So the key thing here is technology neutrality and making sure that we don't have this voluminous, overlapping patchwork from state to state.
Rep. Thomas Kean (R-NJ):
Thank you all for your testimony. I yield back.
Rep. Gus Bilirakis (R-FL):
The gentleman yields back. Now I recognize Mr. Veasey for his five minutes of questioning.
Rep. Marc Veasey (D-TX):
Mr. Chairman, thank you very much. One of the things that I've been interested in, and obviously concerned about, and I do think that there are some benefits to it, but we have to be careful, is facial recognition technology. Mr. Chairman, I ask unanimous consent to insert this article from the Washington Post titled “Police secretly monitored New Orleans with facial recognition cameras.”
Rep. Gus Bilirakis (R-FL):
Without objection, so ordered.
Rep. Marc Veasey (D-TX):
This article, which includes findings from an investigation the Post conducted over two years, talked about how the New Orleans Police Department secretly relied on facial recognition technology operated by a private company to scan streets in search of suspects. The use of this technology was inconsistent with a city ordinance passed in 2022. I had some issues with Clear, the tool that's used to get in and out of airports, and that seems to have been cleared up now, so I understand this space. But given the limitations of facial recognition technology, we can only hope that New Orleans will investigate this infraction of the ordinance and publicly disclose how many people were subject to any sort of false arrest due to its use. We're going to see more and more police departments using this tool. Where I'm from, Fort Worth and Dallas have adopted rules on the use of facial recognition. I wanted to ask Ms. Kak: if this 10-year moratorium on state laws goes into effect, it is important that we establish clear federal regulations to govern the use of AI in law enforcement. Do you agree with that?
Amba Kak:
Thank you, Representative. I actually did want to call attention to the fact that we have spent a lot of time at this hearing talking about European law, what it is and what it isn't. And maybe we all agree that it is imperfect, but respectfully, I would really like to ask: what does that have to do with state laws? I would argue that they have nothing in common. Where we're actually seeing states step up to the plate and act is on weeding out bad apples, putting in place safeguards in the most high-impact settings, including criminal justice, including immigration, including education, where the stakes are really high. They're making sure that AI systems with basic inaccuracies aren't proliferating, particularly when the impacts of those errors fall on people's basic civil liberties. So I agree with you. I think that states have stepped up to the plate, they have responded to their constituents, and this proposal would really wipe away a lot of that progress without anything in its place.
Rep. Marc Veasey (D-TX):
What do you think that Congress can do to make sure that these technologies are used ethically and that they're being used the right way without infringing on civil liberties? What do you think this body should do?
Amba Kak:
We have a decade of evidence that provides a very clear framework, and that begins with transparency. It includes clear accountability, so that firms that are using these systems need to go through proper vetting to make sure that they work as they are intended to work and that they don't misidentify people. You mentioned a personal example. The FTC recently cracked down on Rite Aid's use of facial recognition technology that was routinely misidentifying people in grocery stores and subjecting them to unwarranted scrutiny from law enforcement agencies. So yes, I think it's very clear what we need to do. The problem has been one of political will to act in Congress.
Rep. Marc Veasey (D-TX):
And I think with all of these technologies, one of the things that we have to take into consideration is that we want to be the leaders in any sort of new technologies that are coming to the market. We don't want the Chinese to be the leaders in these areas. What can we do to make sure that there's consumer confidence in these areas of facial recognition and AI technology as we move forward, to make sure that civil liberties are protected, but also to make sure that on anything that is being deployed, the US is the leader?
Amba Kak:
Absolutely. I think we need to be incentivizing a race to the top, not the bottom. And one thing that we can be guaranteed will deter private investment is the proliferation of snake oil salesmen, of these kinds of bad apples that no one wants to be in business with. And to be very candid, that is what states have stepped up to the plate to do. It's essentially going after the worst actors in the market and making sure that this is an industry that inspires confidence, not one that's full of scamsters and snake oil salesmen.
Rep. Marc Veasey (D-TX):
Yeah, no, that makes a lot of sense. Thank you, Mr. Chairman. I yield back the balance of my time.
Rep. Gus Bilirakis (R-FL):
The gentleman yields back. Now I recognize the vice chairman from the great state of Pennsylvania, Dr. Joyce, for his five minutes of questioning.
Rep. John Joyce (R-PA):
Thank you, Chairman Bilirakis, and thank you to our witnesses who have agreed to come here today to testify. Every day, every hour, the development of artificial intelligence is transforming the way that we consume information, the way that physicians treat American patients, even how we travel. As chair of the privacy working group, it's clearer to me than ever that we need a solution to the patchwork. What do I mean by patchwork? States are introducing AI laws left and right, and it is our responsibility here at the federal level to ensure that businesses can continue to innovate while complying with this patchwork of laws. This is especially true when it comes to AI's involvement in the healthcare field. AI has the ability to elevate the care of American patients, the care that they expect, the care that they deserve, but we must ensure that the physician-patient relationship is not left behind. Mr. Bhargava, I'm interested in how innovative AI products and services can improve the lives of my patients and my constituents. Can you highlight some of the General Catalyst investments in innovative American AI startups that this committee should take note of as we consider AI legislation?
Marc Bhargava:
Absolutely. And thank you for asking the question. Healthcare is a massive industry, even larger than tech. At General Catalyst, we've invested in over a hundred healthcare companies over the last decade or so, and so we are one of the leading venture firms in this field. One or two examples I could provide for you: one is AI scribing. Doctors and nurses spend a lot of time taking notes and then doing manual data entry on those notes. That can be automated with responsible AI systems and software that also need to be HIPAA-compliant and reflect responsible innovation. But that's just one of many tasks. The second is where there's a shortage of healthcare workers: how do we get them to be AI-enabled? How do we get rid of the most menial tasks so they can focus on the harder parts, which many times means chatting with patients, relating to them, and solving more difficult problems?
And so we have many companies, Hippocratic, who's here in the audience today, Comme, and others, who are tackling these issues of how we automate more and more so that our healthcare providers can give better service with these AI-enabled tools assisting them. As we back these companies, we make sure the responsible innovation part is there. Out of every industry, healthcare is the toughest place, the one where you really don't want to mess up; it has the most ramifications. So it is a good example of both where there could be a ton of innovation, and at the end of the day consumers' lives can be saved, but also why I would urge Congress to move faster this time than in the past on creating a framework, because lives are at stake here, as well as our competitive advantage against China.
Rep. John Joyce (R-PA):
As a physician, I clearly recognize that lives can be at stake. Do you feel that the patchwork that currently exists as far as AI legislation from the states puts patients' lives at risk?
Marc Bhargava:
I don't believe the patchwork is preventing deaths, or that it's good for the companies that are operating to try to make healthcare better. I think we need a federal solution instead. I think the states don't really have the capacity to keep up with all of the different AI innovations and technologies and models, and for the startups themselves, it's very difficult to act in a way that complies with all these many states. It would be much, much better to have federal-level regulation that would then allow this AI technology to permeate even faster and to save more lives quickly.
Rep. John Joyce (R-PA):
Thank you. Mr. Heather, in many respects, AI is already regulated by many sectoral laws that apply to non-AI products and services, like the FDA's oversight of algorithms that healthcare companies use in care, particularly when it comes to medical devices. Can you explain how the EU approached existing sectoral laws, such as those for healthcare, before it passed the EU AI Act? Did the EU find gaps in the current law that needed to be addressed by their new legislation?
Sean Heather:
I'm not aware that the EU identified any gaps with regard to how medical devices or pharmaceuticals reach the market where they have been somehow enhanced or built with AI models. And so I think this is a really important distinction. When people are talking about a moratorium, there are rules on the books. I heard the congressman from Texas talk about surveillance, and I heard him also say that there's a city ordinance in New Orleans that would've prohibited it. That sounds to me like an enforcement issue, not an AI issue. So I think we should be thinking about products and services and outcomes, not necessarily the technology that brought you that product or service or outcome. The ability to use existing laws and regulations to enforce good, strong outcomes is not what this conversation is about. It's about getting out ahead and trying to discipline a technology that has lots of opportunity behind it, in ways that will have unintended consequences for our competition and competitiveness in the global economy.
Rep. John Joyce (R-PA):
Mr. Heather, I thank you for the answer. Mr. Chairman, my time has expired. I yield back.
Rep. Gus Bilirakis (R-FL):
Thank you. The good doctor yields back. Now we'll recognize Mr. Evans for his five minutes of questioning.
Rep. Gabe Evans (R-CO):
Thank you, Chairman, Ranking Member, and thank you, of course, to our witnesses for coming. I'm from Colorado, and last year we became the first state in the country to enact a bill at the state level to regulate AI. I'll read you a couple of quotes here. One: “This law creates a complex compliance regime for all AI developers and deployers doing business in Colorado.” Another: “I'm concerned about the impact this law may have on an industry that is fueling critical technology advancements across Colorado for consumers and enterprises alike.” And finally: “Government regulation applied at the state level in a patchwork across the country can have the effect to hamper innovation and deter competition in an open market.” I agree with those statements. They actually came from my governor, Jared Polis, who signed Colorado's AI law and then came out in support of the federal moratorium that's being discussed here today. Among other things, I voted against the law when I was in the state legislature because I agreed with those statements, and I saw it as dampening the ability to innovate and bring jobs to Colorado and fostering that patchwork across the country. So my first question, Mr. Thierer, is going to be to you. Can you just expound on why the Colorado AI law, and a patchwork like it, is so problematic, and why Congress needs to be the one to act to address this emerging patchwork of rules?
Adam Thierer:
Absolutely, Congressman, thank you for that question. It seems that Governor Polis is having some buyer's remorse from signing this bill, and his signing statement read more like a veto statement, as you just indicated. I think the reality is that they realized the complexity of this law would create enormous burdens. And when they subsequently tried to study it the way they should have studied it before it was passed, they realized they didn't have a lot of the answers to complicated questions about exactly how we define developer, deployer, integrator, consequential decision, all of these things, or even the term artificial intelligence, which is being defined differently in different state bills. If we can't even define the basic term we're here today to discuss at the state level, then that's a patchwork that's going to create huge problems for small businesses.
Rep. Gabe Evans (R-CO):
Thank you. Going to Mr. Heather, kind of following up on that train of thought: from the Chamber, do you have any data on either the fiscal or the revenue impact that a patchwork of contradictory state laws might have in this field? And on the other side, do you have any fiscal or revenue data on the benefits a national, general rules-of-the-road policy might have?
Sean Heather:
I don't know that I have data as it relates to government expenditure, but I can tell you, as it relates to the private sector, that, as was mentioned before, there's an estimate of 330,000 euros just to be able to comply with one element of the EU AI Act. We have over a thousand AI-related bills that have been introduced in state legislatures or in local units of government. To the degree that some of those make it across the finish line, there is a compliance cost that'll have to be met by each company that wants to do business in those markets, because these laws are not necessarily enforced uniformly across the United States. I don't know that there's good data out there that shows what it's going to cost companies to comply with the Colorado law, so I think we're in very early days of being able to get those cost estimates.
Rep. Gabe Evans (R-CO):
Thank you. Do you think there is potentially additional revenue that would be generated by industry and by business, by having just general national rules of the road versus a patchwork of state laws?
Sean Heather:
As I said in my testimony, I think there are estimates that suggest somewhere upwards of $16 trillion may be gained by the global economy from AI. All of that will be taxed by governments. So to the degree we unleash AI technologies and that allows us to increase our productivity as an economy, that leads to growth. And wherever there is growth, there's tax revenue opportunity for the government.
Rep. Gabe Evans (R-CO):
Thank you. And in my final minute, Mr. Thierer, back to you. Before this job, I spent 10 years as a police officer, so making sure that we take care of kids is critically important to me. I would venture that I've probably arrested more child abusers than anyone else in this room. As I'm reading through the bill that's being discussed today, I noticed that there were about three pages of exceptions that allow for enforcement of a lot of different things in the AI space, so it's not a complete moratorium. And so the question to you is: given how the legislation is written, can we still continue to keep kids safe?
Adam Thierer:
Absolutely. You can use technology-neutral policies and approaches. And I also just want to stress that under the rule of construction in this moratorium, it very clearly states that the primary purpose and effect is to remove legal impediments to facilitate the deployment or operation of AI systems. This is about furthering the development of these systems, not about blocking the things we need to restrict on the other side through generally applicable safety laws.
Rep. Gabe Evans (R-CO):
Thank you. Yield back.
Rep. Gus Bilirakis (R-FL):
Thank you. Thank you very much, Mr. Evans. Okay, I think we're ready to finish up. I want to thank the panel for being so patient in answering all the questions. I'll ask unanimous consent that the documents on the staff list be submitted for the record; without objection, so ordered. I remind members that they have 10 business days to submit questions for the record, and I ask the witnesses to respond to the questions promptly. Members should submit their questions by the close of business on June 5th. So, without objection, the subcommittee is adjourned.