Transcript: Senate Hearing on The Need for Transparency in Artificial Intelligence
Justin Hendrix / Sep 13, 2023
On Tuesday, September 12, 2023, US Senator John Hickenlooper (D-CO), Chair of the Commerce Subcommittee on Consumer Protection, Product Safety, and Data Security, convened a hearing titled “The Need for Transparency in Artificial Intelligence.”
Witnesses included:
- Victoria Espinel, Chief Executive Officer, BSA | The Software Alliance (written testimony)
- Dr. Ramayya Krishnan, Dean of the Heinz College of Information Systems and Public Policy, Carnegie Mellon University (written testimony)
- Sam Gregory, Executive Director of WITNESS (written testimony)
- Rob Strayer, Executive Vice President for Policy, Information Technology Industry Council (written testimony)
What follows is a lightly edited transcript.
Sen. John Hickenlooper (D-CO):
This hearing of the Subcommittee on Consumer Protection, Product Safety, and Data Security will now come to order. While artificial intelligence has been part of our lives for years and years, its newer forms have now captured, it's fair to say, the world's attention. We're now far beyond the era of asking Alexa to play a song or Siri to dial our spouse. Those are examples of narrow AI. ChatGPT and new generative AI systems can now plan a custom travel itinerary, create artwork, remix a song, or help you write computer code. It's obvious that AI is a powerful technology that will revolutionize our economy. Just like the first car or personal computer, AI is a transformative technology that has both benefits and risks for consumers. That means we have to proceed with intention and care. Our goal today is to identify how we do that. Specifically, we need to begin to help Americans understand AI's capabilities and limitations, to reduce AI's potential risks to consumers, and to increase the public's trust in AI systems through transparency.
The fact that we need to be careful with AI doesn't negate how important it is or the massive potential it has to transform our lives. From helping with your tedious daily tasks to helping doctors properly diagnose and find the right treatments for an illness, the possibilities go far beyond what we can imagine today. We must also confront the fact that AI can be misused by bad actors. AI can be used to make scams, fraud, and cyber attacks more harmful and more effective. Companies developing and deploying AI, we believe, have a role to build safe, secure, and reliable systems that over time will earn the trust of the public. Congress will play a role by setting reasonable rules of the road to inform and protect consumers. The federal government, academia, and the private sector will all need to work together to establish thoughtful AI policy. In April, Senator Blackburn and I sent a letter to tech companies asking how they are adopting the NIST AI risk management framework.
The responses showed how the framework is helping companies build accountability, transparency, and fairness into their products. Today, Senator and I sent a letter to the Office of Science and Technology Policy to stress the importance of developing federal standards to help consumers understand and identify AI generated content. This is going to be more critical for building trust as AI expands into larger and larger aspects of our lives. Several other federal AI initiatives are currently underway, to name a few. The White House has convened leading tech companies, bringing people together to build a shared understanding and a voluntary commitment to build trustworthy AI systems. NIST formed a public generative AI working group to build on its AI risk management framework. Also, the National AI Initiative Office is coordinating a whole of government effort to develop AI safety and transparency guidelines, with input from experts in civil society, academia, and the private sector. We're fortunate to have two NAIAC members as witnesses here today. These are all encouraging steps, but it doesn't mean we're done when it comes to making sure we've created a framework in which AI will be safe and transparent for consumers. The AI powered future comes with many challenges that we can already see.
Building a talented STEM trained workforce, providing efficient computing power, ensuring that we protect consumer data privacy. We know that AI trains on publicly available data, and this data can be collected from everyday consumers everywhere, in all parts of their lives. There are too many open questions about what rights people have to their own data and how it's used, which is why Congress needs to pass comprehensive data privacy protections. This will empower consumers and creators and help us grow our modern AI enabled economy. This issue is complicated. It's going to require bipartisanship to deliver results. Committees across Congress are examining AI's impact on society through different lenses. Each hearing is an invitation for policymakers and families at dinner tables across America to think about how AI will impact their everyday lives. Today's discussion is the next step as we work towards building what ultimately will, hopefully and necessarily, become a global consensus. This committee is well positioned to examine all these important issues with the goal of promoting transparency and the goal of creating an AI system that consumers will have confidence in. I'd like to welcome each of our witnesses who are joining us today. Ms. Victoria Espinel, CEO of the Business Software Alliance, BSA. Dr. Ramayya Krishnan, Dean of the Heinz College of Information Systems and Public Policy, Carnegie Mellon University.
Mr. Sam Gregory, Executive Director, WITNESS. Mr. Rob Strayer, Executive Vice President for Policy, Information Technology Industry Council, ITI. I'd now like to recognize Ranking Member Blackburn for her opening remarks.
Sen. Marsha Blackburn (R-TN):
And thank you, Mr. Chairman. I certainly appreciate that we are having this hearing. This is kind of AI week on the Hill, and we have a Judiciary Committee hearing going on this afternoon. And of course we have our member forum that is going to take place tomorrow and Thursday, so we are pleased to be putting attention on this. AI, as you said, has been around for years, whether it's autocorrect or autofill or voice assist or facial recognition, things that people have become accustomed to using. But with ChatGPT in November, it is like people said, wait a minute, what is this? And of course, generative AI is something that people have turned their attention to, asking, how is this happening? How is it taking place? And Tennessee, my state, is really quite a leader in this field of AI. We have several automotive companies that are investing and innovating with EVs.
We have farmers that are really leading the way in smart agriculture, and it's so interesting to hear some of their concepts. We've got thought leaders, people at the University of Tennessee and also Oak Ridge National Lab, who are pushing boundaries on AI every single day. With innovators like these, the future of AI can sometimes border on the unimaginable, especially as the technology continues advancing at a pace that is more rapid than any other in our recent memory. This swift rate of advancement, however, has caused many concerns. Many of the discussions that I've heard around AI have focused on the doomsday scenarios. While it's important to prevent catastrophic events, we must also not lose sight of the transformational benefits of AI. For instance, in addition to the examples that I've previously mentioned, AI has profound implications for the financial and healthcare industries, two industries that are critical to our state.
That is why any regulatory action from Congress or from federal agencies must balance safety with the preservation of an innovative economy. Our adversaries like China are not slowing down on AI, and we cannot give them any advantage in the deployment of this emerging technology. In fact, earlier this year, Senator Ossoff and I convened a hearing in the Judiciary Subcommittee on Human Rights and the Law where our witnesses discussed the Chinese Communist Party's interest in rolling out AI systems to enhance the regime's ability to surveil their citizens, which brings us here today. This afternoon, we will explore ways to mitigate consumer harm, promote trust and transparency, and identify potential risks stemming from AI technologies. And I'm looking forward to hearing from each of you on these. But before we turn to our witnesses, I wanted to remind my fellow lawmakers that amidst the hyperfocus on AI,
we must not forget about other issues that are as critical to US technological advancement and global leadership. First, for a decade I have worked to bring about a comprehensive data privacy law. As you mentioned, that is something that should be a first step, and I know Madam Chairman is well aware and joins me in wanting to see a federal standard, and it is vital that my colleagues keep in mind the need for that federal privacy standard as we look at AI. In our Judiciary hearings that we've had on AI, everybody mentions the need to have that so people can protect their data. It is virtually impossible to talk about these new and advanced systems without a real discussion about how online consumers will be able to protect what I term their virtual you, which is their presence online. Second, as AI systems require more and more computing power, the need for high performance and quantum computing will become vital. This is why I have already introduced two bipartisan bills on this topic, and I encourage this committee to move on reauthorizing the National Quantum Initiative Act. We need to do that this year. So thank you to each of you for the leadership you have on this issue and for being here as our witnesses, and thank you for the hearing.
Sen. John Hickenlooper (D-CO):
Thank you, Senator Blackburn. Now I'm going to turn it over to the chair of the Commerce Committee, who has a long, long history, probably more direct history with AI than any other Senator: Senator Cantwell from Washington.
Sen. Maria Cantwell (D-WA):
Thank you, Mr. Chairman, and thank you to yourself and to Senator Blackburn at the subcommittee level for holding this important hearing. I think we're demonstrating that just as AI needs to be open and transparent, we're going to have an open and transparent process as we consider legislation in this area. And I want to thank Senator Blackburn for her comments about privacy, because I do think these things go hand in hand: having good, strong privacy protections certainly prevents the kind of abuse or misuse of information that could cause substantial harm to individuals. And I thank the witnesses for being here today to help us in this discussion. I recently was informed about a situation in my state that I found quite alarming. A family in Pierce County, Washington received a phone call; a scammer used AI to spoof the voice of their daughter, telling them that she had been in a car accident and that a man was threatening to harm her if they didn't wire $10,000.
So I can't imagine what this deepfake meant to that family or the concerns that they have. And a recent deepfake image claimed a bombing occurred at the Pentagon, and that fake image sparked a dip in the stock market. DARPA is leading the way on important developments to approach detecting AI generated media, and I plan to introduce legislation in this area. I think that AI, as was discussed by my two colleagues, has amazing potential. I held an AI summit in my state and saw some of that amazing technology already being pushed by the Allen Brain Institute and some of their early technologies, certainly helping in things like climate, in farming, in detecting illegal activities, and helping us move forward in important areas of research. We know that we have choices here. We know we want to continue to empower consumers and make sure that we're stopping the fraudsters, and we want to make sure that we are stopping any misuse of AI and doing whatever we can to make sure that we're protecting Americans' privacy.
So I hope that today's hearing will give us some ideas about how to drive innovation and maintain US leadership in this very important security related technology and the issues of global competitiveness; that we talk and discover ideas about the deepfakes and potential national security issues, and a framework for legislation to protect online privacy and combat discrimination. I know that we need to grow education in general in our workforce, and the information age has already put great transformations in place. The jobs of tomorrow are here today, but the skill levels for people to do them are not. We know that we need to invest more, from the Chips and Science Act, in skilling a workforce for tomorrow. That was before AI; with AI, there is an accelerant on that, and that is why I believe we need something as grand as the GI Bill was after World War II in empowering Americans for new opportunities in this area. So I look forward to hearing the comments from our witnesses, and thank you again, Mr. Chairman, for holding this very important hearing about the potential and challenges facing us. But clearly we need an open and transparent system, just as we did with the internet, so that innovation can flourish. Thank you.
Sen. John Hickenlooper (D-CO):
Thank you, Madam Chair. Appreciate your being here and helping create this hearing. Now we'll have the opening statements from each of the witnesses. Ms. Espinel.
Victoria Espinel:
Good afternoon, Chair Hickenlooper, Ranking Member Blackburn, Chair Cantwell, and members of the subcommittee. My name is Victoria Espinel and I'm the CEO of BSA | The Software Alliance. BSA is the advocate for the global enterprise software industry. BSA members are at the forefront of developing cutting edge services, including AI, and their products are used by businesses across every sector of the economy. I commend the subcommittee for convening today's hearing, and I thank you for the opportunity to testify. Here are two things that need to be done. Companies that develop and use AI must act responsibly to identify and address risks, and Congress needs to establish thoughtful, effective rules that protect consumers and promote responsible innovation. AI has real world benefits. Think about extreme weather events, hurricanes, wildfires, tornadoes, that have affected many states this year. As we know, there are families wondering whether the eye of a hurricane will hit their hometown and whether they will be safe if they stay or if they should pack up and go.
How will they know whether they should leave? And if they do, which nearby destination is the safest to ride out the storm? AI is helping to provide these answers. With AI, weather forecasters are better able to predict extreme weather events, helping people prepare before disaster strikes. And what happens to those families who are in the storm's path? How do they get food in the aftermath of a storm? How do rescue workers know where they need help? AI is helping relief workers anticipate where medical equipment, food, water, and supplies are most needed in response to natural disasters. In the face of extreme danger, AI's predictions can save lives. More needs to be done, however, so that we can see greater benefits, and with thoughtful rules in place, innovation in AI will continue to advance and the responsible use of artificial intelligence will serve society. There has been a wave of attention on AI since ChatGPT launched publicly nine months ago, but this committee began studying the issue in a thoughtful manner years earlier. Nearly six years ago, I testified here about the building blocks of machine learning and artificial intelligence. Chair Cantwell and Senator Young introduced one of the first AI bills in 2017.
We also appreciate the committee's recent work to establish the National Artificial Intelligence Initiative and your request to BSA about how our member companies are using the NIST AI risk management framework to responsibly develop and use AI. The pace of AI development and use has increased significantly since 2017. As with any new technology, there are legitimate concerns that need to be addressed, including the risk of bias and discrimination. This committee is well placed to move legislation that sets rules around AI. The US economy will benefit from responsible and broad-based AI adoption. An important part of facilitating that adoption is passing a strong national law. The countries that best support responsible AI innovation will see the greatest benefits of economic and job growth in the coming years. Moreover, other countries are moving quickly on regulations that affect US companies. The US should be part of shaping the global approach to responsible AI.
The window for the US to lead those conversations globally is rapidly closing. This is what we think legislation should do. It should focus on high risk uses of AI, like those that decide whether a person can get a job or home healthcare. It should require companies to have risk management programs. It should require companies to conduct impact assessments, and it should require companies to publicly certify that they have met those requirements. This will include some concrete steps. I've set these out in more detail in my written testimony, and I hope we have a chance to discuss those. It is important that legislation reflects different roles. Some companies develop AI, some companies use AI. Our companies do both, and both roles have to be covered. Legislation should set distinct obligations for developers and users because each will know different things about the AI system in question, and each will be able to take different actions to identify and mitigate risks. So my message to Congress is simple. Do not wait. AI legislation can build on work by governmental organizations, industry, and civil society. These steps provide a collective basis for action. You can develop and pass AI legislation now that creates meaningful rules to reduce risks and promote innovation. We are ready to help you do so. Thank you for the opportunity to testify, and I look forward to your questions.
Sen. John Hickenlooper (D-CO):
Thank you, Ms. Espinel, Dr. Krishnan.
Dr. Ramayya Krishnan:
Good afternoon, Chair Cantwell, Chairman Hickenlooper, Ranking Member Blackburn, and members of the committee. I'm grateful for this opportunity to testify today. My name is Ramayya Krishnan. I'm the dean of the Heinz College of Information Systems and Public Policy. I've served as president of INFORMS, a global operations research and analytics society, and my perspective is shaped by my own work in this area as well as my leadership of the Block Center for Technology and Society, a university-wide initiative that studies the responsible use of AI and the future of work. I'm a member of the NAIAC, but I'm not here representing the NAIAC. You've already heard about the vast number of applications and the potential for AI. I'd like to highlight that, in addition to all of that, AI has the capacity to enable breakthroughs in science and drug discovery that will unlock solutions to currently intractable problems in human health and beyond.
These are among the many important socially and economically beneficial uses of the technology. As AI technologies are considered for use in high stakes applications such as autonomous vehicles, healthcare, recruiting, and criminal justice, the unwillingness of leading vendors to disclose the attributes and provenance of the data that they've used to train and tune their models, and the processes they've employed for model training and alignment to minimize the risk of toxic or harmful responses, needs to be urgently addressed. This lack of transparency creates threats to privacy, security, and uncompensated use of intellectual property and copyrighted content, in addition to the harms and costs to individuals and communities due to biased and unreliable performance. There's a need for greater accountability and transparency. In my testimony, I would like to propose four decisive recommendations. The first is on promoting responsible AI. Congress should require all federal agencies to use the NIST AI RMF in the design, development, procurement, use, and management of their AI use cases.
The AI RMF was developed with multiple stakeholder inputs, and establishing it as a standard will have numerous benefits at home and abroad. My next recommendation is a two-part recommendation. The first part relates to advancing data transparency in the AI pipeline. The AI pipeline consists of training data, models, and applications. On data transparency, Congress should require standardized documentation about the AI training data that, like audited financial statements, is verifiable by a trusted third party such as an auditor. The metaphor to think about is that these are akin to nutrition labels, so it's clear what went into producing the model. There are details about what these should contain, but at the minimum, they should document the sources and rights that the model developers have to be able to use the data that they did, and the structural analysis that they've done to check for biases and the like, and perhaps even for adversarial attacks against this data.
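To make the nutrition-label metaphor concrete, here is a minimal illustrative sketch of what standardized, machine-readable training-data documentation might contain. The field names and values are assumptions for illustration, not a standard referenced in the testimony.

```python
import json

# Hypothetical "nutrition label" for a training dataset, loosely inspired by
# datasheet/data-card proposals. All fields and values are illustrative only.
training_data_label = {
    "dataset_name": "example-web-text-v1",
    "sources": ["licensed news archive", "public-domain books"],
    "collection_period": "2019-2022",
    "rights_basis": "licensed",  # e.g. licensed / public domain / opt-in
    "personal_data_present": False,
    "known_gaps_and_biases": ["under-represents non-English text"],
    "bias_audits": ["demographic representation check, 2023-05"],
    "adversarial_data_checks": ["data poisoning scan, 2023-06"],
    "third_party_verifier": "independent auditor (placeholder)",
}

# Emit the label alongside the model so downstream users can inspect it.
print(json.dumps(training_data_label, indent=2))
```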
The second part of this recommendation is promoting model validation and evaluation of AI systems. Congress should direct NIST to develop standards for high stakes domains such as healthcare, recruiting, and criminal justice, and require a model validation report for AI systems deployed in high stakes applications. The metaphor here is to think of this as being akin to an Underwriters Laboratories-style report that will objectively assess the risk and performance of an AI system in these high stakes applications. The third recommendation is on content transparency. It's about content labeling and detection. The main idea here is that, as we've just heard, generative AI has increased the capacity to create multimodal content, audio, video, and text, that is indistinguishable from human created output. And currently, there's no standardized way to label the content as being AI generated or human generated. There are consortia like C2PA that are coming together around standards, but we know we need standards here, and Congress should require all AI models, open source or otherwise, that produce content to be able to label their content with watermarking technology and provide a tool to detect that label.
While the usual concern about labeling is with regard to consumers, this is going to be equally important for model developers, to know whether the data that they're using in their models is human generated or AI generated. The last recommendation is about investing in a trust infrastructure for AI, much like we did in the late 1980s when we stood up a trust infrastructure in response to cybersecurity attacks, the Morris worm in 1988, and set up the Computer Emergency Response Team. We need to do something similar for AI. Congress should stand up a capability, which could be done relatively quickly using the capacity of the FFRDCs, NIST, the SEI, MITRE, and other federal government agencies, that will connect vendors, catalog incidents, record vulnerabilities, test and verify models, and disseminate best practices. This will go a long way towards improving our trust capability, especially since the technology is moving so quickly that we will always need this quick response capability. Finally, in closing, the success of these recommendations will in part rest on a comprehensive approach to enhance AI skills across K through 12 and community colleges, as well as policies and strategies like wage insurance to address the impact of AI. Thank you for this opportunity to testify to the committee.
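To illustrate the label-plus-detector idea in the simplest possible terms, the toy sketch below hides a short tag in an image's pixel data and checks for it later. It is only a conceptual illustration under assumed conditions; production watermarks, and the tag text used here, would need to be far more robust and standardized.

```python
# Toy illustration of an invisible label plus a matching detector. This is NOT
# a production watermark (real schemes must survive compression and editing);
# it simply hides a short tag in the least-significant bits of pixels.
from PIL import Image

TAG = "AI-GENERATED"  # hypothetical label

def embed_tag(img: Image.Image, tag: str = TAG) -> Image.Image:
    """Write the tag's bits into the red channel's least-significant bits."""
    out = img.convert("RGB")
    bits = "".join(f"{byte:08b}" for byte in tag.encode("utf-8"))
    width, _ = out.size
    for i, bit in enumerate(bits):
        x, y = i % width, i // width
        r, g, b = out.getpixel((x, y))
        out.putpixel((x, y), ((r & ~1) | int(bit), g, b))
    return out

def detect_tag(img: Image.Image, tag: str = TAG) -> bool:
    """Read back the same number of bits and compare against the tag."""
    bits_needed = len(tag.encode("utf-8")) * 8
    width, _ = img.size
    bits = []
    for i in range(bits_needed):
        x, y = i % width, i // width
        r, _, _ = img.getpixel((x, y))
        bits.append(str(r & 1))
    recovered = bytes(
        int("".join(bits[i:i + 8]), 2) for i in range(0, len(bits), 8)
    )
    return recovered.decode("utf-8", errors="replace") == tag

if __name__ == "__main__":
    synthetic = Image.new("RGB", (64, 64), (128, 128, 128))  # stand-in for generated output
    labeled = embed_tag(synthetic)
    print(detect_tag(labeled))    # True
    print(detect_tag(synthetic))  # False
```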
Sen. John Hickenlooper (D-CO):
Thank you, Dr. Krishnan. Mr. Gregory,
Sam Gregory:
Chairman Hickenlooper, Ranking Member Blackburn, and members of the subcommittee. I'm Sam Gregory, executive director of WITNESS, a human rights organization. Since 2018, WITNESS has led a global effort, Prepare, Don't Panic, to inclusively prepare society for deepfakes and synthetic media technologies, and more recently, generative AI. Now the moment to act has come. Increased volumes of easily made, realistic synthetic photos, video, and audio, more targeted and personalized, are a paradigm shift. Alongside creative and commercial benefits from generative AI, there are already harms in the US and globally, with disproportionate impacts on at-risk groups. Women are targeted with non-consensual sexual images. Simulated audio scams are proliferating, as are AI generated child sexual abuse images. Claims of AI generation are used to dismiss critical human rights and journalistic content that is real. Text to image tools perpetuate discriminatory patterns, and creatives see their work used for training AI models without consent.
As you develop legislation, first, consult broadly with the communities most impacted by AI in the US and globally, given differential impacts. A resilient approach grounded in civil and human rights will best future-proof legislative responses to AI. Elections are poised to be deeply influenced by generative AI, and recent polls find the American public is fearful of its impact. Under-resourced newsrooms and community leaders are under pressure and do not have access to reliable tools that can detect deepfakes and AI manipulated content. It is also unreasonable to expect consumers to spot deceptive yet realistic imagery and voices. Guidance to look for the six fingered hand or spot visual errors in a pope in a puffer jacket does not help in the long run. I believe that Congress should think about AI governance as a pipeline of responsibility that is distributed across technology actors, from the foundation models to those designing and deploying software and apps, and to platforms that disseminate content.
This should be supported by testing, auditing, transparency, and pre and post impact assessment. Within that, solutions that help show the provenance of AI and, if desired, human generated content can use this pipeline of responsibility to bring transparency to consumers. This may take the shape of direct, visible or indirect machine readable disclosure around how media is made, created, and/or distributed. Approaches like this were included in the White House voluntary commitments and in the EU's AI Act. As I note in my written testimony, visible watermarks and direct disclosure have use cases, for example, potentially in marking election related materials, but they are easily removed and not nuanced for the complex production era that we are about to enter. Invisible watermarks at the pixel and dataset level are another option. Cryptographically signed metadata, like the C2PA standard, show the creation, production, and distribution process over time, which is important as we increasingly intermingle
AI and human generated content. Approaches like this also allow creators to indicate how and if their content may be used for training AI models. These disclosure approaches are not a punitive measure to single out AI content, nor do they indicate deception. This provenance data only provides additional information signals; it does not establish the truth of the content. To safeguard constitutional and human rights, approaches to provenance should meet some core criteria. They should first protect privacy. Personally identifiable information should not be required. Knowing the how of AI based production is key, but does not require correlating the identity of who made the content or the tool. Allowing redaction is also key, particularly when combining AI-based media with other human generated media. Secondly, opt-in: while disclosure indicating content was AI generated could be a legal requirement in certain cases, there shouldn't be a requirement for all provenance tools, especially those for non-AI content, which should always be opt-in.
And thirdly, standards should be developed attentive to potential authoritarian misuse in a global context. Another response, complementary to content provenance, is after the fact detection for content believed to be AI generated. From WITNESS's experience, the skills and tools to detect AI generated media remain unavailable to the people who need them the most: journalists, rights defenders, and election officials, domestically and globally. It remains critical to support federal research and investment in this area to close this detection access and equity gap. To conclude, I encourage you to incorporate broad consultation with groups disproportionately experiencing existing harms and AI impacts into upcoming legislative approaches, to go beyond risk-based and voluntary approaches and support a rights-based framework for action, and finally, to support research and legislation on standardized watermarking and provenance that takes into account global implications and centers privacy and accessibility. Thank you for the opportunity to testify today.
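As a rough illustration of the signed-metadata concept described above, the sketch below binds a simple provenance manifest to a file's hash and signs it so later tampering is detectable. It is a simplification under assumed conditions: the real C2PA standard uses certificate-based signatures and a much richer manifest format, whereas this toy uses a shared HMAC key purely for illustration.

```python
# Minimal sketch of the *concept* behind signed provenance metadata: a manifest
# describing how a piece of media was made, bound to the file's hash and signed
# so that edits to either are detectable. Not the actual C2PA format.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # placeholder; real systems use asymmetric keys

def make_manifest(media_bytes: bytes, actions: list[dict]) -> dict:
    manifest = {
        "asset_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "actions": actions,  # e.g. created with a generative model, edited, published
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    claimed_sig = manifest.get("signature", "")
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(claimed_sig, expected)
        and unsigned["asset_sha256"] == hashlib.sha256(media_bytes).hexdigest()
    )

media = b"...image bytes..."
m = make_manifest(media, [{"action": "created", "tool": "generative model (unspecified)"}])
print(verify_manifest(media, m))              # True
print(verify_manifest(b"tampered bytes", m))  # False
```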
Sen. John Hickenlooper (D-CO):
Thank you, Mr. Gregory. Mr. Strayer.
Rob Strayer:
Sorry. Thank you, Chairman Hickenlooper, Ranking Member Blackburn, and members of the committee. Thank you for the opportunity to testify today. My name is Rob Strayer, and I lead the global policy team at the Information Technology Industry Council, or ITI. ITI represents companies from all corners of the technology sector and from across the AI ecosystem, including those involved in both developing AI models and deploying cutting edge AI applications. ITI was pleased to provide a very detailed response to this subcommittee's inquiry earlier this year about how our companies are operationalizing the NIST AI risk management framework as a means of building public trust. We are encouraged by the bipartisan efforts in Congress to address the challenges and opportunities from AI. In my remaining time, I will address the competitive global context for AI and then turn to transparency and accountability. The development and adoption of AI technology will be transformative across all sectors of the economy.
Estimates of the total global economic benefits of AI in the years ahead range from $14 trillion to $25 trillion. It's absolutely massive. As just one example of the transformational nature of AI, the cybersecurity industry is able to deploy machine learning to detect and stop the most sophisticated attacks using zero day exploits. AI can defeat these pernicious attacks using insights about activity rather than having to rely only on known malware signatures. Adversaries will certainly be using AI to improve their attacks, and we need to be able to leverage AI to protect our critical infrastructure and IT systems. AI also will play an essential role in future national security applications for the military and for the intelligence community. The good news is that today the United States is leading AI development, deployment, and innovation globally. Nonetheless, foreign competitors are working hard on AI breakthroughs and to deploy AI in new use cases in their markets.
And with access to open source models and decreasing model training compute costs, AI developers and deployers will be able to move anywhere in the world with interconnections to avoid stifling regulations. Therefore, US policymaking involving AI needs to be understood in a global context and consider how new policies affecting AI will help the United States maintain its technological leadership rather than cede it to competitors, including authoritarian states. So how does the United States create a pro-innovation AI policy framework that manages risk? US AI policy should have two primary components: one, promoting innovation and investment, and two, building public trust and accountability. My written testimony covers the importance of investment, and so I'll focus on public trust and accountability here. Transparency is a key means by which to establish public trust. Consumer trust will increase adoption of AI and expand the AI ecosystem in the United States. ITI member companies are working to ensure that users understand when they are interacting with an AI system and generally how the system works.
ITI member companies also are producing AI model cards so that consumers have access to information about the features and limitations of AI models in clear, plain language. So what is the government's role? To avoid regulations being overly broad, risk should be identified in the context of a specific AI use case. With risk identified, it is then imperative that the government review the existing regulatory landscape. Legal regimes such as fraud and criminal law, as well as statutes like the Civil Rights Act, can address AI related risks. It is critical to understand how these legal regimes function and where they may not be fit for purpose to address AI risk before creating new legislation or regulatory frameworks. Finally, before adopting new AI regulatory requirements, policymakers should understand the status of international consensus-based standards and the ability of those standards to meet regulatory requirements. Without specific standards for risk management processes, such as the measurement and evaluation of the risk in models, it will not be possible to implement regulations effectively or harmonize rules globally. To wrap up, Congress and private sector stakeholders can work together to ensure that the United States builds on its competitive lead in AI. As AI transforms all sectors of the economy and generates trillions of dollars in economic growth, this will benefit US companies and citizens for decades into the future. Thank you, and I look forward to your questions.
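As a rough sketch of what a plain-language model card or fact sheet might surface, the example below defines a handful of fields and renders them into a consumer-readable summary. The structure, field names, and sample values are assumptions for illustration, not any company's actual documentation.

```python
# Hypothetical model card / "fact sheet" structure; all fields are illustrative.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_name: str
    intended_uses: list[str]
    out_of_scope_uses: list[str]
    training_data_summary: str
    known_limitations: list[str] = field(default_factory=list)

    def plain_language_summary(self) -> str:
        """Render the card as a short, consumer-facing description."""
        return (
            f"{self.model_name} is intended for: {', '.join(self.intended_uses)}. "
            f"It should not be used for: {', '.join(self.out_of_scope_uses)}. "
            f"Trained on: {self.training_data_summary}. "
            f"Known limitations: {', '.join(self.known_limitations) or 'none documented'}."
        )

card = ModelCard(
    model_name="example-summarizer-v2",
    intended_uses=["summarizing news articles"],
    out_of_scope_uses=["medical or legal advice"],
    training_data_summary="licensed news text through 2022",
    known_limitations=["may omit key details", "English only"],
)
print(card.plain_language_summary())
```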
Sen. John Hickenlooper (D-CO):
Great. Thank each of you for being here, for your opening statements, and for all the work you've done on these issues already. This first question I have might be considered obvious. AI systems learn about the world by processing written, visual, and audio files, works created by people, by humans. Dr. Krishnan, just to kind of lay out, because a lot of people are coming to this issue fairly freshly, what rights already exist for consumers to decide how AI systems access their creations, and what additional rights do they have?
Dr. Ramayya Krishnan:
Thank you for the question. So perhaps the place to start is to think about the AI pipeline first, and then I'll focus in particular on the creators of content and the need to appropriately balance what a typical creator's interests are and how those might be protected. When we think about the AI pipeline, it includes training data, models, and applications. When you have data that models use that involves creative artifacts, be it music, images, or video that is copyrighted, a group of creators may actually want to seek advertising revenue off this content that they've created, and therefore may post such information on the web with the full expectation that consumers interested in this content may sample it and that they may be able to earn income from it. The concern here is that if this content is then scraped and used in AI models, which then produce output in the style of the same kind of content that the creators have created, that could potentially take away revenue from the individual creators who have created this content.
So this issue is, on the one hand, about protecting creators who'd like to have income generated from the creative acts that they're engaged in; on the other is the capacity of these models to use these types of data for the purposes of creating the capabilities that we have witnessed. So one potential path forward, where on the one hand you want to have copyright and the benefits that accrue from licensing, is perhaps to use technology. There's work from the University of Chicago that allows individuals to upload their creative content, and the technology makes modifications to that content which are not visible to the human eye. So the human sees it as an image just like the artist intended, but it's not trainable by an AI model, so that the AI model can't produce it in the likeness of the artist. And if the model developer wants to obtain access to that content, they can license it. And that might potentially be a way of, on the one hand, providing notice and disclosure, which currently doesn't exist, to those people who have created this content whose content got scraped, while at the same time meeting the needs both of the model developer and of the artist.
Sen. John Hickenlooper (D-CO):
Right. Got it. But Mr. Gregory, what would you add to that as the next step? That's a broad brush to start with.
Sam Gregory:
I think there are also ways in which content developers, people creating content, excuse me. Thank you. People creating content can also choose to add information to their data. So there are ways we can do this at the level of very minor modifications. There are also ways in which you could be tracking those desired usages, for example, using the C2PA standards. So I think the more options we can give people that are standardized for understanding the choices they make about the information they consume, but also the information they place online, would be appropriate.
Sen. John Hickenlooper (D-CO):
Great. Thank you. Ms. Espinel, what are some evidence-based steps that your member companies have been using to develop AI with the safety of consumers in mind, within that notion of the AI risk management framework?
Victoria Espinel:
Thank you. So BSA members, many of them, have implemented very extensive risk management programs that include impact assessments as part of that. And I'll talk a little bit about what they do and how they can use evidence to make the kinds of determinations that you're talking about, to both increase transparency but also reduce the risk of bias and discrimination. So as an example, if a BSA member is acting as a developer of AI, they can assess the training data that they are using to ensure that it is representative of the community, and they can use the evidence gathered from that assessment to ensure that the risk of bias and discrimination is as low as possible. That is certainly in line with the AI risk management framework that was developed by NIST. But I would say, as important as the NIST risk management framework is, and as much commendation as I give to the Department of Commerce for coming up with it, we don't think it is sufficient. We think it would be best if legislation required companies in high risk situations to be doing impact assessments and have internal risk management programs.
So yes, there's much that our companies have been doing. I think there's probably much that many companies have been doing, but we think that in order to bring clarity and predictability to the system and to ensure that use of artificial intelligence is as responsible as possible, for Congress to require impact assessments and require risk management programs is essential.
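One small, illustrative example of the kind of check such an impact assessment might include: comparing the demographic mix of a training sample against a reference population and flagging large gaps. The group names, thresholds, and numbers below are assumptions for illustration, not any member company's actual process.

```python
# Toy representativeness check for training data; all values are illustrative.
from collections import Counter

reference_shares = {"group_a": 0.30, "group_b": 0.50, "group_c": 0.20}  # assumed baseline
training_labels = ["group_a"] * 120 + ["group_b"] * 700 + ["group_c"] * 180  # assumed sample

counts = Counter(training_labels)
total = sum(counts.values())

for group, expected in reference_shares.items():
    observed = counts.get(group, 0) / total
    gap = observed - expected
    flag = "FLAG" if abs(gap) > 0.10 else "ok"  # arbitrary threshold for the sketch
    print(f"{group}: observed {observed:.2f} vs expected {expected:.2f} ({flag})")
```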
Sen. John Hickenlooper (D-CO):
Great. Thank you very much. I'm going to come back for more questions later. Ranking Member Blackburn.
Sen. Marsha Blackburn (R-TN):
Thank you so much. Ms. Espinel, I want to come to you first because I know that the EU is looking at implementing their AI Act later this year. I was recently over there working on the privacy issue and holding some meetings, and I think they're a little bit frustrated that we haven't moved forward on some of these guidelines and governance that would help our innovators here in the country. So talk for just a minute about how your members would navigate a patchwork system around the globe when it comes to AI and the guardrails that are going to be put around it.
Victoria Espinel:
Thank you very much. So let me start off by thanking you. When you were in Brussels, you had an opportunity to visit us at BSA and visit many of our member companies. You led a fantastic round table there. So thank you very much for that. But also thank you for that question, because I think that is a very important issue. So as you said, the EU is moving forward with legislation. They are not the only country that is moving forward with legislation; governments around the world are moving forward with legislation as well. And I think one of the challenges, but an avoidable challenge, is if we end up in a situation where there is, as you said, an inconsistent patchwork of regulations around the world. I think because there is so much policymaker focus around the world on artificial intelligence, as you said, in part because of the launch of ChatGPT, there's a real opportunity right now to move towards a harmonized approach globally. And that may not include every country in the world, but I think it could include a lot. And I think the United States has a very important role to play there in terms of moving a large number of countries to a harmonized approach.
Sen. Marsha Blackburn (R-TN):
Should we be the ones to set standards and lead this?
Victoria Espinel:
In my opinion, yes. Okay. The United States is a leader in innovation. We should be a leader here as well.
Sen. Marsha Blackburn (R-TN):
Absolutely. Mr. Strayer, when you're working with your members on a national data privacy law, how much importance do they put on that before we move forward with some of these other components of dealing with AI?
Rob Strayer:
We believe a comprehensive national privacy law is absolutely critical, and that addresses a lot of the issues that come about with data training sets and other data that emerges from AI systems that are used by businesses every day. So we very much support acting on that quickly. We don't think that needs to be done first before moving on AI regulation, but we think both have to be done. And the thing I'd say about standards is that US-based companies and western companies generally are leading in developing standards through the international standards organization. They're working now on an AI management system standard. Those will hopefully be the bedrock, when the EU finishes their legislation, for the standards that should apply globally. But that's not yet been fully resolved with the European Union; those standards should be the harmonized global standards for the future.
Sen. Marsha Blackburn (R-TN):
Yeah. Mr. Gregory, do you have anything that you want to add on that?
Sam Gregory:
I would note that one of the areas that the EU is focused on is labeling and disclosure of AI-based content. I think there's a real opportunity for the US to lead on this to come up with a standardized way that respects privacy, that presents information that's useful to consumers, and to set a standard there that is applicable as well.
Sen. Marsha Blackburn (R-TN):
Yeah, Dr. Krishnan,
Dr. Ramayya Krishnan:
I think the NIST AI RMF offers an opportunity here through what's called NIST AI RMF profiles. Through the use of these profiles, with the appropriate standard setting for high risk applications, both on the data input side, the data transparency side, as well as with model validation, we can actually come up with standards that then get adopted, because there's considerable support for the AI RMF, both here at home and abroad.
Sen. Marsha Blackburn (R-TN):
Okay. Ms. Espinel, let me come back to you. I've got just a little bit of time left. I'd like for you to just briefly talk about how your companies have worked to have policies that are transparent, interpretable, and explainable when you're dealing with AI. And Mr. Strayer, I'll come to you for the follow-up on that.
Victoria Espinel:
So it's very important that we have a system that builds trust in AI, and transparency is clearly part of what is important in order to build trust. Let me give you just a few examples of ways that there could be transparency. One is to let consumers know if they are interacting with an AI service such as a chatbot. Another example would be to let consumers know if the image that they are seeing has been generated by artificial intelligence. There is important work being done by other witnesses at this table in terms of content authenticity and letting consumers know if images have been altered or manipulated in some way. Those are just three examples. But I want to end by saying that in addition to transparency practices, I do think it is very important that we have regulation, and that we have regulation that requires, in high risk uses, companies that are developing or using AI to be doing impact assessments to try to mitigate those risks.
Sen. Marsha Blackburn (R-TN):
Mr. Strayer.
Rob Strayer:
I would just add that companies are also setting up fact sheets, or what they call model cards, that explain the features and limitations, and talk about where the data sets came from and the intended uses. So these are pretty fulsome explanations. It's important in the area of transparency to think about who the intended audience is and for what purpose. So is it a consumer? Is it businesses along the chain? Is it for a deployer? One should think about all of those when thinking about what requirements should be set in this area.
Sen. Marsha Blackburn (R-TN):
Thank you. Thank you.
Sen. John Hickenlooper (D-CO):
Great, thank you. Senator Moran.
Sen. Jerry Moran (R-KS):
Chairman Hickenlooper, thank you. Thank you all for your presence and testimony today. It's annoying that we're here now on AI when we've been unsuccessful in reaching conclusions on data privacy legislation; it just seems like one issue piles up after another, both of huge significance. Ms. Espinel, let me start with you. I want to talk about the NIST AI risk management framework, launched after NDAA authorization, in 2023. I'm working on legislation, in fact offered an amendment to the NDAA this year, that would require federal agencies to apply the AI framework when using AI systems, to attempt to ensure that government acts responsibly in implementing AI systems and in a manner that limits potential risks not only to Americans and their data, but to governmental agencies and their missions. Can you talk, Ms. Espinel, about the implementation of policies based on the NIST AI risk management framework that can establish a baseline of good behavior when implementing artificial intelligence systems, which can actually unleash beneficial AI technologies instead of just hindering the development of AI?
Victoria Espinel:
I would be delighted to. It's very important. The NIST AI framework is flexible. It provides a roadmap for companies in terms of how they can put practices and policies in place to responsibly develop and use AI. We support it being used by the US government. We support it being used in the context of procurement. And I will close by saying I think it is a place, as you kind of alluded to at the end, and as Ranking Member Blackburn raised, where the US can show leadership. I think a model or an approach similar to the NIST AI risk framework is one that could be usefully adopted by other countries as well. And so we are very supportive of that. Thank you.
Sen. Jerry Moran (R-KS):
Thank you. Mr. Strayer, I'm going to go to you based upon your past history as Deputy Assistant Secretary for Cyber and International Communications and Information Policy at the State Department. That's a long title.
Rob Strayer:
They give you long titles at the State Department.
Sen. Jerry Moran (R-KS):
Yes, sir. I hope the pay was commensurate. Let me suggest to you the story of Huawei launching a phone last week containing a suspected homegrown semiconductor that represents a leap forward in their ability to produce advanced microprocessors, despite the efforts by the US Department of Commerce to deprive that company of US and partner technology to develop and manufacture advanced semiconductors. A lot of details are yet to be known about that. As part of that related effort to deny China the ability to develop these technologies, in August President Biden issued an executive order limiting outbound investment in Chinese companies that develop advanced technologies, including AI. What are the national security implications for the US if adversarial nations take a leading position in the advancement of artificial intelligence? Do you believe the US can appropriately mitigate this risk through the current strategy of denying access to key technologies and investments? And what can we learn from our past and present efforts at restriction on the development of semiconductors?
Rob Strayer:
Thanks, Senator. That's quite a compound question. I'll try to do my best job.
Sen. Jerry Moran (R-KS):
It goes with your title.
Rob Strayer:
Touche. So first, to maintain our leadership, we need to focus on running faster. That is how we innovate and make it to the next cycle of R&D in a position that puts us, the United States that is, ahead of our adversaries. So we need to continue to develop the best technology for all purposes, which will obviously benefit our military and national security uses. On the defensive side, we've seen the October 7th export controls of last year, the executive order regulation, and now this most recent outbound investment restriction. We're still seeing those play out. There are open comment periods on these; they're trying to tighten them up. So we'll see how those play out. There's a very important issue though with these, and that is we really need to have a strong discussion with the private sector about how you can do the minimal amount of damage to future R&D and revenues for US-based companies while ensuring that none of these tools end up in the hands of adversaries who are going to use them for military purposes. So really sharpening the focus on where it might be used for dual use or military purposes, while benefiting US and western companies to our asymmetric advantage, because they need to keep maintaining those revenues over the long term. So I think we need a stronger focus on working with the private sector to get that balance right, that is, stopping the military uses of other countries and enhancing our own use and market competitive development of the technology.
Sen. Jerry Moran (R-KS):
Thank you. I have a long list of questions as well, but my time, at least at this moment, has expired.
Sen. John Hickenlooper (D-CO):
Well, we'll have a second round. Did you get an answer on the restriction? Is the restriction effective? Because I'm not quite sure we all got that; I'd want to hear a little more about that.
Sen. Jerry Moran (R-KS):
As long as that's on your time. That's a great question.
Sen. John Hickenlooper (D-CO):
My time is your time.
Sen. Jerry Moran (R-KS):
Good.
Rob Strayer:
So these restrictions are quite robust in the case of the export controls from the October 7th regulation that the Commerce Department issued on more advanced semiconductors. With regard to the outbound investment restrictions, they're starting more narrow, and they're doing a rulemaking through the Treasury Department on these, and I think that's a smart way to start and then expand if they need to. Beyond that, the really key issue with these restrictions is that we don't want to force all technology, or key innovative technologies, to move outside the United States. So you need to bring allies and partners along with the United States, and on the export controls for semiconductors, Japan and others have come along, Taiwan has come along with the United States, and the Netherlands on the equipment as well. That has not yet occurred on the outbound investment restrictions. So I think one needs to think about how that's done in a multilateral way so that we're not making the United States a place where you don't do the investment in these areas, and we're ceding that leadership to other, even Western, countries and isolating ourselves.
Sen. John Hickenlooper (D-CO):
Got it. Thank you. Senator Klobuchar, we have you remotely for a few questions. We can't hear you.
Sen. Amy Klobuchar (D-MN):
Judiciary. So I truly appreciate you taking me remotely, and we're dealing with many of the same issues. We are focused on many things with AI, and we should be, from the security risks to the effects on innovation to, of course, the potential that we have to use the technology for good. And one of the things that we are really concerned about, which is off this topic a little, but is just something to always keep in mind as the experts that you are, is the effect this is going to have on our democracy and our elections. And so two things. One is that Senator Hawley and Senator Collins and Senator Coons and I just introduced a bill in the last hour about deceptive AI generated content. And that is the most extreme, right? That is the stuff where you've got people, or AI generated images, acting like they're the candidate when they're not, which is going to create complete confusion regardless of whether it's labeled or not.
And I thought it was really important to do this on a bipartisan basis. You can't get much more bipartisan than Hawley and Collins and Klobuchar and Coons, and so I hope that my colleagues will look at this. It's about fraudulent AI generated content in political ads. The second thing that we're looking at, for another class of AI generated political ads, would be disclaimers and watermarks and the like for the ones that don't meet the standard for the most extreme deception, with of course exceptions for satire, because we all like a lot of satire around here. Okay, so I'm going to just ask you about the watermark piece, because we just introduced the other bill, the disclosure piece. Mr. Gregory, Dr. Krishnan, do you agree that without giving people information to determine whether an image or video is created by AI, generative AI poses a risk to our ability to hold free and fair elections?
Sam Gregory:
The evidence already suggests that this is a problem both in the US and globally, given the capacities of these tools. So yes. I also believe that election content is a first place where it is possible to start with both visible disclosure and particularly indirect disclosure, i.e., labeling a piece of content and also providing metadata that could explain it. The key part would be to protect the capacity for satire. As you note, that is essential to protect. Yes.
Sen. Amy Klobuchar (D-MN):
Okay. Well, I really do appreciate that answer and also the timeliness of it, given that we're in election season. We've already seen the use of this against some of my colleagues on both sides of the aisle, and so people are very aware. I think it also, by the way, extends, and I'll get your answer, Dr. Krishnan, in writing if that's okay, because I want to move on to another question. I want to move into the AI risk assessment and mitigation issue. We know that these systems have the potential to impact individuals in many key areas, especially if it evaluates rental insurance applications. I'm working with Senator Thune on a bill to require companies that develop and use AI to identify risks and implement procedures to mitigate risks. This involves Department of Commerce oversight. Ms. Espinel, do you agree that both developers and deployers of AI systems bear responsibility to mitigate risk before they release the AI on the market?
Victoria Espinel:
I believe that both developers and deployers, users of AI, should have obligations. I believe they both have responsibility. I would emphasize that I think the obligations that are put on them should differ so that they reflect what they do. So a developer of AI is going to have information about the data that was used and how the AI was trained. To use an example of a deployer, a bank is going to have information about how loans were actually made and whether or not loans were made in a way that disproportionately negatively impacted a part of the community. So a hundred percent, I agree with you, and thank you for thinking about the distinction between developers and deployers, the fact that both should have obligations, and that those obligations and requirements should reflect what they do, the information that they're going to have about the AI system in question, and the different steps that each can and should take to identify and mitigate risk.
Sen. Amy Klobuchar (D-MN):
Thank you. We've also seen intellectual property issues with AI songs and the like, copyrights. Only around half the states have laws that give individuals control over the use of their name, image, and voice. Do you agree, Ms. Espinel, that we need stronger protections for the image, likeness, and voices of creators?
Victoria Espinel:
I know there have been instances where there has been AI generated information or content that pretended to be someone that it wasn't. That is clearly wrong and should be stopped. I think thinking about the right of publicity, which, as you point out, is not something that exists consistently throughout the United States, and thinking about solutions to that problem, which is clearly wrong and needs to be addressed, is very important.
Sen. Amy Klobuchar (D-MN):
And just one quick other question, for you, Mr. Gregory. In addition to some of the copyright issues we're talking about, we also have journalism issues. Senator Kennedy and I have a bill that would require the companies to negotiate with news organizations on the issue of their content. And in addition to newspapers, AI systems are trained on other content, like lifestyle magazines, most of which were not compensated for that content. Do you agree, Mr. Gregory, that there is more we need to do to ensure that content creators are fairly compensated for their contributions to AI models?
Sam Gregory:
Yes. I think there need to be stronger ways to understand which content is being ingested into AI models and the decisions that are made around that. And I would particularly highlight that journalists already face significant pressures, including of course that they are unable to detect AI generated media. So they face pressures both in that their content is ingested and in that they are on the front lines of defending the truth in the context we face now.
Sen. Amy Klobuchar (D-MN):
Thank you very much. I appreciate it, all of you, and thank you to the Chairman and Ranking Member for having this hearing.
Sen. John Hickenlooper (D-CO):
Thank you, Senator. Appreciate that. We don't have the next senator on video to question, so I will wade in. Oh, there we have Senator Young, just in the nick of time.
Sen. Todd Young (R-IN):
Thank you, Chairman, for acknowledging my presence in this room and for chairing this hearing. I thank our witnesses for being here. I'll just dive in. I know you've been responding to a number of inquiries. I'll begin with the observation that artificial intelligence wasn't something that we invented yesterday. It's been around for decades now. In fact, for years we've seen AI technology in products and services across nearly every sector of our economy and in a wide variety of use cases. Analysis of each of these use cases and concerns of an AI enabled society should, in my view, start with the same litmus test: does existing law address whatever potential vulnerability we're concerned about? I found through my interactions with a number of experts in this area that existing law would address the vast majority of concerns that we have.
Not every one, though. We have to closely evaluate and carefully target areas where existing law doesn't address these vulnerabilities. That's why, of course, we're here today: to identify high risk use cases of AI and discuss potential guardrails to minimize those risks. Recent advancements in generative AI platforms like ChatGPT have raised concerns among my constituents and many others about a dystopian future straight out of science fiction that could be right around the corner. Truth is, nobody knows what future this incredible technology will usher in. And it's human nature to fear uncertainty. History shows that innovations that democratize access to information and media, the printing press, recorded sound, film, have been met with similar concerns, usually exaggerated concerns. But history also shows these innovations have brought great benefits to society, to national security and the economy. As we evaluate the need for any level of AI regulation, it's important we don't lose sight of the many potential benefits that harnessing the power of AI presents.
These benefits include self-driving cars, medical advances, immersive technology, educational opportunities and more. So I sort of want to get that high level perspective on the record. With that said, since you're here today, let us focus not on the unknowns, but rather on the known, here-and-now risks as we think about trust, transparency, and explainability within AI. The goal is not to stifle growth, but rather to increase adoption and innovation in the long term. Ms. Espinel, can you briefly discuss two things? First, the important distinction between a developer and a deployer, and then second, how should Congress think about the role of transparency between businesses and consumers as opposed to transparency between businesses and government? And I ask that you answer these pretty tightly if you could, so I can ask some follow-up questions. Thanks.
Victoria Espinel:
Thank you. So, developers and deployers, developers and users. Developers of AI systems are the ones that are producing the AI system; they're creating the AI system. And the deployers are the users. To give an example, a developer is a software company that is developing a speech recognition system, and a deployer is, for example, a bank using an AI system to help make determinations about who should get loans. Those are very distinct roles, and the developer and the deployer will know very different things about the AI system, both how it's being developed and how it's being used. And because they know different things, they'll be able to do very different things in terms of addressing and mitigating those risks. So as you are thinking about legislation, clearly distinguishing between developers and deployers is critical in order for the legislation to be effective and workable.
In terms of transparency, you also alluded to the fact, or you mentioned the fact, that AI has been used for a long time, right? It has. It's also used in many different types of circumstances, and some of those are high risk and some of them are lower risk. And it is our belief that in high risk situations, for example a government making a determination about access to public benefits, if there are consequential decisions that impact a person's rights, we believe there should be legislation requiring that an impact assessment be done and that those risks be mitigated. But there are also uses, as you said, that have been around for a long time that are relatively low risk. So, reminding me when I send an email that I may have left off an attachment, or, one that's been quite popular lately, adjusting the background if you are on a video conferencing call, those are relatively low risk, and having impact assessments required in those cases, we believe, would be overly burdensome and not add a lot of value. But where there are those consequential decisions, whether company to consumer or government to its citizens, we believe impact assessments should be required.
Sen. Todd Young (R-IN):
Well, thank you. Does anyone else want to chime in on any of those points? Otherwise I'll turn to Dr. Krishnan. Okay. Doctor, what are the clear high risk use cases, to your mind, for AI that members of Congress should be thinking about right now?
Dr. Ramayya Krishnan:
The ones that come to mind immediately are autonomous vehicles, healthcare, hiring and recruiting, and housing. These are areas where important, scarce resources are being allocated via AI. Those would all represent cases where there is harm either to the individual or to society if things didn't go well.
Sen. Todd Young (R-IN):
Right. And so you are bringing up use cases I'm not surprised by, and then you'd probably acknowledge there's some in the national security realm.
Dr. Ramayya Krishnan:
Oh, without a doubt. Yeah.
Sen. Todd Young (R-IN):
Okay. Yes. Okay. I guess the last thing I'd like to ask is, stepping back, AI has of course garnered all sorts of attention over the last number of months. Is AI, I'll ask Dr. Krishnan, all that different from other major technological advances, or is it just the latest shiny object that we're attentive to? Why is this so important? Or perhaps you'll just say this is like every other technology.
Dr. Ramayya Krishnan:
At one level, it's like other technologies we have dealt with in terms of having an impact on the way work is done and the tasks that are affected. In other ways, there are special characteristics of the technology, in terms of working with multiple modalities, audio, video, text, things that we haven't typically seen as part of a technological capability. Like you mentioned ChatGPT in your opening remarks, that kind of capability was not something that at least the typical citizen was thinking a computer could do. And so the difference, I guess, is also in terms of how the technology is able to learn over time with data. So there are some differences that are technical differences with regard to this technology. And then there are differences with regard to how to govern the use of this technology. And that's why in my testimony I talk about data transparency and model transparency, having standards for high risk applications, and then also having this trust infrastructure, because you can't predict exactly how this technology is going to evolve, to ensure that we are able to capture vulnerabilities, deal with failures, and come up with solutions that can be implemented
Sen. Todd Young (R-IN):
On the fly. Right. The trust infrastructure, I guess,
Dr. Ramayya Krishnan:
If I could just, that's like a CERT for AI, is how I think about it. Like what we have done for cybersecurity, sir.
Sen. Todd Young (R-IN):
Maybe I could pull on that thread just ever so briefly, and you can respond to what for me has been an observation; I'll leave it to you to determine whether or not it's been an insight. But my conclusion is we're going to need a lot more expertise on this technology, a lot more sophistication within government, in individual agencies, perhaps at the White House, so that on an ongoing basis we can figure out how to apply existing statutes to emerging threats or concerns or challenges or opportunities, or to flag when new legislative authorities may be needed. Is that your estimation? Is there a human resources challenge within government?
Dr. Ramayya Krishnan:
Yes. And in industry as well. So I think a scholarship for service type program for AI would be very, very valuable.
Sen. Todd Young (R-IN):
Thank you.
Sen. John Hickenlooper (D-CO):
That last point was worth the price of admission. I thank you, Senator, and I couldn't agree more. I think that if you try to estimate the cost of government keeping up with the rate of change and the innovation that's going to be required, it is a staggering thought. And I have a son in college, and all the kids that are in STEM are looking at high paying jobs right out of school, just to start, without the experience, to be able to help government keep pace. And the competition is going to fuel greater intensity and greater inflation in those wages, which again is a good thing for the kids, but hard for government keeping pace with the industry. Thank you. I want to go into the second round. I could do a third and a fourth round, but I'll probably try and let you out of here by four o'clock or a little bit after. Mr.
Gregory, you recently co-chaired the threats and harms task force within the Coalition for Content Provenance and Authenticity. My staff tells me it's referred to as C2PA. C2PA refers to provenance as the basic trustworthy facts about the origins of a piece of digital content. This could help users distinguish between human and AI generated content. It could reveal personal information about the creator of that content as well. So the question I have is, how do we protect the privacy of content creators while being transparent with consumers about the content they see when they're online?
Sam Gregory:
As the senator notes, I co-chair the threats and harms task force. I don't speak for the C2PA here, but I'll make some observations about how we protect privacy in this context. I think this is absolutely critical. A starting point is to recognize we're moving into a much more complex media ecosystem. So the idea of understanding how media is made, how it has evolved, where the AI has come in, where the human element has come in, I think is going to become increasingly important. When we think about that future, though, we need to make sure that these systems do not become, either accidentally or deliberately, perhaps by authoritarian governments who might adopt them, systems with which they can track people's activity. That's why, when we start looking at these types of approaches, we have to start from the principle that they do not oblige personal information or identity to be part of it.
That is particularly easy with AI generated content, because really what matters with AI generation is how, not who, right? How the AI was used. The who could be helpful, but it's not necessary, and it could be helpful to the wrong people. So when we start from that premise, I think that's very important as Congress looks at how to standardize this and how to approach this, that they understand how we are going to reflect the evolving nature of media production in a way that protects privacy, doesn't oblige personally identifiable information, and will be usable worldwide. On Senator Blackburn's question earlier about how the US can compete: we should be developing standards that can be applicable globally, that are accessible, privacy protecting, and usable in a variety of contexts globally.
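A minimal sketch of the "how, not who" principle Mr. Gregory describes might look like the following: a provenance record that binds facts about the generation process to a hash of the content while carrying no personally identifiable information. The field names and helper function here are hypothetical illustrations for this transcript, not the C2PA specification.

```python
import hashlib
import json

def provenance_manifest(content: bytes, tool: str, method: str) -> dict:
    """Illustrative only: record *how* a piece of content was made (tool and
    method), bound to the content by a hash, with no personal information."""
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),  # binds the record to the file
        "generator": {"tool": tool, "method": method},           # how, not who
        "edits": [],                                             # later edits could be appended here
    }

if __name__ == "__main__":
    media = b"example synthetic video bytes"
    print(json.dumps(provenance_manifest(media, "example-model", "text-to-video"), indent=2))
```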
Sen. John Hickenlooper (D-CO):
Absolutely. Ms. Espinel, increasing the transparency of AI systems is one of the key vehicles by which we gain confidence and trust among users. Without appropriate guardrails around the risks from AI, I think developers will struggle to compete in the US and certainly internationally as well. So it is in our best interest to demonstrate leadership on safe and responsible deployment. What transparency steps do you prioritize, which do you think are most crucial, in terms of gaining the trust of consumers?
Victoria Espinel:
So I will start off by saying that I think building trust, as you say, is important for our ability to compete. I think it's important for the US economy, and it's obviously important in terms of protecting consumers. So I think that's an absolutely critical step. Impact assessments are a tool that we think organizations should be using, whether they're creating the AI or they're using the AI. If they're in a high risk situation, if the AI is being used to make a consequential decision, then impact assessments are an accountability tool that should be required. And by requiring impact assessments, you will increase transparency. Consumers need to have confidence that if AI is being used in a way that could have an impact on their rights or have a significant consequence for their life, that AI is being vetted and is being continuously monitored to be as safe, as secure, and as non-discriminatory as possible. And so I would go back to saying that having a requirement for impact assessments by developers or deployers in high risk situations, having a strong national law from the United States, I think is very important in terms of protecting consumers and our economy. And then, going to your last point, I think it's also very important for the United States to have an affirmative model of effective legislation when other countries are moving quickly to regulate. And I think having the United States be a leader in shaping that conversation and the global approach to responsible AI is critically important.
Sen. John Hickenlooper (D-CO):
An affirmative model. What a concept. Dr. Krishnan, you've done a lot of research on consumers and social behavior within digital environments. So on that same subject, what information should be disclosed to consumers to establish trust in online services around AI?
Dr. Ramayya Krishnan:
Well, first and foremost, I think when you're interacting with an AI system, you need to know that you're interacting with an AI system. So disclosure, that's the first step. The second is, if data is going to be collected by, let's say, a ChatGPT or a Bard during your interaction, you should be explicitly given the option of opting in, for the purposes of saying, is my data then going to be used by the AI further for training purposes. I think we can learn from much of our prior work in privacy and apply it in this kind of context. So the opt-in. And then the third, I think, is with regard to the trust that individuals build through interaction. To a large extent, individuals build trust based on their own experience. Much as we talk about data transparency and model transparency, my interaction with this tool, does it actually behave the way I expect it to?
That actually builds trust over time. And I think it's a combination of these that will result in what we'd like to see as an outcome. One quick additional point I want to make is, while we've been talking about the NIST RMF and the like, I think it would be great to have demonstration projects for citizens to recognize the value that AI can actually bring to the table. ChatGPT was perhaps the first time that they got to interact with AI at that kind of scale. It would be great to see something like Khan Academy's education products, things of that nature, that give them a clear indication of the value that this brings. I think that would be very good too.
Sen. John Hickenlooper (D-CO):
Couldn't agree more. One last question before the chair comes back. I don't know if you guys play bridge, but the chair trumps every suit, to be clear. Mr. Gregory, let me go, let me switch to my question for Mr. Strayer. The AI ecosystem can be generally viewed as those that develop AI and those that use AI. As we've heard, there's nuance, a lot of nuance, around that. Risk management principles should be tailored both to developers and to users, the deployers, and certainly there's not going to be any one size fits all, no silver bullet. How can we create an oversight and enforcement framework to make sure that we can hold bad actors accountable, the people that use AI systems maliciously?
Rob Strayer:
Well, on the truly malicious actors out there, there's going to need to be law enforcement cooperation and also enforcement of some of our existing laws. When it comes to standard risk management, a number of the appropriate risk management tools are going to make the model more resilient, more robust, less susceptible to compromise and manipulation. So those are important steps. The other thing to keep in mind is that, with these risk management steps, there should be higher risk management for the highest risk use cases and lesser requirements on something that's just doing predictive text in an email. And then, finally, also to think a little bit about how small businesses, and those that might be just doing experimentation and aren't planning for commercial deployment, might be held to a lower standard than those that are going to make massive commercial deployments of things.
Sen. John Hickenlooper (D-CO):
That's such a good point. That small business aspect gets overlooked so often. I'm going to have to go vote, but I'm going to leave you in the very capable hands of Senator Cantwell who, as I said earlier, really knows probably more about this issue than anybody else in the Senate. So I'm not worried that you're going to get lost in the forest.
Sen. Maria Cantwell (D-WA):
Thank you, Chair Hickenlooper, and again, thank you to you and Senator Blackburn for holding this important hearing, and to all our witnesses participating; I'm sure it's been a robust discussion on many fronts. I wanted to go back to the particulars of what you all think we should do on the deepfake side. As we see technology being developed, and DARPA playing a pretty key role as it is in looking at deepfakes and deepfake information, what do you think is the landscape of a federal role in identification? Some have described a system of a watermark, some have described immediate information, similar to Amber alerts, or something of that nature. What do you all see as the key tools for effectiveness in developing a system to respond to deepfakes? And we'll just go right down the line.
Victoria Espinel:
So it's a very important issue. I think there's a lot of great work that is being done, some of it spearheaded by a BSA member company named Adobe that has been working on the Content Authenticity Initiative. I know a lot of that is focused on making sure that consumers have more accurate information that is truly, easily accessible, that they could access and use and take into account, about the generation of AI content and about whether or not that content has been manipulated or altered in other ways. But I also know that there are witnesses at this table that are devoting a great deal of their life and energy to that thought, so I'm going to cede the mic to them.
Dr. Ramayya Krishnan:
Senator, first, a broad comment about trust. I think trust is a system level construct. So when you think about humans interacting with machines, and machines interacting with machines, one needs to think about the ways in which we can enable trusted interactions, trusted transactions, to happen between them. Take deepfakes as an example. I think content labeling, and detection tools to go along with content labeling, is absolutely essential, so that when I'm interacting with a piece of content, I know whether it was actually AI produced, whether it's a deepfake. Equally, beyond the technology piece, you need education for individuals to know how to actually process this information so that they can arrive at the right outcome with regard to this interaction between human and machine. Similarly, you could also have machine to machine exchanges of data, where I produce a piece of video content and pass it on to another machine. This is where standards are important. This is where C2PA, the standard you heard about, combined with watermarking, could actually provide the trust infrastructure to address this deepfake problem.
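One way to read Dr. Krishnan's pairing of content labels with detection tools is the following sketch: a consuming system first checks for an attached label, and falls back to a detector only when no label is present. The field name, threshold, and detector here are illustrative assumptions, not any specific standard or product.

```python
from typing import Optional

def read_label(metadata: dict) -> Optional[str]:
    """Return the publisher-attached generation label, if any (e.g. 'AI-generated')."""
    return metadata.get("generation_label")

def detector_score(content: bytes) -> float:
    """Hypothetical stand-in for a trained deepfake detector; returns P(synthetic)."""
    return 0.5  # placeholder only; a real detector would analyze the media itself

def classify(content: bytes, metadata: dict) -> str:
    label = read_label(metadata)
    if label is not None:
        return f"label present: {label}"       # prefer attached provenance when available
    score = detector_score(content)            # fall back to detection for unlabeled media
    return "likely AI-generated" if score > 0.8 else "unlabeled; detection inconclusive"

print(classify(b"example media bytes", {"generation_label": "AI-generated"}))
```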
Sen. Maria Cantwell (D-WA):
Okay, Mr. Gregory.
Sam Gregory:
I believe there are a number of steps the federal government can take. The first is to have a strong understanding of the existing harms and impacts, and to really be able to understand where to prioritize, with the groups who are impacted. That includes harms we know already, like non-consensual sexual images, but also the growing number of scams. The second area would be to focus on provenance and to come up with a standardized way for people to understand both AI provenance and opt-in human generated provenance. The third would be to focus on detection. Detection is not a silver bullet, it is flawed, and its availability is still limited for the people who need it most on the front lines of journalism, human rights and democracy. So, continued investment from DARPA and others to really resource and support it in diverse circumstances. I believe there's a space for legislation around some specific areas, such as non-consensual sexual images, AI generated CSAM, and potentially political ads, that could be taken. And I believe it is the role also to look ahead and understand that this continuing ease of generation of synthetic media means that it will get more and more personalized, and this will have an impact in spaces like social media and platforms. So we should look ahead to those dimensions and be ready to consider them.
Sen. Maria Cantwell (D-WA):
Okay. Mr. Strayer,
Rob Strayer:
I won't repeat what's already been said, but two things. On the technical side, very much to emphasize the importance of having an open standard for provenance. And secondly, on the social dimension, digital literacy is going to be really important for these things to be implemented. So, bringing in other stakeholders, including the media platforms and consumers, on the digital literacy side, for how these tools will be implemented effectively.
Sen. Maria Cantwell (D-WA):
So who do you think should be in charge of this? Anybody? Mr. Gregory, you look like you're going to volunteer here.
Sam Gregory:
I'm going to volunteer, but I'm probably not the best placed. So I will note that I see good leadership from agencies like the FTC that have been doing good work to support consumers to date. So, supporting existing agencies that are doing good work with the resourcing and the support. In terms of the legislative gaps, I am not well placed to observe where those should come from. In terms of the R&D, I think that is broad support that ideally also goes outside of DARPA to other research facilities, and facilities more broadly in the US.
Dr. Ramayya Krishnan:
In my testimony, with regard to the content being produced, I think Congress should require closed source and open source models to actually create this watermarking label and a detection tool to go with the label. This is for images and video. Text is a huge issue as well, because you could have deepfakes with regard to text too, and I think research is needed there. So I think it's a combination of things, but I think Congress should take a leadership role.
Rob Strayer:
Understood. Congress obviously has a very important role to play. I also think that NIST is a place where, over time, we've seen them deal with some very difficult problems and come up with new profiles for addressing very specific challenges, developing standards that are universally accepted through that process. And so I think NIST has a key role to play here too.
Sen. Maria Cantwell (D-WA):
Well, that is why the original legislation that we did with the NAIAC was to establish getting everybody together and figuring out what we think the US government's role and responsibility should be. And while they haven't finished all of their findings, they've certainly made a list of directions and recommendations. And so I think they are a good place to look on this issue as well, at least from a discussion perspective. But today's hearing was about stimulating some input about the issues around that. And what you basically are saying is there's no fail safe way to do this. It's going to need constant participation, both on the side of making sure there are not mistakes. It's one of the reasons why I support getting a privacy bill that establishes a hard line against discriminatory action, because then you could always take action when somebody has suffered substantial harm.
I think the privacy framework we've already laid out would basically stop that kind of activity and protect people. We've heard a lot from the civil liberties community about this, about what you might see as online redlining, and you worry about something in the machine learning environment, a discriminatory tactic just being put into a system and then being there for years and years without anybody even understanding it was there, and all of a sudden all of these people don't have the same kind of shot at a loan that they wanted. And so this is something we definitely want to have a forceful bright line against, in my opinion, and say that if these kinds of activities do exist, we will stop them, and that we have a strong law on the books to prevent them from happening. What do you think on the collaboration level, from an international basis, as it relates to deepfakes and communication? Has anybody given thought to how that framework should operate?
Rob Strayer:
I would just point out one analogy from the past. There was a lot of challenge with violent extremist content online, roughly in the mid-2000s, post-9/11. There was something formed called the Global Internet Forum to Counter Terrorism, and that was really the major platforms, but then many other players came together, to form practices for getting this extremist content off the internet. And so some kind of multi-stakeholder group coming together to do this is probably one of the best ways that we can see this addressed expeditiously, as the problem will grow very quickly as well.
Sen. Maria Cantwell (D-WA):
Didn't Interpol play a big role in the early days of the internet in trying to do a similar thing, trying to police against pornography online and catching bad actors who were perpetrating content? Absolutely, yeah. And so that was where an international organization was working, and organizations working with them, to try to police, I guess, or create standards or information for people to stop those activities.
Rob Strayer:
Sort of a clearinghouse model. I think that's how they pursued it.
Sen. Maria Cantwell (D-WA):
And do you think that was successful?
Rob Strayer:
They were, I think, a big component of it. I think the United States shouldn't shy away from taking credit for a lot of work that it did bilaterally through the Department of Justice to educate foreign partners about the ways that they can address things like pornography that rise to that level, that it's criminal. So I think the United States has been a real leader in ensuring security and safety on the internet.
Sen. Maria Cantwell (D-WA):
Thank you. Mr. Gregory?
Sam Gregory:
To add there, one of the gaps that we see frequently, and we support local journalists who are trying to identify deepfakes as well as local civil society, is that they don't have access to skills and resources. So, looking at mechanisms to share skills and capacity, fellowships that would bring that expertise closer to the people who need it. The circumstance we see very frequently right now is people claiming that real content is AI generated and people being unable to prove it's real, and that is corrosive in many contexts around the world. And a lot of that is to do with the lack of access to skills and resources. So, thinking about opportunities for the US government to support that.
Sen. Maria Cantwell (D-WA):
So what would that be? Now you're talking about a subject very near and dear to my heart, and that is the erosion of local journalism by the commoditization of advertising, and, I would say, the unfair use by big companies of not giving media fair value for their content. They're using your content to keep the advertising revenue within their browser instead of it going to the Seattle Times or some other website. So this is a problem, and we have to fix that as well. But you're saying their job is truth, justice and the American way, and how can they detect that if they can't do the kind of investigations? Is that your point?
Sam Gregory:
Yes, that they don't have access to the tools that they need. And so as DARPA and others build tools, making sure they're accessible and relevant to journalists and others, and it's skills too, so that those are available, and that could be facilitated through existing programs that provide skill sharing. I agree with you. There is a larger context where this is but a small symptom of a broader challenge to journalism, where AI increases those challenges as well as provides opportunities for journalists to use it.
Sen. Maria Cantwell (D-WA):
Well, we definitely heard that in Seattle at our summit, that we already have a problem as it relates to keeping and saving local journalism. And I'm very concerned about it, because we've existed as a country for hundreds of years with this kind of oversight to make sure that the process we all participate in works and functions and the issues are brought up. And clearly we're seeing places in the United States where journalism has ceased to have a credible financial model. And thus we've seen the rise of a lot of very unfortunate issues, including corruption, because there's no one there to cover and watch the day-to-day. So it's a very interesting question you're posing, beyond what we do as a government in detecting deepfakes: how do you bring the oversight to those whose job is to do oversight?
Sam Gregory:
And whose job will get even more complicated in the coming years with the growth of AI generated content.
Sen. Maria Cantwell (D-WA):
Yeah. And so do you think that's about misinformation, or do you think it's bigger than just misinformation?
Sam Gregory:
I believe it's a question of misinformation to some extent. It's a question of the easy capacity to create a volume of information that journalists have to triage and interpret. And it is a question of that against the backdrop of a lack of resources.
Sen. Maria Cantwell (D-WA):
Okay. And so what would you do about that?
Sam Gregory:
In the US context, it's very hard to work out how to direct further resources toward local journalism. One option would be to consider, as we look at the way in which content is being ingested into AI models, whether there is any financial support to journalistic entities as they do that. This is obviously something that's been considered in the social media context in other countries. I don't know whether that would be a viable option to address local journalism's needs.
Sen. Maria Cantwell (D-WA):
How exactly would it work?
Sam Gregory:
I don't know the model that would work in our context. We've certainly seen other contexts globally where governments have looked for ways to finance journalism from social media, but it's not a viable option here in the US.
Sen. Maria Cantwell (D-WA):
Well, okay. I like that. The phraseology should be that local journalism is financing these websites and their models. That's what's happening here, and we just haven't been able to find the tools to claw that back. But if we have to go and look at this fair use issue, we'll go back and look at it, because we're not going to keep going in this direction. And AI is an accelerant. It's an accelerant on everything. The information age is posing challenges, and AI will accelerate that, but we have got to harness the things that we care about and make sure that we get them right, because we want the innovation, but we also want these particular issues to be resolved. So we certainly in Seattle have had that discussion. But, Mr.
Dr. Ramayya Krishnan:
Can I briefly comment on this? Go ahead. So on the first part, with regard to the tools, I do think that the kind of infrastructure for trust that we have built up for information security, with the CERT, with CISA for instance, that kind of capability, if you built it for AI as well, which could be fairly quickly stood up with FFRDCs, gives us the capacity, even across countries, to track deepfakes, even if they don't necessarily adhere to a content standard like C2PA. Because I don't think any individual organization has that capacity, but something like the CERT could have that capacity, because it will span .mil, .com, .gov concerns, and this capability and expertise will reside in something like that. That's with regard to your first question, with regard to how we manage and harmonize standards across countries. With regard to the second point, I think it's spot on with regard to value.
On the one hand, there's the capacity to license copyrighted content, and then how do you actually assess that on the input side. So if you think of the AI models as taking input data from, say, the Seattle Times or things of that nature, how do they declare, first, that they're using this data, and then compensate the Seattle Times fairly for the use of it? On the output side, the interesting question is: is it the case that the Seattle Times is getting more traffic from the ChatGPTs and the Googles of the world, or is it the case that the revenue that should have come to the Seattle Times is really going to ChatGPT or Bard? I mean, the argument has been that because they provide that entry point into the content, they're actually directing traffic that otherwise would not have found you. So I think that requires analysis and research of the traffic, with regard to who's going where and who's directing what to these sites, because I think that gets at this revenue point.
Sen. Maria Cantwell (D-WA):
Well, I'm pretty sure that about 25% of the traffic that's generated online that big sites are getting from news organizations is really revenue that belongs to those news organizations. Regardless of the commoditization of advertising, it is still revenue that belongs to the newspapers. And so, to my point about this: in a report that this committee did, at least when we were the authors of a report, we found that local journalism was the trusted news source. That is the point, in that you have many voices, and that's the ecosystem that keeps the trust. I mean, somebody could go awry, but guess what? The rest of the ecosystem keeps that trust. So I think the Seattle Times would say it's a very viable, identifiable source of trust. If you were creating information off of their historical database of all the stories the Seattle Times has ever published, which goes back a very long time, that's probably some of the most trusted journalistic information you could ever get, because they had to be in that business, right? But anybody who would then take that content and then, who knows, do what with it, obviously is a very, very different equation. So look, I want to go back to the international point for a second, because I do think you mentioned a lot of organizations, and I'm not sure everybody grasped, or maybe I didn't grasp, everything you were saying about that.
Do you think the NAIAC should be working in coordination right now with international organizations to discuss what a framework looks like? Or are you thinking this is more siloed within organizations like national security issues versus consumer issues versus other things?
Dr. Ramayya Krishnan:
So the NAIAC does have a group that Ms. Espinel leads as a working group, and the AI Futures working group that I lead, with regard to this trust infrastructure point that I was making, has been focused on that. It does have international implications, but perhaps Ms. Espinel can speak more to it.
Victoria Espinel:
So I have the honor of chairing the international working group for the NAIAC advisory committee. There are conversations that we're having internally about ways that NAIAC as a committee could be helpful, either in terms of making recommendations to the administration, which is our mandate, or otherwise as a committee. Some of them I can't talk about publicly here, although I'd be happy to have follow-up conversations. I can tell you about one, though, that I think goes to what you're talking about, which is that we believe it is very important, as governments are thinking about what the right approach is to regulating AI or to trying to address some of the concerns that have been raised by artificial intelligence, to make sure that those conversations are happening not just with the United States, not just with the United States and the EU, not just inside the G7 and the OECD, but to try to have that be a broad based conversation, including bringing in emerging economies that have not typically been as much a part of some of these discussions as I think should be the case.
And so I think if we are going to end up with solutions that are really effective, for example on deepfakes, that is going to have to be a global initiative. And I think it will be stronger and more effective if those discussions are happening with a group of countries that represent different perspectives. So emerging economies are going to have slightly different benefits and challenges; they need to be part of that discussion. But I'm probably overly passionate about it, so I feel like I've gone on a bit too long.
Sen. Maria Cantwell (D-WA):
No, no. The question I was trying to get at, as people listen: this committee passed this legislation, we created the NAIAC, we said, here are your responsibilities. We hope you've been thinking about this, because we've given you a few years to do so. I was wondering if the current thinking was a divide over the complexity of dealing with national security kinds of deepfakes versus commercial and citizen issues on deepfakes, and whether you had reached some conclusion on the international side, because there's a lot to this and a lot to communicate and coordinate, because obviously the worldwide web is a big open system. So you could say the United States is doing this, but you need others to participate. But the consumer issue is very different from how we deal with national security issues. And so has the organization come to any conclusion on that?
Victoria Espinel:
I think the short answer is no. Not to be overly legalistic, but there are significant restrictions on what I'm allowed to say in a public forum, and I want to be very careful not to cross any lines. So I can tell you that I think there are conversations happening about national security and consumers. On the point that you were talking about, I feel like it is fine for me to say that I don't see there being a real challenge, I don't see there being a lack of consensus, on national security versus consumer issues and being able to engage internationally on that.
Sen. Maria Cantwell (D-WA):
Well, they're just different organizations within our government, and I'm pretty sure they are internationally. So it just makes it challenging.
Victoria Espinel:
It makes it challenging. And so, this I can say, and I'll just say it in my capacity as CEO of BSA: you have, for example, the UK government hosting a global summit at the beginning of November. And I think one of the challenges they face is, if you're going to have a global summit that is intended to address the safety of artificial intelligence, which is what the UK has announced, who is going to be part of that summit and how many issues can they address? Because there are a myriad of challenges. And, as you say, they are often addressed by different parts of government. Speaking just in the context of the United States, I think having effective coordination across the federal government, I think there's more that could be done there. And I think that would be very, very helpful, because you don't want these issues to get siloed. You don't want agencies to be doing things that are duplicative or in conflict.
Dr. Ramayya Krishnan:
And I'll reach out to your office, Senator, about the trust infrastructure point that I made. I'm happy to provide additional information.
Sen. Maria Cantwell (D-WA):
Well, we all know that we have lots of international organizations that are working on coordination on lots of internet issues as it is today. I think the question is whether anybody with the NAIAC has come up with a framework before we start having these kinds of big discussions. So anyway, we'll get more information. I want to turn it back over to Chair Hickenlooper. Thank you so much for, again, holding this very important hearing. I see that our colleague from New Mexico is here, and so sorry to take up so much time. I thought I had a free opportunity while you were off voting. Thank you all to the panel too.
Victoria Espinel:
Thank you. If I could just say briefly, I'm excited to be here to testify in my capacity as CEO of BSA, but I'd also be happy to follow up with you in my capacity as a member of the NAIAC committee. Thank you.
Sen. John Hickenlooper (D-CO):
And didn't I tell you I was leaving you in good hands? I love coming in on the end of a discussion. Thank God. How did I miss that? And certainly the trust of traditional news, and how to make sure that they're fairly compensated for their costs. I don't think any of us knows a traditional news organization that is worth more than a fraction of what it was worth 15 years ago, just given the nature of what's happened. I turn it over to the good Senator from New Mexico.
Sen. Ben Ray Luján (D-NM):
Thank you, Mr. Chairman, thank you to you and to Senator Blackburn for this important hearing, and it's great to be able to listen to Chair Cantwell as well. To all the members of the panel, thank you so much for being here today and offering your expertise. One area I've spent a considerable amount of time on during my time in the Senate, surrounding broadband, I'll say, as opposed to AI, is the area of transparency and making things easier to understand. And what I mean by that is, Senator Cassidy and I introduced something called the TLDR Act, which was "too long, didn't read." And we all know what those agreements look like, pages and pages of documents. And before you can download or use something, you go down to a little box and it says, accept terms of agreement. And people click that and they move on, and they don't realize what's there.
I was also proud to see, and advocate for during the bipartisan infrastructure bill, something called nutrition labeling for broadband, to make it easier for consumers to compare services. Now, the same type of transparency and easy to understand documentation is also needed so that consumers know when they are interacting with AI, know how their information is going to be used, and know when content is generated by AI. And I'm working on legislation to require this type of transparency, disclosures and responsibilities. Now, Mr. Gregory, you mentioned that threats from synthetic media and disinformation most impact those already at risk, like minority groups that face other forms of threats and discrimination. Mr. Gregory, yes or no, are deepfakes and AI generated disinformation already being created in Spanish and other non-English languages?
Sam Gregory:
Yes.
Sen. Ben Ray Luján (D-NM):
And Mr. Gregory, yes or no? Do you believe technology companies invest enough resources into making sure AI systems work equally well in non-English languages?
Sam Gregory:
Systems are not as adequate in non-English languages, and there are fewer resources invested in making them applicable in non-English languages. And when it comes to deepfakes, many tools work less effectively for detection in less dominant languages outside of English.
Sen. Ben Ray Luján (D-NM):
And with that being said, Mr. Gregory, would the current system make communities in the US and globally even more vulnerable to the rise of synthetic media?
Sam Gregory:
This is exactly the concern that my organization has spent the last five years working on, is the vulnerability of communities in the US and globally to synthetic media because of their exclusion from access to tools and support around it.
Sen. Ben Ray Luján (D-NM):
Well, thank you for your work in that space and for bringing attention to it. I introduced a piece of legislation called the LISTOS Act this year to address the lack of non-English investment in multilingual large language models. Any action Congress takes on AI transparency must protect our most marginalized communities, including non-English speakers. And for those that don't know what "listos" means, it means ready, but it's also an acronym, so I credit the staff for coming up with an acronym that calls us to action to ensure that we all get ready. Now, AI systems are useful in a huge variety of scenarios, from making financial decisions to medical diagnosis and content moderation. And I believe government should also utilize AI for improving constituents' access to government services. This just goes to show the broad application of what AI can do. Now, Ms. Espinel, AI systems are used in different sectors and for different purposes, and these various systems can have different kinds of outputs.
For example, we have predictive AI making best guesses and generative AI creating content. I have a hard time keeping up with a lot of this stuff, but maybe the Insight Forum tomorrow will help a little, for folks like myself. But understandably, consumers and other users of these systems will have very different experiences interacting with these systems. Now, my question to you stems from whether people even know they are interacting with an AI system, which is not a given. So under that premise, isn't it then necessary that any transparency and oversight requirements be specific to the sector or use case of the AI system?
Victoria Espinel:
So you are absolutely correct that AI is used in many different ways and has many different outputs and many different consequences. So I would say a couple of things. One is, in terms of having transparency so that, for example, a consumer knows if they're interacting with an AI service like a chatbot, I think that's important. But I would also say, to build on that, that looking at the specific uses, perhaps as opposed to looking at a sector, and looking to see whether or not that use is going to lead to a consequential decision about someone's life, is it going to impact their ability to access government services, to access public benefits, to get a loan, to get an education, to get a job? And if it is, if it's going to have a consequential impact on someone's life, then we believe at BSA that companies should be required by legislation to do impact assessments, to identify those risks and then take steps to reduce those risks.
Sen. Ben Ray Luján (D-NM):
I appreciate that very much. Now, Professor, given the explosion of generative AI, which we know has serious implications for the integrity of our information ecosystems, do you agree that Congress should require tech companies to label any AI generated content with disclosures and metadata that includes information on how the content was created?
Dr. Ramayya Krishnan:
Thank you, Senator. Yes, they should. And I want to also thank you for your first remark about TLDR and the privacy nutrition labels. This is something that's absolutely essential in this space: create something that's easy for people to understand. And as I understand it, Apple introduced, two years ago, labels that are privacy schematics that are easy for people to understand, and those need to be adapted for the purposes of what it is you were introducing in your opening remarks.
Sen. Ben Ray Luján (D-NM):
I appreciate that very much, Professor. Now, Mr. Chairman, I have several other questions, but I see my time has expired; I've already exceeded it. I'll submit them for the record. But I just want to thank you all for what you have been doing, what you're going to continue to do, those that are participating in one form or another with the forum tomorrow, and especially for the follow-up coming out of this as well. And so, for those of you I did not speak to very much or at all, I have some questions for you as well and I'll get them to you. So I look forward to hearing back from you and your expertise on this subject. Thank you, Mr. Chairman.
Sen. John Hickenlooper (D-CO):
Thank you, Senator Luján. And once again, I thank all of you. I'm going to wrap it up unless we've missed something urgent; speak now or forever hold your peace, although that's from a different ceremony. Today's discussion is important to help shape the next steps for working with consumers to understand the benefits and ultimately to trust AI. Senators are going to be able to submit questions, not just Senator Luján but all the senators, additional questions for the record. The hearing record closes on September 26th. We ask witnesses to provide responses to the committee by October 10th. And again, thank you very much for your time. We're adjourned. Thank you.