US Senate AI Working Group Releases Policy Roadmap

Gabby Miller, Ben Lennett, Justin Hendrix / May 15, 2024

Image: The US Senate’s ninth ‘AI Insight Forum’ focused on national security, Dec. 6, 2023.

On Wednesday, May 15, 2024, a bipartisan US Senate working group led by Majority Leader Sen. Chuck Schumer (D-NY), Sen. Mike Rounds (R-SD), Sen. Martin Heinrich (D-NM), and Sen. Todd Young (R-IN) released a report titled "Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate." The 31-page report follows a series of off-the-record "educational briefings," including "the first ever all-senators classified briefing focused solely on AI," and nine "AI Insight Forums" hosted in the fall of 2023 that drew on the participation of more than 150 experts from industry, academia, and civil society.

The report makes a number of recommendations on funding priorities, the development of new legislation, and areas that require further exploration. It also encourages the executive branch to share information "in a timely fashion and on an ongoing basis" about its AI priorities and "any AI-related Memorandums of Understanding with other countries and the results from any AI-related studies in order to better inform the legislative process."

Earlier this month, Sen. Schumer teased the policy roadmap at the AI Expo for National Competitiveness, an event hosted by the Special Competitive Studies Project (SCSP). During his interview with Washington AI Network Founder Tammy Haddad, Sen. Schumer reiterated that innovation is his “North Star,” but noted that the Senate’s AI Insight Forums established broad consensus that guardrails are needed to ensure AI is safe and reliable. “If innovation comes at the cost of America's economic security or civil rights or liberties, it's going to limit AI's potential,” Sen. Schumer said. (SCSP is chaired by former Google executive Eric Schmidt, who was one of only two attendees at the AI Insight Forums invited to participate in more than one session.)

At the event, Sen. Schumer also gave a rundown of what to expect from the Senate in the coming months. “We're not going to wait to have one huge comprehensive plan that touches on everything,” he said. Instead, different committees will use the AI policy roadmap and translate it into more concrete legislation. Sen. Amy Klobuchar (D-MN), for instance, is almost ready to go on establishing rules for the use of AI in elections, according to Sen. Schumer. “If we have these deep fakes and no one believes that democracy worked in 2024, it could really stunt the growth of AI and even stunt the growth of democracy,” he said.

What follows are key funding and legislative priorities listed in the report, as well as areas the working group identified as worthy of further study.

1. Funding priorities

The report suggests that Congress should act on the following funding priorities:

Research and development.

  • “Funding for a cross-government AI research and development (R&D) effort, including relevant infrastructure that spans the Department of Energy (DOE), Department of Commerce (DOC), National Science Foundation (NSF), National Institute of Standards and Technology (NIST), National Institutes of Health (NIH), National Aeronautics and Space Administration (NASA), and all other relevant agencies and departments.”

Innovation, labs, and workforce programs.

  • “Funding the outstanding CHIPS and Science Act (P.L. 117-167) accounts not yet fully funded,” including for programs at the National Science Foundation and the Departments of Commerce and Energy.


  • “Funding, as needed, for the DOC, DOE, NSF, and Department of Defense (DOD) to support semiconductor R&D specific to the design and manufacturing of future generations of high-end AI chips” to maintain US competitiveness.

Expansion of the National AI Research Resource (NAIRR).

  • “Authorizing the National AI Research Resource (NAIRR) by passing the CREATE AI Act (S. 2714) and funding it as part of the cross-government AI initiative, as well as expanding programs such as the NAIRR and the National AI Research Institutes to ensure all 50 states are able to participate in the AI research ecosystem.”

Challenge programs.

  • “Funding a series of ‘AI Grand Challenge’ programs” similar to those run by “the Defense Advanced Research Projects Agency (DARPA), DOE, NSF, NIH, and others like the private sector XPRIZE.”

Bolstering NIST.

  • “Funding for AI efforts at NIST, including AI testing and evaluation infrastructure and the U.S. AI Safety Institute, and funding for NIST’s construction account to address years of backlog in maintaining NIST’s physical infrastructure.”

Bolstering the Bureau of Industry and Security (BIS).

  • “Funding for the Bureau of Industry and Security (BIS) to update its information technology (IT) infrastructure and procure modern data analytics software; ensure it has the necessary personnel and capabilities for prompt, effective action; and enhance interagency support for BIS’s monitoring efforts to ensure compliance with export control regulations.”

AI and robotics.

  • “Funding R&D activities, and developing appropriate policies, at the intersection of AI and robotics to advance national security, workplace safety, industrial efficiency, economic productivity, and competitiveness, through a coordinated interagency initiative.”

Materials research.

  • “Supporting a NIST and DOE testbed to identify, test, and synthesize new materials to support advanced manufacturing through the use of AI, autonomous laboratories, and AI integration with other emerging technologies, such as quantum computing and robotics.”

Election administration.

  • “Providing local election assistance funding to support AI readiness and cybersecurity through the Help America Vote Act (HAVA) Election Security grants.”

Modernization of government services.

  • “Providing funding and strategic direction to modernize the federal government and improve delivery of government services, including through activities such as updating IT infrastructure to utilize modern data science and AI technologies and deploying new technologies to find inefficiencies in the U.S. code, federal rules, and procurement programs.”

Interagency coordination on AI and infrastructure.

  • “Supporting R&D and interagency coordination around the intersection of AI and critical infrastructure, including for smart cities and intelligent transportation system technologies.”

National security threats and risks.

  • The report also says that the working group “supports funding, commensurate with the requirements needed to address national security threats, risks, and opportunities, for AI activities related to defense in any emergency appropriations for AI,” and lists a variety of priorities across defense and security agencies.

2. Development of new legislation

The report proposes the development of a raft of new legislation.

US innovation in AI.

  • “Encourages the relevant committees to develop legislation to leverage public-private partnerships across the federal government to support AI advancements and minimize potential risks from AI.”
  • “Encourages the relevant committees to address the unique challenges faced by startups to compete in the AI marketplace, including by considering whether legislation is needed to support the dissemination of best practices to incentivize states and localities to invest in similar opportunities as those provided by the NAIRR.”

AI and the workforce.

  • “Development of legislation related to training, retraining, and upskilling the private sector workforce to successfully participate in an AI-enabled economy. Such legislation might include incentives for businesses to develop strategies that integrate new technologies and reskilled employees into the workplace, and incentives for both blue- and white-collar employees to obtain retraining from community colleges and universities.”
  • “The relevant committees to consider legislation to improve the U.S. immigration system for high-skilled STEM workers in support of national security and to foster advances in AI across the whole of society.”
  • “The AI Working Group is encouraged by the Workforce Data for Analyzing and Tracking Automation Act (S. 2138) to authorize the Bureau of Labor Statistics (BLS), with the assistance of the National Academies of Sciences, Engineering, and Medicine, to record the effect of automation on the workforce and measure those trends over time, including job displacement, the number of new jobs created, and the shifting in-demand skills. The bill would also establish a workforce development advisory board composed of key stakeholders to advise the U.S. Department of Labor on which types of public and private sector initiatives can promote consistent workforce development improvements.”

High impact uses of AI.

  • “The AI Working Group believes that existing laws, including related to consumer protection and civil rights, need to consistently and effectively apply to AI systems and their developers, deployers, and users.”
  • “We encourage the relevant committees to consider identifying any gaps in the application of existing law to AI systems that fall under their committees’ jurisdiction and, as needed, develop legislative language to address such gaps. This language should ensure that regulators are able to access information directly relevant to enforcing existing law and, if necessary, place appropriate, case-by-case requirements on high-risk uses of AI, such as requirements around transparency, explainability, and testing and evaluation.”

Protecting children.

  • “Develop legislation to address online child sexual abuse material (CSAM), including ensuring existing protections specifically cover AI-generated CSAM. The AI Working Group also supports consideration of legislation to address similar issues with non-consensual distribution of intimate images and other harmful deepfakes.”
  • “Consider legislation to protect children from potential AI-powered harms online by ensuring companies take reasonable steps to consider such risks in product design and operation. Furthermore, the AI Working Group is concerned by data demonstrating the mental health impact of social media and expresses support for further study and action by the relevant agencies to understand and combat this issue.”

Discrimination and social scoring.

  • “Consider legislation to ban the use of AI for social scoring, protecting our fundamental freedom in contrast with the widespread use of such a system by the CCP.”

Health care.
  • “Consider legislation that both supports further deployment of AI in health care and implements appropriate guardrails and safety measures to protect patients, as patients must be front and center in any legislative efforts on health care and AI. This includes consumer protection, preventing fraud and abuse, and promoting the usage of accurate and representative data.”
  • “Consider legislation that would provide transparency for providers and the public about the use of AI in medical products and clinical support services, including the data used to train the AI models.”

Elections and democracy.

  • “The AI Working Group encourages the relevant committees and AI developers and deployers to advance effective watermarking and digital content provenance as it relates to AI-generated or AI-augmented election content.”

Privacy and liability.

  • “The AI Working Group encourages the relevant committees to consider whether there is a need for additional standards, or clarity around existing standards, to hold AI developers and deployers accountable if their products or actions cause harm to consumers, or to hold end users accountable if their actions cause harm, as well as how to enforce any such liability standards.”
  • “The AI Working Group encourages the relevant committees to explore policy mechanisms to reduce the prevalence of non-public personal information being stored in, or used by, AI systems, including providing appropriate incentives for research and development of privacy-enhancing technologies.”
  • “The AI Working Group supports a strong comprehensive federal data privacy law to protect personal information. The legislation should address issues related to data minimization, data security, consumer data rights, consent and disclosure, and data brokers.”

Transparency, explainability, intellectual property, and copyright.

  • “Consider developing legislation to establish a coherent approach to public-facing transparency requirements for AI systems, while allowing use case specific requirements where necessary and beneficial, including best practices for when AI deployers should disclose that their products use AI, building on the ongoing federal effort in this space. If developed, the AI Working Group encourages the relevant committees to ensure these requirements align with any potential risk regime and do not inhibit innovation.”
  • “Consider developing legislation that incentivizes providers of software products using generative AI and hardware products such as cameras and microphones to provide content provenance information and to consider the need for legislation that requires or incentivizes online platforms to maintain access to that content provenance information. The AI Working Group also encourages online platforms to voluntarily display content provenance information, when available, and to determine how to best display this provenance information by default to end users.”
  • “Consider whether there is a need for legislation that protects against the unauthorized use of one’s name, image, likeness, and voice, consistent with First Amendment principles, as it relates to AI. Legislation in this area should consider the impacts of novel synthetic content on professional content creators of digital media, victims of non-consensual distribution of intimate images, victims of fraud, and other individuals or entities that are negatively affected by the widespread availability of synthetic content.”
  • “Consider legislation aimed at establishing a public awareness and education campaign to provide information regarding the benefits of, risks relating to, and prevalence of AI in the daily lives of individuals in the United States. The campaign, similar to digital literacy campaigns, should include guidance on how Americans can learn to use and recognize AI.”

Safeguarding against AI risks.

  • “Develop legislation aimed at advancing R&D efforts that address the risks posed by various AI system capabilities, including by equipping AI developers, deployers, and users with the knowledge and tools necessary to identify, assess, and effectively manage those risks.”

National security.

  • “Encourages the relevant committees to develop legislation to improve lateral and senior placement opportunities and other mechanisms to improve and expand the AI talent pathway into the military.”
  • “Develop legislation to set up or participate in international AI research institutes or other partnerships with like-minded international allies and partners, giving due consideration to the potential threats to research security and intellectual property.”
  • “Develop legislation to expand the use of modern data analytics and supply chain platforms by the Department of Justice, DHS, and other relevant law enforcement agencies to combat the flow of illicit drugs, including fentanyl and other synthetic opioids.”

3. Areas for further study

The report makes a variety of recommendations for further study by Congressional committees and the federal government, including recognizing “the rapidly evolving state of AI development” and supporting “further federal study of AI, including through work with Federally Funded Research and Development Centers (FFRDCs).”

AI and the workforce.

  • “Exploration of the implications and possible solutions (including private sector best practices) to the impact of AI on long-term future of work as increasingly capable general purpose AI systems are developed that have the potential to displace human workers, and to develop an appropriate policy framework in response, including ways to combat disruptive workforce displacement.”

High impact uses of AI.

  • “Supports Section 3 of S. 3050, directing a regulatory gap analysis in the financial sector, and encourages the relevant committees to develop legislation that ensures financial service providers are using accurate and representative data in their AI models, and that financial regulators have the tools to enforce applicable law and/or regulation related to these issues.”
  • “Encourages the relevant committees to investigate the opportunities and risks of the use of AI systems in the housing sector, focusing on transparency and accountability while recognizing the utility of existing laws and regulations.”
  • “Recognizes the AI-related concerns of professional content creators and publishers, particularly given the importance of local news and that consolidation in the journalism industry has resulted in fewer local news options in small towns and rural areas. The relevant Senate committees may wish to examine the impacts of AI in this area and develop legislation to address areas of concern.”
  • “Explore mechanisms, including through the use of public-private partnerships, to deter the use of AI to perpetrate fraud and deception, particularly for vulnerable populations such as the elderly and veterans.”
  • “Consider policies to promote innovation of AI systems that meaningfully improve health outcomes and efficiencies in health care delivery. This should include examining the Centers for Medicare & Medicaid Services’ reimbursement mechanisms as well as guardrails to ensure accountability, appropriate use, and broad application of AI across all populations.”

Privacy and liability.

  • “Evaluate whether there is a need for best practices for the level of automation that is appropriate for a given type of task, considering the need to have a human in the loop at certain stages for some high impact tasks.”

Safeguarding against AI risks.

  • “Explore whether there is a need for an AI-focused Information Sharing and Analysis Center (ISAC) to serve as an interface between commercial AI entities and the federal government to support monitoring of AI risks.”

National security.

  • “The AI Working Group recognizes the DOD’s transparency regarding its policy on fully autonomous lethal weapon systems. The AI Working Group encourages relevant committees to assess whether aspects of the DOD’s policy should be codified or if other measures, such as notifications concerning the development and deployment of such weapon systems, are necessary.”
  • “Recognizes the significant level of uncertainty and unknowns associated with general purpose AI systems achieving AGI. At the same time, the AI Working Group recognizes that there is not widespread agreement on the definition of AGI or threshold by which it will officially be achieved. Therefore, we encourage the relevant committees to better define AGI in consultation with experts, characterize both the likelihood of AGI development and the magnitude of the risks that AGI development would pose, and develop an appropriate policy framework based on that analysis.”
  • “Encourages the relevant committees to explore potential opportunities for leveraging advanced AI models to improve the management and risk mitigation of space debris. Acknowledging the substantial efforts by NASA and other interagency partners in addressing space debris, the AI Working Group recognizes the increasing threat space debris poses to space systems. Consequently, the AI Working Group encourages the committees to work with agencies involved in space affairs to discover new capabilities that can enhance these critical mitigation efforts.”
  • “Develop a framework for determining when, or if, export controls should be placed on powerful AI systems.”
  • “Develop a framework for determining when an AI system, if acquired by an adversary, would be powerful enough that it would pose such a grave risk to national security that it should be considered classified, using approaches such as how DOE treats Restricted Data.”


Gabby Miller
Gabby Miller is a staff writer at Tech Policy Press. She was previously a senior reporting fellow at the Tow Center for Digital Journalism, where she used investigative techniques to uncover the ways Big Tech companies invested in the news industry to advance their own policy interests.
Ben Lennett
Ben Lennett is a contributing editor for Tech Policy Press and a writer and researcher focused on understanding the impact of social media and digital platforms on democracy. He has worked in various research and advocacy roles for the past decade.
Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist.