US Senate AI Report Meets Mostly Disappointment, Condemnation

Gabby Miller, Justin Hendrix / May 16, 2024

On Wednesday, May 15, 2024, a bipartisan US Senate working group led by Majority Leader Sen. Chuck Schumer (D-NY) released a report titled "Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate." The report follows a series of off-the-record "educational briefings," a classified briefing, and nine "AI Insight Forums" hosted in the fall of 2023 that drew on the participation of more than 150 experts from industry, academia, and civil society.

While some industry voices praised the report and its recommendations – for instance, IBM Chairman and CEO Arvind Krishna issued a statement commending the report and lauding Congressional leaders, indicating the company intends to “help bring this roadmap to life” – civil society reactions were, with few exceptions, negative, with perspectives ranging from disappointment to condemnation.

Perhaps unsurprisingly, given that Sen. Schumer put innovation and competitiveness at the fore of the working group’s agenda, the most significant criticisms came from civil rights and social and economic justice advocates, who chastised the report for putting industry interests ahead of the public interest and for appearing to cater to Big Tech and defense priorities. Still others, including some who participated in the AI Insight Forums and have played prominent roles in setting AI policy in the Biden administration, faulted Sen. Schumer’s closed-door process and its lack of recognition of prior work in the space, including on legislation.

Tech Policy Press is collecting responses from experts, some shared with us directly and some posted publicly. While we cannot include every such statement here, please notify us if you have a unique perspective to share. We also accept contributions from readers who wish to provide a more complete analysis or response.

Dr. Alondra Nelson, Harold F. Linder Professor of Social Science at the Institute for Advanced Study

Dr. Nelson led the creation of the Biden-Harris administration's Blueprint for an AI Bill of Rights in 2022 as acting director of the White House Office of Science and Technology Policy.

The approach of developing bipartisan consensus on the front end of major AI policy reflects the urgency the American people are expressing on this issue. But that’s where service of the public’s interests ends in this effort.

A year-long, closed-door process on one of the most urgent policy priorities Americans have ever faced has produced a product stripped of our shared values of public benefit, corporate accountability, and protections of our fundamental rights and liberties. The bipartisan Roadmap for AI Policy starts where many attempts at landmark legislation end: stripped of a robust vision for a fairer future, whittled down by compromise.

Remarkably, there is no shortage of ideas to make this legislation better. The Biden-Harris administration has been working to enact a vision for a fairer American society in which the use of AI does not harm communities and any potential benefits are shared. This work is anchored in the 2022 Blueprint for an AI Bill of Rights I helped draft, a framework that centered on privacy, safety, anti-discrimination, notice of the deployment of AI tools and systems, and human recourse and fallback when AI is used. The Administration has worked with dispatch to achieve this vision, advancing an October 2023 executive order, binding guidance for federal agencies, enforcement actions, and many new policies for economic and national security, public health, competitiveness, and workers’ rights.

I’m hopeful that now that this roadmap has seen the light of day, the Senate goes to work to infuse it with the values and accountability that will make it worthy of this moment.

Suresh Venkatasubramanian, Director, Center for Tech Responsibility and Professor, Computer Science and Data Science, Brown University

Professor Venkatasubramanian helped author the Blueprint for an AI Bill of Rights during a 15-month appointment as an advisor to the White House Office of Science and Technology Policy.

My first take was on X, where I said this: “The new Senate ‘roadmap’ on AI carries strong ‘I had to turn in a final report to get a passing grade so I won't think about the issues and will just copy old stuff and recycle it’ vibes. I wonder if they used ChatGPT to generate it.”

Reading it more thoroughly, my overwhelming feeling is one of disappointment and sadness. The Insight Forum process, as skewed and overbalanced as it was, had the potential to bring together a variety of perspectives on AI governance and open up a genuinely innovative approach to legislating on AI that provided BOTH support for research and development AND guardrails and protections for people and communities subject to the vagaries of AI-driven systems.

It did none of this. The report reads like a mix of deep giveaways for defense projects, huge investments in extremely narrow technical investigations (in spite of participant after participant emphasizing the importance of supporting broad sociotechnical research), and superficial lip service paid to the most pressing concerns around AI – civil rights, bias mitigation, accountability, and trust. It fails to acknowledge the significant progress that the Executive Branch has made with the AI Bill of Rights, the executive order, the OMB guidance, and so many other agency actions, and doesn't try to consolidate or build on any of those efforts.

It's telling also to see which groups are celebrating it – those that have been implacably opposed to any kind of AI governance. When your most vocal supporters are the ones who benefit the most from inaction, that says something about the report you've put out.

Spencer Overton, Patricia Roberts Harris Research Professor of Law, GW Law School

Although women and people of color comprise over 70% of the U.S. population, today’s Senate bipartisan AI roadmap inadequately deals with the elephant in the room – bias in AI. Despite the fact that bias is one of the biggest challenges of AI, the 30-page report uses the word “bias” only three times—two of which are buried in the appendix. The report’s legalistic and weak language on bias signals to industry that key Senators think that deepfake ads against sitting politicians pose greater threats than AI bias that may deprive millions of fair employment, housing, and lending opportunities. If this tepid language on bias is what’s needed for a “bipartisan” report, it feels like a “Blue and Gray” Civil War reunion that muzzled the topic of Jim Crow segregation for fear of being “too divisive.”

Courtney C. Radsch, Director of the Center for Journalism and Liberty at Open Markets Institute (and board member at Tech Policy Press)

Unsurprisingly, the outcome document from a series of closed-door meetings that prominently featured the leaders of the world’s most powerful and wealthiest AI companies did little to address the core democratic challenges of governing these technologies. The AI policy roadmap resulting from Sen. Schumer’s much-lauded “AI listening tour” completely misses the mark on its efforts to address how to govern AI in the public interest or to ensure American innovation and competitiveness, since this would require dismantling the current concentration of power in AI that has given a handful of Big Tech companies vast control over key aspects of the AI value chain and undue influence over AI development.

Rather than calling for regulation to ensure that the AI sector is aligned with public interest objectives, requiring that AI companies comply with existing intellectual property laws, and redressing market imbalances that impede meaningful innovation while disrupting entire sectors of the economy with promises of vague “productivity gains” that more often than not result in greater profits to capital rather than labor, the report simply warns against imposing regulations that would “impede innovation.” Given the high concentration of power and monopolization of core aspects of the artificial intelligence value chain, from chips to compute to cloud to talent, by a handful of Big Tech companies, the failure to even mention market concentration and the implicit assumption that protecting the current bloated tech behemoths and their view of AI is somehow equivalent to innovation is deeply disturbing. The limited reference to regulation warns only of avoiding onerous ones that would “impede innovation,” parroting the tech industry perspective that has enabled companies to develop and deploy business models that are corrosive to democracy, develop products that promote addiction and perpetuate extremism, and avoid responsibility and liability for harms as diverse as surveillance, censorship and genocide.

And the only mention of journalism, my area of particular interest, is an oblique acknowledgement of the “concerns of professional content creators and publishers” and a suggestion that perhaps the Senate should consider the impacts of AI in this domain and develop legislation to address areas of concern. By the time this happens, it will undoubtedly be too little, too late. The Executive Director of Open Markets Institute advised Congress to ignore most of its recommendations, sage counsel given that it proposes funneling $32 billion in taxpayer funds into R&D, which risks subsidizing the wealthiest companies while further delaying actual legislation and regulation.

Robert Weissman, President, Public Citizen

With the roadmap punting legislative action to Senate committees, the question now is: Will Congress act to impose restraints on AI corporations to prevent them from inflicting serious damage on the nation and the world?

AI harms are pervasive – and fast worsening. These include racially biased algorithms making discriminatory health care, housing and criminal justice decisions; nonconsensual, intimate deepfakes disrupting the lives of tens of thousands of girls; worsening fraud, especially against seniors; privacy incursions and appropriation of creators’ work; intensifying electricity demand; corporate concentration; reckless moves toward deployment of autonomous weapons and more, much more. If Congress does not address these issues urgently, AI corporate power will grow stronger and the chances of realizing the great promise of AI and directing it to serve the broad public interest — rather than AI serving to concentrate wealth at the expense of the public — will rapidly diminish.

Understandably, the Senate roadmap focuses on “promoting innovation.” It’s important to note that the roadmap correctly understands efforts to develop privacy protections, eliminate racial bias, promote safety and more as pro-innovation. The choice between innovation and regulation is a false dichotomy. Regulation fosters smarter, safer and more sustainable innovation. The choice is not innovation or no innovation, but what kind of innovation and, most pointedly, innovation for whom.

Amba Kak and Sarah Myers West, AI Now Co-Executive Directors

Let’s be clear about what this process was about, from start to finish: despite a pressing need for regulatory intervention, in the end, the nine “insight forums” functioned as a stalling tactic. Rather than act on the mountains of evidence and ideas about what to do, momentum to regulate AI was instead diverted into a closed-door, industry-dominated process. The long list of proposals is no substitute for enforceable law – and these companies certainly know the difference, especially when the window to get anything into legislation is swiftly closing.

It’s on our elected officials to ensure that the interests of the public are front and center in defining the horizon of possibility for AI. That their vision only extends as far as another roadmap is disappointing, especially when the writing is already on the wall: what we need are stronger rules that put the onus on companies to demonstrate products are safe before they release them, ensure fair competition in the AI marketplace, and protect the most vulnerable in society from AI’s most harmful applications – and that’s just the start.

Concerningly, the report ushers in an ambitious proposal of $32 billion in taxpayer dollars for AI R&D under the banner of democratizing AI. This proposed investment risks further consolidating power back in AI infrastructure providers and replicating industry incentives – we’ll be looking for assurances to prevent this from taking place.

Cody Venzke, Senior Policy Counsel, ACLU

The AI policy roadmap sets us on the wrong path. Despite mountains of evidence provided by numerous organizations at multiple “AI Insight Fora,” the roadmap gives little acknowledgement of the risks to civil rights and civil liberties posed by artificial intelligence. For example, algorithmic systems used to screen tenants are prone to errors and incorrectly include criminal or eviction records tied to people with similar names; such errors fall hardest on Black and Latine people. Similarly, algorithmic tools already used widely in hiring and other employment decisions have been repeatedly shown to exacerbate barriers in the workplace on the basis of race, disability, gender and other protected characteristics. Similar disparities are well documented across crucial life opportunities – governmental benefits and services, healthcare, education, insurance and credit, the criminal legal system, and the child welfare system. And yet the roadmap merely states, “The AI Working Group acknowledges that some have concerns about the potential for disparate impact, including the potential for unintended harmful bias.”

This is not enough. Enforcement of existing civil rights laws should be reinforced with meaningful protections, such as creating standards for assessing when the risk to civil rights and civil liberties is too great to warrant use. People should be able to receive notice when AI is used in making decisions about them, and the law should require audits and assessments of algorithmic decision making, mitigation of discrimination and other harms, decommissioning of AI if mitigation is not possible, and recourse, including human alternatives.

Just as concerningly, the AI roadmap appears to double down on uses of AI in defense, national security, and intelligence — agencies that are already broadly using AI and whose use of AI has many of the most profound impacts on individuals’ civil rights and civil liberties. The roadmap supports increased funding for defense and national security uses but has virtually nothing that would mandate the adoption of robust safeguards for AI systems that contribute to surveillance, watchlisting, searches of travelers at the border, or even the use of lethal force. Efforts by the White House to protect civil rights in the age of AI already grant wide latitude and overbroad exceptions to national security agencies, and the roadmap would double down on that misstep. If the development of AI systems for national security purposes is an urgent priority for the country, then the adoption of critical protections by Congress and the executive branch is just as urgent. We cannot wait until dangerous systems have already become entrenched.

Maya Wiley, President and CEO of The Leadership Conference on Civil and Human Rights and The Leadership Conference Education Fund

No one lives a single-issue life, and technology reaches every facet of our lives. Today’s artificial intelligence legislative framework is a first small step towards ensuring a fair, just, and socially beneficial AI future. We welcome a bipartisan agreement as a beginning towards a transformational future of fairness, justice, and opportunity. But to create a transformative and sustainable nation of opportunity with emerging technologies like AI, we must have a powerful set of preventive guardrails. Elections, health care, housing, education, and the criminal-legal system are all places where people can be helped or harmed by technology, depending on how the government responds in this critical moment. Unfortunately, the framework’s focus on promoting innovation and industry overshadows the real-world harms that could result from AI systems.

We urge Congress to continue to work towards enacting policies that make AI equitable for all of us. AI is deeply intertwined in our lives, touching everything from education to employment and elections. ‘Black Box’ AI systems are making decisions for us and feeding us information based on troves of data — data that have already shown biases against historically marginalized communities. Congress must act to protect our civil rights, ensuring that our loved ones receive the life-saving health care they deserve, that our neighbors can count on job security and fair wages, and that our democracy is protected.

The Leadership Conference and our coalition partners have continued to be vocal in the need for any legislation related to AI to include civil rights protections, recognizing the large gaps in safeguards for individual privacy, workers’ rights, and election integrity. This framework recognizes elements from the American Privacy Rights Act of 2024, including minimizing individuals’ data footprint online, which is a positive sign that lawmakers across Capitol Hill are connecting the dots between privacy and AI. Federal lawmakers also acknowledged that workers, who face the greatest potential change to their livelihoods from AI, deserve a seat at the table in any AI policy discussion. Workers, and the unions that represent them, ought to be front and center in the fight for equitable AI.

We are sorely underprepared to face the potential threat of AI ahead of this year’s election, but there is positive movement in Congress. AI holds promise to help protect and expand our democracy by increasing our capacity to reach voters. It also, however, poses a great threat to turbocharge the spread of voting disinformation and hate speech online, stoking fear and distrust in our election infrastructure and sowing hate against communities of color. I have testified in front of Congress on the importance of addressing this issue and urged action to mitigate the impact of artificial intelligence on our elections. The Senate Rules Committee today considers three key pieces of legislation — the Protect Elections from Deceptive AI Act, the A.I. Transparency in Elections Act of 2024, and the Preparing Election Administrators for A.I. Act — all three of which are important steps to protect our elections. The roadmap acknowledges the need to address AI-generated election content, but it doesn’t go far enough to stop the potential turbocharged spread of disinformation that can result from the use of AI.

The Leadership Conference and our Center for Civil Rights and Technology will continue to work diligently to protect and advocate for civil rights protections to be included in every piece of legislation related to AI governance. Technological innovation is not truly innovative until it includes all of us.

Calli Schroeder, Senior Counsel and Global Privacy Counsel, Electronic Privacy Information Center (EPIC)

The Senate AI working group’s roadmap for AI speaks at great length of AI’s potential, but fails to adequately acknowledge or address AI’s existing harms. This mindset that AI adoption is both good and inevitable prevents the document from meaningfully engaging with the many criticisms and concerns surrounding AI. With a proposal of $32 billion for AI research and development and few meaningful measures to address AI safety and harmful impacts, this is a roadmap for the development of AI, but not a meaningful roadmap to actually regulate AI.

The roadmap fails to acknowledge the many well documented harms that AI already inflicts on Americans. On the same day that the Senate’s roadmap was released, EPIC released a report on the harms caused by Generative AI. The new report, Generating Harms II: Generative AI’s New & Continued Impacts, expands on our Generative AI harms report from last year, delving into the risks generative AI poses to elections, privacy rights, data function and quality, and creator rights over their content. In a new section, the report also sets out and categorizes the various state, federal, international, and private-sector proposals intended to counter, mitigate, or remedy generative AI harms.

The roadmap’s lack of attention to the very real harms caused by AI means it also lacks policy proposals to address those harms. This is a missed opportunity by the Senate AI working group to ensure that any AI innovation is done in a way that respects people’s rights and does not exacerbate harms. Congress should not make the same mistakes with AI that it did with data privacy – the time to regulate this technology is now.

Damon Hewitt, President and Executive Director of the Lawyers’ Committee for Civil Rights Under Law

We are deeply disappointed that the AI framework does little to advance serious proposals for protecting civil rights. While the report briefly mentions issues that the civil rights community has raised consistently, it is completely devoid of substantive recommendations or legislative steps to address them. In that regard, what is billed as a roadmap seems more like a treadmill – lots of energy expended, but little forward movement.

Congress’s work is incomplete unless it follows through with concrete legislation, and time is running out. We urge Congress to enact protections like those in the Lawyers’ Committee’s model legislation, the “Online Civil Rights Act,” which includes a prohibition on algorithmic discrimination, establishes a duty of care for the use of AI, requires pre- and post-deployment testing for bias and other harms, gives individuals a right to an explanation for how AI affects them, and ensures data used for AI is kept private and secure. These measures will help to ensure meaningful transparency, safety, and fairness in AI technologies.

With unregulated AI-driven decisions impacting equal opportunity and the general election season around the corner, it is imperative that Congress move swiftly to regulate AI and protect civil rights online. We cannot afford to wait another year for legislation to protect our rights amidst a rapidly growing technological landscape. The time to act is now.

Willmary Escoto, Esq., US Policy Counsel, Access Now

The Senate working group's AI framework signals movement towards bipartisan support for ensuring that existing consumer protection and civil rights laws apply to AI developers and deployers. We commend the framework's focus on data minimization and its recognition of the need for a strong federal data privacy law—something for which Access Now has consistently advocated. It's past time Congress curbed the rampant collection and misuse of personal data, especially by data brokers.

Unfortunately, the framework lacks the policy focus or necessary teeth to prevent the significant harms AI is inflicting right now on vulnerable populations and our civil rights. The framework must be followed by concrete legislative actions to ensure it is more than just guidance. The desire to tackle AI-generated election disinformation is another commendable aspect of the framework. But it’s crucial to avoid viewing watermarking as a silver bullet. Watermarking alone can’t address the challenges posed by AI-generated content. We need robust protections for personal data and accountability for AI developers and users to truly mitigate the human rights harms of AI.

Matt Mittelsteadt, Research Fellow at the Mercatus Center at George Mason University

The broad AI package has many positives, including efforts to build our public and private sector talent bases, to analyze and clarify existing rules, to invest in talent development and upskilling, to measure AI’s economic impacts, and to investigate potentially fine-grained transparency requirements. Elements of the AI package released by the bipartisan Senate group could aid responsible AI diffusion, but unfortunately, paired with the good are multiple provisions tuned to grapple with overhyped, unempirical apocalyptic risks.

Negative pieces include new AI export control authorities and even the possibility of a restricted data regime modeled on our approach to nuclear secrets. As policymakers get to work, rather than imagining the apocalyptic, their focus should fall on preserving innovation, empirical challenges, and the changes needed to ensure rapid, responsible AI diffusion.

Nicole Gill, Accountable Tech Co-Founder and Executive Director

The AI roadmap released today by Sen. Schumer is but another proof point of Big Tech’s profound and pervasive power to shape the policymaking process. The last year of closed-door ‘Insight Forums’ has been a dream scenario for the tech industry, which played an outsized role in developing this roadmap and delaying legislation.

The report itself is most concrete in offering a roadmap for industry priorities while merely hand-waving toward many of the most pressing challenges associated with the widespread adoption of AI: ensuring AI applications do not further entrench existing inequalities, regulating its use in the upcoming elections, and preventing an even more rapid erosion of our climate through the demand for energy, to name a few. Lawmakers must move quickly to enact AI legislation that centers the public interest and addresses the damage AI is currently causing in communities all across the country.

Hodan Omaar, Senior Policy Manager, Center for Data Innovation

This roadmap for AI policy shows Congress is listening to those who have called on policymakers to ensure the United States remains the global leader in AI. By investing in and prioritizing AI innovation, the United States is helping safeguard its position and creating a framework for policy that recognizes the enormous societal and economic benefits AI can bring to sectors such as health care, transportation, and education.

The roadmap will help the United States steer clear of the pitfalls Europe is encountering. EU policymakers have failed to prioritize AI innovation and adoption, plunging instead into a stringent regulatory regime that now has them worrying whether they have shot themselves in the foot and whether they will ever catch up with the U.S. economy. The roadmap suggests Congress is learning from that cautionary tale.

However, this roadmap is designed to spur a wave of legislative activity in Congress to address concerns about AI—privacy, safety, workforce disruptions, etc.—and the challenge for Congressional lawmakers will be to pick the right policy solution for each concern. They should recognize that certain issues may require new regulations, but many can be addressed by legislation that sets guidelines, promotes certain practices, or incentivizes desired behaviors. For example, the House recently passed a bill supporting the development of privacy-enhancing technologies, which is a great example of non-regulatory legislation that will help address some of the privacy concerns related to AI. Being discerning about what concerns merit responses and what types of policy action they warrant will help ensure policymakers craft targeted, impactful, and effective policies to address the real challenges AI poses while avoiding unnecessary regulatory burdens that will stifle innovation.

Finally, the roadmap leans heavily on investments to support AI research and development, but policymakers should recognize that the benefits of AI are not going to be realized by only improving AI development. The United States also needs a multipronged national AI adoption plan to ensure these opportunities are translated into all the areas where they can make a positive difference in people’s lives. It is therefore critical that Congress focus on crafting policies that accelerate the adoption of AI in both the public and private sectors.

Alexandra Reeve Givens, President & CEO, Center for Democracy & Technology

The Roadmap rightly acknowledges key risk areas that AI experts have warned about for years. But after a year of work, Congress needs to do more than acknowledge those issues; it needs to act. It’s not enough to focus on investments in innovation, as the Roadmap does in detail – we also need guardrails to ensure the responsible development of AI.

The EU has moved ahead, and in the US states are starting to fill the void. Congress should be advancing serious legislative proposals that protect people's privacy and civil rights, with requirements for transparency, explainability, testing and evaluation that provide genuine accountability.

Work is starting in committees (a good thing), but it's slow and diffuse. So the real test will be whether the bipartisan AI group will now lead and help deliver on the ‘guardrails’ portions of the roadmap – protecting consumers and advancing responsible, trustworthy AI.

Janet Haven, Executive Director, Data & Society Research Institute

For those who expect that their government's role is to defend the public interest, protect civil rights, and shield workers and consumers from anticompetitive practices, the Senate AI Working Group's new AI legislative road map is a major disappointment.

What we should expect from our lawmakers are legislative interventions that drive innovation toward rights-respecting, people-centered AI systems; enhance worker voice over tech; and set the conditions for an industry that is both competitive and environmentally sustainable. The road map ignores the evidence we already have to design the protections and accountability ecosystem around AI that we need now, and instead calls for massive public investment to support the priorities of the tech industry.

This should have been a moment for congressional leadership to put forward tangible legislative proposals to protect people from AI’s harms. What we got instead is a warmed-over white paper of suggestions for Congress to “consider” or “explore.” The public deserves better.

Chris Lewis, President and CEO of Public Knowledge

The Senate AI Working Group’s Insight Forums process has produced an important document, the AI Roadmap, but we also hope the process has produced senators who are more informed about the threats and opportunities that AI presents. The education part of this process is incredibly important for two reasons. First, artificial intelligence is still developing and evolving. A baseline of knowledge is critical to avoid policy mistakes about the current technology, let alone where it will be in future years. Second, the Insight Forums and the roadmap both demonstrated that there are many competing viewpoints about artificial intelligence. A baseline of knowledge is important to see what values are prioritized and which are left out in this high-level consensus document.

The roadmap highlights eight important areas for immediate action that reflect some key public interest values. First and foremost, it is critical that the roadmap lifts up the importance of fighting well-documented existing harms, including bias and discrimination that are amplified by poorly designed, or poorly used, artificial intelligence, as well as the labor challenges that arise from introducing AI into industry. Along with the threat of the spread of disinformation and its impact on democratic institutions, these existing harms have the power to unravel our society. The public must remain vigilant to ensure that subsequent legislation adequately addresses these harms. We are also glad to see support for a comprehensive privacy law in the roadmap, a baseline protection that is required for AI and platforms that use AI to have the trust of their users. Finally, we appreciate the emphasis on the threat of deepfakes, going beyond the impact on public figures to the average person. Everyone’s name, image, and likeness must remain protected, especially from the threat of non-consensual intimate imagery.

The roadmap unfortunately does not adequately address or is silent on three important issues that support key public interest values. First, the roadmap does not sufficiently ensure competition in this evolving marketplace. Public Knowledge supports the National Artificial Intelligence Research Resource, or NAIRR, to promote research and innovation, but more is needed to ensure that the gatekeeper power of Big Data, computing power, and network effects do not lock in early innovators as dominant AI monopolies. Other kinds of public investment in lowering barriers to entry must be studied and created, including the possibility of Public AI systems. Regulatory action to prevent self-preferencing and other anticompetitive business practices is also needed. Second, there is no mention in the roadmap of the importance of protecting fair use under copyright law in AI and digital markets, which is critical for protecting equity and creativity online. Third, the roadmap does not present a vision for sustainable oversight and accountability of artificial intelligence. Public Knowledge has long advocated for an expert digital regulator empowered to protect the public interest and with the expertise and authority to regulate the development of artificial intelligence and other digital platforms as they evolve.

With further analysis of the roadmap, we look forward to continuing to work with senators to address this evolving technology and key public issues as AI is used increasingly in many parts of our lives. We encourage the public to remain vigilant as legislation rapidly moves forward and demand that all public interest values be considered alongside the wave of industry lobbyists working to shape the future regulatory posture of our government.

Evan Greer, Director, Fight for the Future

Sen. Schumer’s new AI framework reads like it was written by Sam Altman and Big Tech lobbyists. It’s heavy on flowery language about ‘innovation’ and pathetic when it comes to substantive issues around discrimination, civil rights, and preventing AI-exacerbated harms. The framework eagerly suggests pouring Americans’ tax dollars into AI research and development for military, defense, and private sector profiteering. Meanwhile, there’s almost nothing meaningful around some of the most important and urgent AI policy issues like the technology’s impact on policing, immigration, and workers’ rights. There is no serious discussion of open source in the document, exposing a strong bias toward Big Tech dominance in the AI space. None of this is all that surprising given that companies like Clearview AI, Microsoft, and Amazon got far more air time during the process of creating this report than human rights groups and AI policy experts. It seems that lawmakers in DC are less interested in regulating responsibly and more interested in rubbing elbows with CEOs and currying favor with those who stand to profit from unfettered AI. This roadmap leads to a dead end.

Todd O’Boyle, Senior Director of Technology Policy, Chamber of Progress

This framework takes a smart approach to AI policymaking, examining the opportunities and challenges that AI poses and addressing each of those directly. What you won’t find in this framework is AI scaremongering or calls for a one-size-fits-all federal agency to regulate this wide-ranging technology. There’s lots of work to be done filling in the details, but what the Senate working group has laid out is a strong start.

Merve Hickok, President, Center for AI and Digital Policy

The AI Issues Report is a missed opportunity. Poll after poll made clear that Americans see the impact of AI on education, housing, employment, credit, and criminal justice, and they are concerned. They want the government to establish guardrails, to ensure accountability and oversight as these technologies are deployed. Support for AI legislation is bipartisan and widespread.

Early on, the Center for AI and Digital Policy objected to the secrecy of the AI Issues Forum. We said Senator Schumer was right to prioritize AI policy but wrong to hold critical meetings behind closed doors. The outcome is not surprising – the powerful had their way, the public was ignored.

CAIDP has worked with many governments on AI policy. We are encouraged by lawmakers' efforts to pass laws, including the EU AI Act and the Council of Europe AI Treaty. These laws establish necessary safeguards for AI and make genuine innovation possible. Our annual report, the AI and Democratic Values Index, makes clear the worldwide effort to enact legislation for AI governance.

We remain deeply concerned about the situation in the United States. There are many good bipartisan initiatives. There is public support for passage. But so far, there is little progress. We urge Senator Schumer to move the pending proposals forward. The longer the Congress delays, the more difficult these problems will become.

Authors

Gabby Miller
Gabby Miller is a staff writer at Tech Policy Press. She was previously a senior reporting fellow at the Tow Center for Digital Journalism, where she used investigative techniques to uncover the ways Big Tech companies invested in the news industry to advance their own policy interests. She’s an alu...
Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & ...
