Perspective

Shaping AI Standards to Protect America’s Most Vulnerable: Tech Innovators

Serena Oduro / Jun 16, 2025

Serena Oduro is a Policy Manager at Data & Society.

Secretary of Commerce Howard Lutnick listens as President Donald Trump gives remarks during an official State Dinner at Lusail Palace in Doha, Qatar, Wednesday, May 14, 2025. (Official White House Photo by Daniel Torok)

While the Trump administration’s policies destabilize the lives and livelihoods of many groups in the United States – from immigrants to researchers to current and former employees of the federal government – there is one group finding comfortable footing: the tech industry. From the image of Big Tech CEOs seated front and center at the inauguration to a raft of executive orders that seek to accelerate adoption of artificial intelligence to the Republican attempt to pass a state AI policy moratorium, tech firms are taking advantage of the moment.

In this context, the change announced June 3 by Secretary of Commerce Howard Lutnick, reforming the National Institute of Standards and Technology’s (NIST) US AI Safety Institute (USAISI), established under the Biden-Harris administration, into the US Center for AI Standards and Innovation (CAISI), is unsurprising. But understanding the potential implications of the entity's new mission is important for those concerned with tech accountability.

The Secretary’s press release announcing the change highlights that the purpose of the center will be to shore up Commerce’s ability to understand AI capabilities and identify homegrown and foreign AI threats and vulnerabilities. With its new brand and management, CAISI has an even greater focus on national security and American competitiveness: it centers research on security, assesses American AI capabilities against those of international counterparts, and represents American AI interests to “guard against burdensome and unnecessary regulation of American technologies by foreign governments” and “to ensure US dominance of international AI standards.”

As “industry’s primary point of contact,” CAISI diverges from the USAISI’s mission to advance the science of AI safety through multi-stakeholder collaboration, including with academia and civil society. Because NIST and CAISI are housed under the Department of Commerce, there was always tension in NIST’s charge to represent, and be a resource for, industry, academia, and civil society perspectives alike. Integrating the demands of academia and civil society into NIST’s deliverables as required under President Biden’s Executive Order on AI was an uphill battle, with bias and discrimination often inadequately addressed.

The tension over the need to address bias and discrimination was foreshadowed from the USAISI’s beginning. At a November 2023 public workshop on “Collaboration to Enable Safe and Trustworthy AI,” the first gathering NIST held to kick off the creation of the USAISI and assess areas for research in AI safety, the proposed working groups included one on Society and Technology, focused on standards and on operationalizing NIST’s AI Risk Management Framework. In that meeting, I asked for civil society concerns related to AI safety to be addressed across all the working groups, not just Society and Technology, since issues such as bias and discrimination cut across AI evaluation, red teaming, and synthetic content. That point ultimately was not addressed, and the Society and Technology working group was transformed into the Safety & Security group.

The transformation of the Society and Technology working group into Safety & Security was an early harbinger of the battle between AI safety efforts and the need to address AI issues that impact the public. Over the span of a decade, we have seen AI efforts transform from ethical to trustworthy to responsible to safe to secure, with CAISI now seemingly cleaving off values altogether to double down on the pursuit of innovation, come what may. Years of research aimed at addressing well-documented AI harms are being cast by the wayside as innovation is framed as the only concept that matters.

The government's interest in collaboration to advance AI safety is over — and not just with academia and civil society, but with the world. The establishment of the AISIs occurred at a time when the UK, US, South Korea, and other countries were emphasizing the need for international collaboration to advance the science of AI safety. As noted by the USAISI during its creation, “AISI aims to catalyze a more connected and diverse ecosystem, both domestically and internationally, to align multiple stakeholders and their resources in a shared endeavor.” The AI Safety Summits were gatherings for international partners to align on AI safety, with the November 2024 inaugural convening in San Francisco launching the International Network of AI Safety Institutes, aiming to drive “alignment on and build the scientific basis for safe, secure, and trustworthy AI innovation around the world” and advance that work before the Paris AI Action Summit. The Paris AI Action Summit was not as harmonious an international gathering as initially conceived: Vice President JD Vance made it clear that the Trump administration is interested in American dominance and “AI opportunity,” not “hand-wringing about safety,” and that international cooperation is welcome only so long as it “fosters the creation of AI technology” rather than enforcing “onerous international rules.”

CAISI’s focus now — to “guard against burdensome and unnecessary regulation of American technologies by foreign governments” and “ensure US dominance of international AI standards” — puts the International Network of AI Safety Institutes into question. CAISI does not seem interested in collaboration, but in capture. Countries must do what America says, or else.

It’d be naive to pretend that the previous USAISI was not also meant to be a place for the US to flex its dominance. However, the shift to a pro-innovation, national security lens draws borders between countries that shape not only governance but also research agendas. In early 2025, the UK renamed its AI Safety Institute the UK AI Security Institute, which now focuses on “strengthening protections against the risks AI poses to national security and crime” and specifies that “it will not focus on bias or freedom of speech, but on advancing our understanding of the most serious risks posed by the technology to build up a scientific basis of evidence which will help policymakers to keep the country safe as AI develops.”

Countries' research agendas are now guarded, focused on the harms that most threaten the nation-state: threats at their borders. Within this list of threats, some harms that impact people will be addressed, such as child sexual abuse imagery, but others, like bias and discrimination, are not, in their eyes, threats to the nation. Directing research attention (in a time of reduced government funding for research) away from AI’s most documented harms stymies research and leaves the public — workers, women, children, people of color, the LGBTQ+ community, people of multiple marginalized identities, and so many others — vulnerable.

While the USAISI changed, its US AI Safety Institute Consortium remains focused, for now, on “bring[ing] together more than 280 organizations to develop science-based and empirically backed guidelines and standards for AI measurement and policy, laying the foundation for AI safety across the world.” From its inception, it was meant to be a multi-stakeholder, interdisciplinary consortium, but the chilling effect of industry’s dominance in CAISI and the administration's disregard for academia and civil society make it hard to imagine that any future collaborations will be successful.

The USAISI’s change exemplifies the industrial and national security priorities that will dominate American AI policy for the next four years. It is unfortunate that at a time when there is a breadth of research showing opportunity for multi-disciplinary collaboration and research that could advance our understanding of AI, governments and tech firms are united in a gluttonous pursuit of power. Democratic governance is best when multiple stakeholders have power and are heard — including government, industry, academia, civil society, and the public. Right now, the future of AI science and accountability hangs in the balance while industry takes its seat beside the throne.

Authors

Serena Oduro
As Data & Society’s policy manager, Serena Oduro leads and manages the organization’s state-level policy engagement. Driven by her dedication to realizing an AI ecosystem that truly benefits us all, she is passionate about advancing a sociotechnical and rights-forward approach to AI governance.
