Renaming the US AI Safety Institute Is About Priorities, Not Semantics
Paulo Carvão, Mizuki Yashiro, Shaurya Jeloka / Jul 3, 2025
US President Donald Trump signs Executive Orders, Monday, February 10, 2025, in the Oval Office with Secretary of Commerce Howard Lutnick. (Official White House photo by Abe McNatt)
United States Commerce Secretary Howard Lutnick’s recent decision to rebrand the US AI Safety Institute (AISI) as the Center for AI Standards and Innovation (CAISI) might appear to be another act of bureaucratic housekeeping. But this one-letter shift is no accident: it marks a deeper change in national priorities for AI development.
When it comes to governing AI, language is never neutral. How we describe institutions reflects how we understand their purpose. And in this case, the renaming of AISI marks a pivot between two competing visions for AI governance: one that emphasizes long-term risk mitigation and public accountability, and another that prioritizes innovation, speed, and global competitiveness above all else.
The original AISI, housed within the National Institute of Standards and Technology (NIST), embodied the first vision. It was founded on two key premises: first, that “beneficial AI depends on AI safety,” and second, that “AI safety depends on science.” At its creation, the AISI outlined its core missions of developing standardized metrics for frontier AI, coordinating with global partners on risk mitigation strategies, and advancing the science of testing and validation for safety.
CAISI’s revised mission reflects a subtle but deliberate shift toward the second vision: accelerationism. As Secretary Lutnick put it:
For far too long, censorship and regulations have been used under the guise of national security. Innovators will no longer be limited by these standards.
Where the AISI reflected the values of safety advocates, the CAISI appears to be aligning itself with actors like OpenAI and Andreessen Horowitz, who see excessive regulation as an existential threat to US competitiveness.
In March, OpenAI submitted a response to the White House’s request for comments on the AI Action Plan. It suggests that the AISI be “reimagin[ed]” as a “single, efficient ‘front door’ to” the government. The idea is to streamline engagement between federal agencies and commercial actors, sparing the latter a patchwork of state laws. In other words, speed over scrutiny. This laissez-faire approach is also evidenced by the proposal for a moratorium on state AI legislation that was just stripped from the budget reconciliation bill before it advanced in the Senate.
This accelerationist vision is gaining traction. But it raises critical questions too: Who defines the “standards” in CAISI? What values shape them? What will happen to the safety protocols that AISI was designed to advance?
From a governance perspective, this shift should concern us. An approach focused on the security and operational aspects of the technology is well documented and measurable, but potentially narrow. One grounded in “safety,” by contrast, implies a broader systemic commitment to minimize harm, account for long-term risks, and ensure that new models won’t lead to catastrophic threats.
What is even more concerning is that this transition also ignores the voices of civil society. We analyzed the 10,068 public comments submitted during the AI Action Plan’s preparation. While 41% of Big Tech submissions supported accelerationism, the public overwhelmingly prioritized fairness, accountability, and safety. Close to 94% of civil society respondents focused on the public interest, responsible AI advocacy, and safety, calling for redress mechanisms and democratic oversight, not just innovation.
If CAISI is to fulfill its mandate of serving this nation, it must look beyond a single viewpoint. It must be a platform for pluralism: a place where national security, public safety, and innovation are co-equal partners in governance. That means prioritizing transparency in how standards are set, preserving long-term safety research, and building mechanisms for meaningful participation from academia and the broader public.
Today’s calls for light regulation mask an agenda of no regulation at all, reframed as a defense against supposedly premature governmental intrusion. The real challenge, however, is not too little or too much regulation. It’s designing adaptive models of oversight. Alternatives such as AI sandboxes, dynamic governance models, and multi-stakeholder regulatory organizations are already on the table. CAISI, if positioned well, could serve as a crucial first node, laying the groundwork for a responsive AI governance framework.
Words matter. So do institutions. CAISI’s rebrand isn’t just about optics: it codifies governing intent. It’s clear what the administration seeks: speed, streamlined approvals, and limited regulatory drag. If we let the pivot from safety to standards go unexamined, we are not just accelerating innovation; we are accelerating past accountability.
It’s up to the rest of us to demand balance.