NTIA’s Balanced Approach: Supporting Open Foundation Models While Tackling AI Misuse
Cody Venzke / Sep 18, 2024
Cody Venzke is a Senior Policy Counsel in the ACLU's National Political Advocacy Department.
In May 2023, a coalition of AI executives and researchers penned a short, twenty-two-word statement warning of the “risk of extinction” posed by artificial intelligence to humanity. The statement was covered extensively, likely because it evoked images of the Terminator, Stanley Kubrick’s HAL 9000, and robot dogs with guns.
Despite pushback that the focus on existential risks distracts from more pressing AI harms, President Biden’s Executive Order on AI, issued five months later, tackled something like those doomsday scenarios. It charged the National Telecommunications and Information Administration (NTIA) with reporting on the risk that publicly released “model weights” of AI models known as “foundation” models could be used to create chemical, biological, radiological, and nuclear weapons. Earlier this summer, NTIA released its report, and policymakers would be wise to heed its nuanced, deliberative call for investing in evaluating and monitoring the risks and benefits posed by “open” foundation models, rather than resorting to restrictions on their open release.
Under the Executive Order, NTIA was given a specific charge: to investigate the consequences of publicly releasing the “model weights” of “foundation” models. As defined in the Executive Order, “foundation” models are AI models that are trained on broad data, are applicable across many contexts, and demonstrate high levels of performance; “model weights” are numerical representations of the relationships between inputs and outputs that an AI system learns during its training. Model weights are of interest to policymakers because they allow users to fine-tune a foundation model for particular uses — like creating a healthcare or counseling chatbot — without having to start from scratch.
On one hand, concerns about the risks of widely available model weights are understandable: if a model can be fine-tuned to function better as a healthcare chatbot, it could just as feasibly be fine-tuned to develop code for cybersecurity attacks, to create non-consensual intimate imagery of real people, or to discriminate against minority job seekers. For this reason, some have called for restricting the publication of model weights, rather than addressing those specific harmful uses.
On the other hand, lessons from traditional open-source software demonstrate how valuable openness can be in vetting applications for security vulnerabilities, spurring innovation, and fostering competition. Civil rights advocates have underscored the importance of openness for testing and auditing AI for discriminatory effects when it’s used in sectors like employment, housing, and credit. Likewise, the Federal Trade Commission has recognized that “open” foundation models can benefit competition, portability, privacy, and auditability — while simultaneously warning companies not to abuse temporary or partial openness as a tactic to consolidate power in the long run, or to misrepresent the openness of their AI through “open-washing.”
Moreover, as NTIA recognizes, widely available model weights are only one form of “open” foundation models, and “open” AI is best viewed along a “gradient.” Developers might grant the public access through an application programming interface, disclose some or all of the underlying code, release training data, or provide more information about the model’s architecture, training, and testing. Each of these degrees of openness carries its own benefits, risks, and corresponding policy responses, which are not easy to predict from experience with open-source software. For example, although widely available model weights may enable robust, competitive markets for applications built on foundation models, that competition may be nullified by concentration elsewhere in the development, training, and deployment of AI.
In light of these complex — and competing — values, NTIA recommends not restricting the publication of foundation model weights at this time, but instead investing in infrastructure and personnel to evaluate and monitor the risks their publication poses. In doing so, NTIA avoids painting an apocalyptic scenario worthy of Hollywood and instead takes a more sober approach: it recognizes that “open” foundation models might be used to spur innovation and facilitate transparency and accountability — or be used for a range of harms. It also concludes that we are still in the early days of the purported AI revolution, and that more evidence is needed before we impose flat bans on the public release of “open” foundation models.
NTIA’s approach is more than “wait and see.” It’s an exercise in both realism and humility: we do not yet know what unique harms open foundation models may present or what regulatory responses would be effective in meeting those harms. NTIA consequently makes a crucial if unsexy recommendation: rather than restricting the public release of foundation models now, without evidence, we must proactively invest in the infrastructure and governmental capacity needed to assess, monitor, and respond to the actual harms those models pose. This means gathering evidence on AI’s harms, engaging industry and impacted communities, and investing in a federal workforce to undertake those crucial tasks. Wonky as it is, NTIA’s position is not bureaucratic handwaving, but a clear, concrete call: we need to invest in this crucial infrastructure, and soon.
Recognizing the benefits of openly released foundation models does not mean ignoring harmful uses of AI, and it does not mean a free pass for open foundation models. As the ACLU and other civil rights groups have recognized, AI — including open foundation models — can cause real, cognizable harms that should be addressed. Appropriately scoped regulations for the developers of open foundation models, and for those who deploy them with harmful effects, must be part of the policy conversation. Indeed, the policy toolbox is full of options that target harmful uses of foundation models rather than imposing blanket restrictions on the open publication of model weights: enforcing existing civil rights law, establishing testing and auditing requirements throughout the AI development and deployment lifecycle, and passing meaningful privacy legislation. And when the harms to civil rights and civil liberties from particular uses of foundation models, such as facial recognition or discriminatory hiring technology, cannot be mitigated, those uses should be prohibited.
Many of those policy tools are already available to Congress and the executive agencies. Others will require gathering new evidence. But NTIA’s message is clear: there is a better way to respond to the complex social and technical questions raised by AI than following the lure of a Hollywood script.