The AI Red Line Challenge
Christabel Randolph, Marc Rotenberg / Sep 3, 2024

Christabel Randolph is Associate Director of the Center for AI and Digital Policy, a global network of AI policy experts and human rights advocates. Marc Rotenberg is the Founder of the Center for AI and Digital Policy.
Around the world, governments are hurrying to adopt governance frameworks for artificial intelligence. National AI strategies seek to promote innovation and prosperity while ensuring the protection of fundamental rights and the rule of law. Among the greatest challenges for policymakers is drawing red lines: determining which AI systems should not be developed or deployed at all. Yet consensus is emerging that such prohibitions are essential for safe, secure, and trustworthy AI.
The protection of fundamental rights is a good starting point for AI prohibitions. In 2019, former UN Human Rights chief Michelle Bachelet urged the prohibition of AI systems that fail to comply with international human rights norms. Her call influenced the UNESCO Recommendation on AI Ethics, which explicitly prohibited the use of AI for mass surveillance and for social scoring, a practice in which some governments assign a secret score to their citizens based on their compliance with government-established norms.
The European Union’s Artificial Intelligence Act establishes several red lines of its own, including prohibitions on the use of AI for biometric categorization based on sensitive characteristics and on the untargeted scraping of facial images from the internet to create facial recognition databases. It also forbids emotion recognition in the workplace and in schools, social scoring, predictive policing (when based solely on profiling a person or assessing their characteristics), and AI that manipulates human behavior or exploits people’s vulnerabilities. These prohibited practices occupy the “unacceptable risk” category, the top tier of the AI Act’s risk-based framework.
The United States plays an increasingly crucial role in global efforts to ban dangerous AI systems. With the release of the Executive Order on AI, President Biden said, “My Administration cannot—and will not—tolerate the use of AI to disadvantage those who are already too often denied equal opportunity and justice. From hiring to housing to healthcare, we have seen what happens when AI use deepens discrimination and bias, rather than improving quality of life.”
President Biden also repeatedly said companies should not release AI products that are not safe. In a similar vein, Vice President Harris called attention to the full spectrum of AI risk, from existential threats to the harms AI systems cause people in employment, housing, credit, and education, stating, “we must consider and address the full spectrum of AI risk — threats to humanity as a whole, as well as threats to individuals, communities, to our institutions, and to our most vulnerable populations. We must manage all these dangers to make sure that AI is truly safe.” Finally, the OMB guidance that implements the Executive Order directs federal agencies to decommission “safety-impacting” and “rights-impacting” AI systems that fail to comply with minimum practices.
Of course, red lines are not new to US policymakers. Federal agencies have routinely banned unsafe products to protect the public. In 1988, the Consumer Product Safety Commission banned lawn darts after the game caused just a few deaths; waiting for the bodies to pile up before acting would have been a dereliction of duty for an agency charged with public safety. The U.S. Food and Drug Administration (FDA) routinely bans products, devices, and ingredients. In 2016, for example, the FDA banned powdered gloves “based on the unreasonable and substantial risk of illness or injury to individuals exposed to the powdered gloves.”
No motor vehicle is sold in the United States without undergoing extensive safety testing, and even after cars are sold, many states require periodic safety inspections. It is hardly surprising, then, that AI governance frameworks establish obligations for those who develop and deploy AI systems throughout the lifecycle and, in some instances, prohibit certain AI systems altogether.
In the global policy realm, the United States backed the Council of Europe AI Treaty, which urges countries to consider the need for moratoriums or bans on AI systems incompatible with human rights, democracy, or the rule of law. The United States also supported the outcome of the G7 Ministerial Meeting in Japan, the Hiroshima AI Process, which states, “Organizations should not develop or deploy advanced AI systems in a way that undermines democratic values, are particularly harmful to individuals or communities, facilitate terrorism, enable criminal misuse, or pose substantial risks to safety, security, and human rights, and are thus not acceptable.”
And a recently adopted US-backed UN resolution calls upon all Member States to “cease the use of artificial intelligence systems that are impossible to operate in compliance with international human rights law or that pose undue risks to the enjoyment of human rights.” US Ambassador to the United Nations Linda Thomas-Greenfield rallied members around the passage of the resolution, calling upon the global community to “govern this technology rather than have it govern us.”
Prohibitions on AI systems can be traced back to the earliest AI governance frameworks. One of our earlier governance initiatives, the Universal Guidelines, set out prohibitions on secret profiling and social scoring and urged a ‘Termination Obligation’ for systems that are no longer under human control. An AI impact assessment, a foundational requirement for the governance of AI systems, must contemplate a “go/no-go” outcome if it is to have any real consequence. That is routine for environmental impact assessments.
The United States and several other countries have established AI safety institutes to monitor the development of advanced AI systems. Like the Consumer Product Safety Commission of an earlier era, these institutes must be prepared to prohibit the deployment of AI systems that are unsafe. When lawmakers return to Washington in September, they should look more closely at legislative proposals to prohibit unsafe AI systems. With growing public concern about the deployment of AI, it is up to lawmakers and federal agencies to draw the red lines for safe, secure, and trustworthy AI.