
Time for An International Treaty on Artificial Intelligence

Merve Hickok, John Shattuck / Mar 14, 2024

The Council of Europe, Strasbourg, France. Shutterstock

This week, the Council of Europe (COE) is expected to wrap up negotiations on the text for the first international treaty on artificial intelligence. The work is important and timely. Many governments are developing AI strategies and policies, but there is still little agreement on the basic norms to control this rapidly emerging technology. And there is growing concern that as AI systems are more widely deployed, the protection of fundamental rights and democratic values could be pushed aside.

The challenges are clearest in the private sector. There is abundant evidence – from healthcare to hiring, credit, and insurance decisions – that unregulated AI systems replicate bias and produce unfair outcomes. AI-powered surveillance systems in the private sector, from workplaces to school classrooms, suppress people’s rights and freedoms and now gather personal data without restraint to train elaborate models. Scholar and former US presidential science advisor Dr. Alondra Nelson has called for an AI Bill of Rights to address such issues.

Anticipating these problems, the Council of Europe began work on an AI treaty several years ago. The goal was a comprehensive and far-reaching agreement among nations on basic rules to govern AI that safeguard human rights, democratic values, and the rule of law. The first round of work produced a recommendation for a legally binding treaty covering activities undertaken by both private and public actors; a committee was then tasked with drafting the treaty.

The Council of Europe has 46 member states, but observer states (such as the United States, Canada, Japan, and Mexico) can also participate in drafting the treaty. The impact of COE treaties extends beyond the countries currently engaged, since the treaties are open for ratification by all countries worldwide.

Through much of the drafting work, hopes remained high that the process would produce a robust framework equal to the task of managing one of the most transformative technologies in history. But difficulties have emerged as negotiations approach the final stages. Much of the private sector agrees with the need for regulation, but countries such as the US are reportedly pushing for a “carve-out” for private-sector AI systems. In addition, security agencies would like to exclude national security AI systems from the scope of the treaty.

AI experts have sounded alarms about these recent developments. British computer scientist Stuart Russell, one of the world’s leading AI experts, told delegates that a treaty failing to cover private-sector AI systems would ignore the greatest risk to public safety today. A national security exclusion, he added, could make nations more vulnerable to foreign adversaries and could be used as an excuse for domestic mass surveillance and the narrowing of rights.

A recent survey of members of the Institute of Electrical and Electronics Engineers (IEEE), a leading association of computer professionals, confirms these concerns. A large majority of US IEEE members said that the current US regulatory approach to managing AI systems is inadequate. About 84 percent support requiring risk assessments for medium- and high-risk AI products, as the recently adopted European AI Act requires, and nearly 68 percent support policies that regulate the use of algorithms in consequential decisions, such as hiring and education. More than 93 percent of respondents support protecting individual data privacy and favor regulation to address AI-generated misinformation.

These concerns are widely shared. More than one hundred civil society organizations in Europe have now urged negotiators at the Council of Europe to remove the blanket exceptions for the tech sector and national security. Similar campaigns have brought together experts and advocates in Canada and the United States.

Polling data shows growing public concern about AI. In the United States, the Pew Research Center found that Americans are far more concerned about AI than they are enthusiastic, a gap that has widened over the last several years.

When the OECD AI Principles, the first governance framework for AI, were developed in 2019, the US was one of the leaders driving the process. The OECD AI Principles drew no distinction between AI systems deployed in public or private sectors. The United Kingdom and Japan, key players in the current treaty conversations, also endorsed the OECD AI Principles.

It is hard to follow the rationale for a private-sector carve-out when US President Joe Biden has repeatedly underlined the need for guardrails on AI systems that can impact our rights and safety. The President has called upon Congress to enact AI regulation, and there is bipartisan agreement on the need for such laws.

We believe there is a solution: a return to first principles. The purpose of a treaty is to bring nations together in support of common commitments. If some non-European nations have difficulty aligning with the common objectives, give them time for implementation and, if absolutely necessary, allow exceptions for specific purposes. The US, for example, could commit to a comprehensive treaty and then enter a derogation during the ratification process. Narrowing the scope of the treaty itself, however, would lower the bar of protection for human rights and democracy for all countries. We should not lose sight of the need for common commitments now among nations ready to move forward. The AI treaty does not prescribe domestic methods of implementation, and countries differ in their legal systems and traditions. Those differences should not prevent us from uniting in the protection of human rights and democracy.

Several years ago, former Massachusetts Governor and presidential candidate Michael Dukakis first called for a global accord on AI, reflecting widespread concern about “what happens to these technologies and whether or not we use them for good reasons and make sure they are internationally controlled.” Many experts warned that AI could hack elections, displace jobs, and replace human decision-making. Concern about unregulated AI is growing. In December, Pope Francis urged nations to work together to adopt a legally binding international treaty regulating AI development and use, saying algorithms must not be allowed to replace human values and warning of a “technological dictatorship” threatening human existence.

The new treaty presents a unique opportunity to address one of the great challenges of our age – ensuring that artificial intelligence benefits humanity. There should be no ambiguity about the obligations of future parties to the treaty.

A shorter version of this article was published on March 12, 2024.

Authors

Merve Hickok
Merve Hickok is the President and Research Director at Center for AI and Digital Policy (CAIDP), engaged in global AI policy and regulatory work, with a particular focus on fundamental rights, democratic values, and social justice. She is an expert advising OECD, UNESCO, United Nations, EU committee...
John Shattuck
John Shattuck is Professor of Practice in Diplomacy at the Tufts University Fletcher School, where he specializes in transatlantic affairs, and former Senior Fellow at the Harvard Kennedy School Carr Center for Human Rights Policy, where he directed the Project on Reimagining Rights and Responsibili...
