Minding the AI Power Gap: The Urgency of Equality for Global Governance

Patricia Gruver-Barr, Gordon LaForge / Nov 17, 2023

Gordon LaForge is a Senior Policy Analyst with New America's Planetary Politics Initiative and a lecturer at the Thunderbird School of Global Management at Arizona State University. Patricia Gruver-Barr is the co-founder of the Tech Diplomacy Network, a Senior Fellow at New America’s Planetary Politics Initiative, a former Science Attaché for the British and Québec governments, and a tech policy consultant.

World leaders gather at the G7 Hiroshima Summit in May 2023. Source: Wikimedia

Global action to govern AI is picking up speed. In November, the UK government hosted an international AI Safety Summit and issued the Bletchley Declaration on managing the risks of cutting-edge AI systems. At the UN in New York, the Tech Envoy convened a High-Level Advisory Body on AI with 32 experts from government, civil society, and industry to develop governance recommendations. The G7’s Hiroshima AI Process issued a policy framework for AI, and industry- and civil society-led groupings, such as the Global Partnership on AI and the Frontier Model Forum of leading AI companies, have launched efforts of their own.

Although it is early days, the momentum for global AI governance is a refreshing course correction from previous efforts to mitigate the risks and harms of novel digital technologies. The failure to develop regulatory frameworks and shared norms for policy responses to cyberattacks, data protection, and algorithmic decision-making has allowed human rights abuses, threats to democracy, and real-world violence to proliferate. As AI develops at an exponential pace, policymakers and industry leaders seem genuinely keen to get ahead of AI risks before greater societal harm occurs.

What is less encouraging is that so far, the high-level conversations and processes are heavily concentrated in wealthy, like-minded nations and tend to focus on a narrow conception of AI safety with less regard for the broader implications of AI for society. Of particular concern is the propensity for AI to worsen harmful concentrations of power and drive global divides between haves and have-nots to perilous new widths.

The AI governance processes underway are largely exclusive to Western democracies and their allies, a few high-profile academics, and large tech companies. And these conversations have centered on a relatively narrow set of AI risks. The AI Safety Summit, for instance, focused primarily on “misuse risks” (bad actors using AI for biological or cyberattacks) and “loss of control risks” (AI systems breaking free from their creators to pursue their own objectives).

This comes as little surprise given that, with some exceptions, the most prominent voices are the so-called Doomsayers, concerned with the speculative, existential threat of a rogue AI agent, and the Warriors, who argue that we need to contain the proliferation of AI and govern it through the lens of national security and geopolitical competition. It is no wonder that the International Atomic Energy Agency is the most frequently invoked template for an international AI agency.

Industry leaders and powerful nation-states obviously should and will play a leading role in global AI governance. And there are real concerns about national security, the dual-use potential of AI systems, and autonomous weapons that governments should address through international treaties and in fora such as the Dutch Government’s Summit on Responsible Artificial Intelligence in the Military Domain.

But the most urgent AI risks receive far too little attention in the global conversation. These relate to how AI will perpetuate and worsen global inequalities and systemic injustices. Already, biased facial recognition and scoring systems deployed in countries from Brazil to the United States to China harm minority and historically marginalized populations.

The rise of menial AI-enabling work in content moderation and data annotation adds to an underclass employed in precarious, informal jobs. Low-wage and low-skill workers already suffering from poor pay and lax labor protections are particularly exposed to dispossession and displacement by AI applications. And rich countries are poised to accrue the greatest benefits from AI, while poorer countries will fall farther behind. A paper from the McKinsey Global Institute projects that rich nations could see up to five times as much net economic benefit from AI as developing nations by 2030.

Crises do not stop at national borders. Look at the COVID-19 pandemic or the migrants fleeing war zones, poverty, and climate change to Europe and the United States, where extremist politics are threatening democracy. The widening rifts between AI “haves” and “have-nots” will almost certainly fuel more populism, migration, and conflict.

As a result, global AI governance must prioritize addressing asymmetries of AI power, access, and development. At a minimum, high-level governance processes should include a wide array of stakeholders. As Alondra Nelson and Seth Lazar argue, AI safety is not a narrow technical problem but a whole-of-society problem that requires ethicists, social scientists, lawyers, and others.

Experts from developing countries and underrepresented groups need an equal seat at the table. Right now, most of the discussion about the impacts and regulation of AI takes place in nations that comprise 1.3 billion people. The 6.7 billion living in low- and middle-income countries receive little attention, and yet some of the darkest consequences of poorly regulated AI will fall on them.

Global institutions have a poor track record when it comes to financial redress. For example, rich countries have fallen billions of dollars short on pledges made to developing countries to finance the cost of climate change mitigation and adaptation. But if we accept that AI benefits and public goods will disproportionately accrue to wealthy nations and that entrenching global inequality any further is problematic, then we have to develop mechanisms to ensure the economic and other benefits generated by AI are more evenly distributed across the world.

The global political dynamics that confound climate change mitigation efforts make direct remuneration from wealthy countries to poor ones a non-starter. However, there is much that multilateral bodies and international organizations can do to make AI technology more accessible and to create opportunities for poorer countries to leverage AI for their own economic benefit. Other scientific fields offer institutional models that could help accelerate the development of AI ecosystems. The Belmont Forum, a collaboration of funding organizations, science councils, and regional associations, facilitates global, transdisciplinary research to understand and mitigate climate change and to support adaptation to it. Since 2009, it has disbursed hundreds of millions of euros to over 130 projects undertaken by more than 1,000 scientists hailing from 90 countries.

Gavi, the Vaccine Alliance, a partnership of foundations, governments, private companies, and nonprofits, has immunized nearly 900 million children in developing countries. In building the networks and infrastructure to purchase and deliver vaccines, Gavi has strengthened public health systems across the developing world. An AI-oriented Gavi could assemble a similar coalition focused on building the data, infrastructure, and financing ecosystems that poorer countries need to compete safely in AI.

Finally, as a general principle, AI governance frameworks should promote openness. AI is not a weapon of mass destruction; it is software. Attempts to manage it by containing the proliferation of models or by pausing development are quixotic and could worsen power disparities in the development of AI ecosystems. Licensing requirements for AI models, a proposal gaining traction in the US and elsewhere, would privilege large AI incumbents and raise the barrier to entry for startups. Moreover, opacity is often more dangerous than transparency: open data and open-source code can be scrutinized by researchers, external auditors, and democratic institutions for risks, biases, and vulnerabilities.

All of this is a tall order, given the power concentrations and escalating geopolitical rivalries that dominate the AI landscape. But recognition of the power concentration problem is growing, and calls for more inclusive AI governance that addresses inequality and injustice are mounting. In forming its High-Level Advisory Body on AI, the UN has stated its commitment to an open, inclusive, multi-stakeholder process. Given its global positioning, the UN has an opportunity to make bridging the power gap a primary focus.

The short history of digital technology has revealed an important, if reductive, axiom: technology that centralizes power and dehumanizes is dangerous, whereas technology that distributes power and centers humans is good. We should do all we can to ensure AI governance steers us in the right direction.
