Why Internet Governance Must Inform AI Governance

Konstantinos Komaitis / Nov 1, 2023
Konstantinos Komaitis is an Internet policy expert and author, a non-resident fellow and a senior researcher at the Lisbon Council, and a non-resident fellow in DFRLab at the Atlantic Council.
There is a new front in the geopolitical battle for technological dominance: artificial intelligence (AI). Even though AI tools have been around for years, recently they have become ubiquitous. Conversations are so consumed by AI that the Collins dictionary named it "the word of the year." Economically, AI has reawakened a technology market that, for many years, appeared to lack innovation and creativity.
Indeed, not since the birth of the Internet or perhaps the dawn of the mobile age has technology seemed so likely to usher in rapid change. That is why it is important to look at lessons learned from prior efforts to govern technology.
Zooming In and Zooming Out on AI Governance
At the market level, new products have emerged and are expected to disrupt societies. OpenAI's ChatGPT, for instance, has already disrupted various sectors, including healthcare, finance, and customer service, and has been credited as a powerful tool for automation and cost reduction. At the same time, big technology companies are integrating AI tools across their existing popular products. Microsoft is implementing AI tools in products such as Teams, Word and Excel, while Google is doing the same with Gmail, Docs and Sheets. As with any disruption, this acceleration of innovation by industry has been celebrated, but it has also raised legitimate concerns over ethics, jobs, and security, amongst other issues. The industry has responded through self-regulatory initiatives, but these are nothing more than a patchwork of high-level commitments rather than binding rules. All this creates anxiety amongst nation states.
At the national level, the race is intense and divisive. The world is evidently split into regulatory camps that are uncoordinated and often conflicting. Some countries, like Australia and India, seem to be relying on existing legal frameworks for answers, while others, like the European Union and China, are in the process of introducing AI-focused legislation. Countries like Cuba and Russia have banned certain AI tools, including ChatGPT. In the United States, a polarized Congress is incapable of agreeing on legislation, but the Biden administration has shown initiative with an ambitious executive order, acknowledging the need for some rules and demonstrating its willingness to use its executive power to lay them down. The administration’s order sends a signal to the EU and other allies that at the highest level of the US government there is a political will for collaboration.
At an international level, things are more complex. The United Nations and its agencies are increasingly claiming space through initiatives like the recently announced High-level advisory panel, the International Telecommunication Union's (ITU) annual AI for Good summit, and UNESCO's attempt to address ethical issues in the use of AI. In the meantime, the G7 has come out with the Hiroshima principles, focusing on transparency, accountability, design, safety and sustainability, amongst other things; similarly, the G20 has also supported a set of principles for trustworthy AI.
Moreover, earlier this year, at the BRICS summit in Johannesburg, Chinese President Xi Jinping said that "BRICS countries have agreed to launch the AI Study Group of BRICS Institute of Future Networks at an early date. [There is a] need to enable the Study Group to play its full role, further expand cooperation on AI, and step up information exchange and technological cooperation". Older and newer initiatives are also at play: the values-based principles the Organization for Economic Cooperation and Development (OECD) adopted in 2019, and the Council of Europe's ongoing negotiations for an AI Convention focused on protecting fundamental rights against the harms of artificial intelligence. Finally, there are individual governments, which constantly seek to raise their international profile and are pushing forward with a combination of summits, principles and outcomes. The UK is hosting an AI summit in early November, while China has announced that, as part of its Belt and Road Initiative (BRI) forum, it intends to host a global AI initiative.
What We Learned from Three Decades of Internet Governance
All this creates a complex AI governance map, with so many conflicting and possibly overlapping initiatives that the chances for a breakdown in coordination amongst interested parties are high. Collaboration will be key in moving forward. Given that some of these attempts at governance tend to operate in silos, it becomes somewhat urgent to consider how best to equip these discussions on AI governance. Lessons from Internet governance could help.
Internet governance has taught us three fundamental things about governing technology. The first is the need for inclusion and the creation of processes and fora where knowledge is exchanged and diversity is celebrated. Collaborative participation has been, and continues to be, key for the advancement of the Internet because it takes its complex issues and turns them into viable solutions, at least most of the time. A similar, if not greater, complexity exists in AI. The governance of AI cannot happen through one actor alone, be it states or businesses. The questions are too many and the answers too complicated for any single stakeholder to bear the responsibility of providing them. The value of multistakeholder governance derives from its flexibility and adaptability, requiring all actors with a potential stake in an issue to collaborate. Though imperfect, this system makes room for errors that can be addressed through building blocks of knowledge. As AI continues to advance, this characteristic becomes crucial in ensuring that all possible dimensions are covered.
The second learning is that there is an indisputable need to be more inclusive of the Global South right from the beginning. In the early days of Internet governance, governments realized the need for a global communication governance framework, which was cultivated in the World Summit on the Information Society through its two phases in 2003 and 2005. This decision was right, as international cooperation was key in addressing emerging technologies, especially the Internet and its undeniable impact on societies. The advancement of information communication technology (ICT) was linked to development goals, creating the necessary outline for countries in the Global South to actively participate. Along the way, however, the fast pace of Internet development, combined with the various economic and political challenges countries in the Global South faced, created a digital divide that the Internet community has been working hard to address for years. The same should not happen with AI. At the UN level, the ITU seems to be trying to address this issue through AI for Good, but it cannot do it alone; the rest of the UN system needs to support it. It is imperative that the newly created High-level advisory panel coordinate closely with both the ITU and UNESCO.
Finally, the third thing we learned is that civil society must be given a voice. The hard-won participation of civil society groups, acting as a reminder of the obligation of states to respect human rights and of the need for businesses to consider users' fundamental freedoms, has been indispensable in shaping the discussions around the Internet. Civil society has ensured that humans are placed at the center of Internet governance and has managed to balance the commercial interests of the private sector against the political aspirations of states. For AI governance to have any chance of success, civil society must be invited and have a seat at the AI table. Especially considering how AI can be weaponized, causing a domino effect that can span from warfare to misinformation and fake news, civil society's input is crucial for ensuring that future governance arrangements operate within the appropriate human rights framework.
We have a long way to go in AI governance. In some ways, we are only just beginning. But AI is on the brink of breaking loose, and we cannot let this happen without an appropriate normative framework in place. It is important that we use the time we have to think about how best to draw on the experience of Internet governance as a guiding tool. We must move forward, not backwards.