UN Reaches Consensus on AI. Now Comes the Hard Part
Vidisha Mishra, Nicole Manger / Sep 11, 2025

On August 26, the UN General Assembly adopted Resolution A/RES/79/325 (the AI Modalities Resolution) by consensus, without a vote, following months of negotiations. At a time when consensus is elusive and the multilateral system is under strain, this follow-up to the Global Digital Compact (GDC) marks a timely and substantive step towards global AI governance, a challenge that transcends borders and demands collective action.
The resolution creates two new institutional mechanisms that stand out for their ambition and design. The “Independent International Scientific Panel on AI” will comprise forty experts serving in their personal capacity, with appointments balanced by gender and geography. Members will serve three-year terms, disclose conflicts of interest, and elect co-chairs from both developed and developing countries. The Panel is tasked with producing an annual, evidence-based assessment synthesizing existing research, accompanied by thematic briefs as needed. These reports will be presented not only to the General Assembly but also to the Global Dialogue on AI Governance, anchoring political deliberations in independent science.
The “Global Dialogue on AI Governance” is the second mechanism. It is significant as the first truly global and inclusive platform, bringing more than 100 countries – particularly from the Global South and least developed countries (LDCs) – together on an equal footing in shaping AI governance. These countries have largely been excluded from existing processes led by the G7, G20, Council of Europe, EU, AU, and OECD. By design, the Dialogue broadens participation, deepens North–South exchange, and helps close digital divides.
The Dialogue will convene annually for two days, alternating between New York and Geneva, beginning with a High-Level Launch Event on September 25 during this year’s UNGA High-Level Week. Its first full session will be held in 2026, alongside the AI for Good Summit organized by the ITU. The Dialogue will bring together governments, civil society, industry, and academia to exchange best practices, address knowledge and financing gaps, and advance future multistakeholder cooperation on strengthening AI capacity and competency.
With the two mechanisms, the resolution sets a clear agenda built around three priorities: (1) ensuring safe and trustworthy AI through transparency, accountability, and human oversight; (2) advancing equity by building capacity in developing countries and addressing social, ethical, cultural, and linguistic impacts; and (3) promoting openness and interoperability by aligning governance approaches and supporting open-source software, open data, and open AI models.
The resolution’s level of detail stands out against the vague commitments that often define multilateral agreements, which are frequently diluted in the interest of consensus. By creating institutions with explicit mandates, timelines, and outputs, and embedding independence, inclusivity, multi-stakeholder engagement, and transparency, the General Assembly signals it has learned from past governance failures.
Yet the resolution’s impact will depend less on its design than on how governments and stakeholders carry it forward. Translating ambition into action will require resources, political will, and new forms of cooperation.
The way forward lies in addressing these practical challenges head-on.
Secure independence and sustainable resourcing
For the Scientific Panel and Global Dialogue to matter, they must be credible and properly funded. Independence, conflict-of-interest safeguards, and predictable resources are not add-ons – they are preconditions for trust. The latest constraints on the UN's regular budget have led the Secretary-General to launch the UN80 Initiative, which introduces plans to relocate functions to locations more cost-efficient than the current headquarters in New York and Geneva and makes significant cuts to the UN workforce. Even before these cuts, during the Fifth Committee's budgetary negotiations in December 2024 on implementing the Pact for the Future, the Advisory Committee on Administrative and Budgetary Questions (ACABQ) recommended a phased financing approach to building institutional capacity for GDC implementation and called for further clarification of some functions of the new Office for Digital and Emerging Technologies (ODET). With only a handful of posts secured, ODET remains heavily reliant on voluntary funding. Given its central role in facilitating the Global Dialogue and Scientific Panel, vetting criteria for such external funds must be clear, transparent, and consistent.
In the current budgetary climate, the UN is also exploring innovative voluntary financing options for AI capacity-building. A recently published report by the UN Secretary-General, titled Innovative Voluntary Financing Options for AI Capacity Building, stresses the critical need for substantive investment to close digital divides and systematically build AI capacity and literacy. In this context, any new voluntary mechanism – whether a Global Fund on AI, a platform to coordinate existing regional AI funds, or a coordination system for in-kind contributions – must be designed to prevent conflicts of interest. Equally important is ensuring that capital-intensive sponsors, particularly from the technology-focused private sector, cannot convert financial power into political influence or use funding to secure privileged access to emerging technology markets and talent.
Accelerate in AI time
The first Global Dialogue is not scheduled until 2026 – an eternity in AI timelines, where the speed of innovation far exceeds the pace of rule-making. In the meantime, processes such as the World Summit on the Information Society (WSIS), the G7, the G20, and regional dialogues at the European Union and African Union levels will help bridge the gap. Linking these efforts to the UN track is critical to prevent fragmentation and ensure the Dialogue launches from a position of coherence. Streamlining the growing series of global AI summits into this process will be equally important.
Launched by the UK at Bletchley Park in 2023 with an initial focus on AI safety, the AI Action Summit series has since evolved into a global platform. Early summits established AI Safety Institutes in the US and the UK, creating an international network that now spans Japan, France, Singapore, Australia, Canada, Kenya, the EU, and other regions. The agenda has broadened – from safety and security to public-interest applications – with the next summit in the series scheduled for India in 2026, focusing on the societal impacts of AI. This global process provides the very momentum the UN Dialogue can build on – avoiding duplication, integrating knowledge, and anchoring existing alliances in a coherent global framework.
Build bridges across governance tracks
The UN resolution leaves open the question of how global efforts will connect with regional and national frameworks. Binding instruments, such as the Council of Europe Framework Convention on AI, will be critical complements, offering a legal grounding on issues like human rights protection, the rule of law, and democracy that a largely non-binding resolution cannot provide. The Global Dialogue could provide a platform to discuss whether broader legal frameworks – for example, through the UN Human Rights Council – are needed to account for AI’s significant transnational impact across AI supply and value chains.
Similarly, the Scientific Panel should synthesize and consolidate existing research on the latest technological developments, cross-regional governance approaches, and AI readiness to inform future AI capacity-building and governance initiatives. It can draw on long-standing scientific work, for example by the OECD AI Policy Observatory, UNESCO’s Readiness Assessment Methodology (RAM), and the AI Safety Reports. It can also draw on the rich experience of existing scientific bodies within the UN system, such as the Intergovernmental Panel on Climate Change (IPCC), which supports the UNFCCC process, in the areas of climate science and diplomacy.
The UN process should serve as the anchor for interoperability – the place where diverse approaches converge into common principles. While the Global Digital Compact brings breadth and inclusivity, it does not carry enforceable commitments on human rights, environmental protection, or sustainable development. The Global Dialogue could therefore provide a space where binding and non-binding approaches reinforce one another, helping to translate universal principles such as the Universal Declaration of Human Rights into the age of AI.
Empower civil society and enhance multistakeholder engagement
For the Global Dialogue to succeed, it must extend beyond states alone. Governments retain authority, but their legitimacy depends on outcomes shaped in collaboration with civil society, industry, academia, and the technical community. This principle is well established in Internet governance, where the technical community has long been a cornerstone of open, transnational cooperation. At UNGA this year, however, civil society participation was limited – in large part because no side events were organized at UN Headquarters. Initiatives like Digital at UNGA, with affiliate sessions from the Global Solutions Initiative, Project Liberty, and partners, demonstrate the potential of such spaces to model inclusive and resilient digital infrastructure – the foundation of trustworthy AI governance.
Earlier in the GDC process, civil society was engaged through consultations and thematic inputs that helped shape key outputs, such as the Secretary-General’s Policy Brief, which informed the zero draft. Yet stakeholders have consistently voiced concerns about opacity, as many saw little evidence of how their contributions influenced the interstate negotiations that led to the GDC’s adoption at the Summit of the Future in 2024. In the negotiations on the AI Modalities Resolution, too, civil society’s role was limited.
The Global Dialogue cannot afford to repeat this pattern. If it is to be open, transparent, and inclusive, it must meaningfully involve academia, civil society, the private sector, and the technical community from the outset – and clearly communicate how their input shapes negotiations and outcomes. Adequate funding for the relevant UN entities will be essential, but so, too, will building channels that make multi-stakeholder engagement routine rather than exceptional. Done right, the Dialogue can reset the tone: a process where states lead, but legitimacy is secured by the active participation of stakeholders beyond government.
Make open-source work for the public good
The resolution’s nod to open-source AI, data, and models is promising but incomplete. Without safeguards, it risks amplifying vulnerabilities. Done right, open tools can democratize access, support capacity-building, and help close divides. Done poorly, they widen them.
United Nations Member States can build on existing initiatives within the system, such as those led by the United Nations Development Programme (UNDP), the International Telecommunication Union (ITU), and the Office for Digital and Emerging Technologies (ODET), as well as efforts like the Digital Public Infrastructure (DPI) Safeguards Initiative, the Digital Public Goods Alliance, and the Open Source Program Office for Good (OSPO4Good) conference series and Open Source Week co-hosted by ODET and the United Nations Office of Information and Communications Technology (OICT). Projects like the Complex Risk Analytics Fund (CRAF’d) demonstrate how joint investments in open, high-quality data, analytics, and AI can help to better anticipate, prevent, and respond to global crises and save lives. These examples show how, when paired with governance safeguards, open-source approaches can serve the public good rather than undermine it.
The hard part begins now. The AI Modalities Resolution provides scaffolding, but only substantial follow-through and collective action will transform it into an operational foundation with a measurable impact for communities worldwide. The test ahead is whether governments and stakeholders can secure financial and structural independence, move at the speed of technology, bridge various parallel governance tracks at national, regional, and multilateral levels, empower non-state voices across regions and sectors, and channel open-source innovation toward the public good. If they succeed, this process could set a new standard for how the world governs transformative technologies. If they fail, the resolution will stand as another well-intentioned declaration that could not keep pace with reality.
Vidisha Mishra is the Director of Policy and Outreach at the Global Solutions Initiative (GSI), and Nicole Manger is the Global/UN AI Governance Lead at the German Federal Foreign Office and a GSI Fellow. They co-authored this article in their personal capacity.