Perspective

Breaking Down AI Safety: Why Focused Dialogue Matters

Ho Ting (Bosco) Hung / Apr 29, 2025

PARIS - February 11, 2025: France's President Emmanuel Macron (front center) poses for a group picture with world leaders and attendees (including Dr. Alondra Nelson) at the end of the plenary session of the AI Action Summit at the Grand Palais. (Photo by LUDOVIC MARIN/AFP via Getty Images)

Despite the proliferation of AI governance events and consultation initiatives such as the 2024 AI Seoul Summit, the 2025 France AI Action Summit, and the OECD’s public consultation on AI risk thresholds, a comprehensive framework has yet to be established. Meanwhile, no public plans have been announced for continuing the series of high-level international AI governance summits after the 2025 Summit.

With discussions on AI safety currently stalled, it is crucial to reflect on how to move the conversation forward. A key obstacle to realizing the responsible use of AI lies in the persistence of a fragmented landscape, in which different stakeholders advance their own approaches and discuss AI safety in fundamentally different ways. Because AI safety is a broad concept that permeates many aspects of society, this divide disincentivizes cooperation and hinders progress toward an AI governance framework.

Specialized dialogue groups with a sharper thematic focus, complemented by standalone conferences on narrower themes, should be adopted more widely. A dual-track approach featuring both broad discussions and targeted dialogues would foster consensus-building and ensure that governance efforts address context-specific risks, without hindering the pursuit of holistic and diverse insights into AI safety.

A diverse range of risks and fragmented dialogues

Unlike other technologies, such as aviation or nuclear power, AI systems are general-purpose and applied across a wide range of sectors. AI safety is therefore a broad concept covering the safe, responsible, and reliable design, operation, deployment, and integration of AI systems. It encompasses diverse concerns, ranging from catastrophic and existential risks to bias, inequality, privacy, model transparency, and accountability. These risks also vary by system capabilities, compute requirements, design, and deployment context.

Because AI applications span a wide range of domains involving multiple stakeholders with diverse interests and varying exposure to AI, the themes of debate are equally broad. Indeed, as humanity’s understanding of AI has deepened and efforts to address a wider range of issues have grown, the scope of content at the international summit series has expanded significantly. Without a clear agenda to guide these discussions, however, stakeholders risk conflating or misinterpreting distinct categories of risks. Although their contributions raise important and valid concerns that broaden the debate, these perspectives often fail to intersect meaningfully, limiting their ability to drive consensus or inform targeted policy development. Combined with the widening scope at previous summits, such as the AI Seoul Summit, the lack of a focused agenda risks diluting participants’ attention and producing shallow engagement.

Moreover, these broad conversations are often diverted by the popular framing of AI as an inherent threat. Some, like Warren Buffett and Yuval Noah Harari, have compared AI to other dangerous technologies like nuclear weapons, but this comparison risks oversimplifying the full picture. AI, in essence, is a tool—a framework, model, or facilitator—that enables the creation of various outputs. It is not AI alone that poses a threat, but rather its applications and the intentions of those who wield it. Just as the physical mechanism of nuclear fission becomes dangerous when weaponized to produce atomic bombs and deployed in wars, AI poses risks through misuse or irresponsible deployment. Framing AI as a threat in itself risks sidelining other safety challenges and deepening adversarial divides between safety advocates and industry actors, who remain an indispensable stakeholder in its governance. Consequently, we could misunderstand or fail to reach a consensus on when and how AI should be governed.

Why do we need both broad and targeted dialogues?

To be clear, even if AI is not an inherent threat, its safety still demands serious attention, given the potential for misuse and loss of control. Careful evaluation of risks and rigorous model testing, including assessment of the potential for model theft and misuse, are necessary to ensure its responsible use. Given the rapid evolution of AI and the still-unsettled scope of its safety challenges, initiatives like the AI Action Summit remain necessary to establish a shared vision and evaluate priorities.

These broad dialogues also facilitate the inclusion of diverse perspectives from stakeholders, including governments, corporations, civil society, the technical community, and academia. Each brings unique and important perspectives to bear on these challenges, whether technical or process-based. While the public raises concerns about the societal challenges AI poses to daily life and the risks of marginalization, the technical community can offer valuable insights and assess the feasibility of proposed solutions. Academia can provide the analytical rigor and long-term perspectives necessary to ensure the robustness and ethical soundness of a proposed governance framework. Meanwhile, governments can provide regulatory insights and set policy directions while inviting the private sector’s input, thereby securing the latter’s trust and incorporating technical expertise into an evidence-based regulatory model.

Although principle-based conversations are foundational in mapping the whole landscape of AI safety, they are not sufficient on their own to address AI risks. Because AI is a technology with wide-ranging capabilities serving various purposes across multiple domains and contexts, AI safety presents a collection of distinct challenges that require tailored approaches, as reviewed in documents and reports from the AI Safety Summit and AI Seoul Summit. Without more specialized discussions of distinct risks alongside these broader conversations, AI safety dialogues risk becoming incoherent or overly generalized. This also exacerbates the risk that stakeholders with divergent preferences misunderstand each other’s concerns, become unwilling to contribute to discussions, and stall progress toward risk governance.

How can we address dialogue fragmentation?

As stakeholders worldwide work to reset conversations about AI governance, future conference conveners must prioritize convergence on shared priorities and coherence in discussions, both of which are crucial for fostering consensus and creating meaningful dialogue about a safe future for humanity. They must step in and break the debates down into distinct categories, covering the various stages of AI development and deployment as well as the implementation and risk assessment of AI in different domains. This can be achieved by hosting parallel panels or working groups that address these topics. As with the extensive panel discussions at previous conferences, such as the AI Action Summit, each specialized track should continue to feature a balanced mix of stakeholders to encourage holistic input. Carefully incorporating specialized dialogue groups into large, multistakeholder conferences will allow discussants to identify the focus of each other’s discourse and build shared understanding more effectively, while ensuring that debates target the threats of AI arising from its specific applications and users.

Admittedly, coordinating a large number of specialized panels within a single conference can be logistically and technically demanding, and risks overstretching attention, participation, and resources. Thus, safety advocates could also organize standalone conferences and events focused on narrower themes, such as a conference on the fairness of AI algorithms or a convening on privacy regulations for AI training data, to complement larger, all-encompassing gatherings. A niche focus would help these initiatives establish a distinct identity and promote engagement, avoiding duplication of effort and encouraging meaningful conversations. Their insights can then be channeled back into broader governance discussions, enriching the larger forums with more nuanced and context-specific perspectives.

By starting with smaller, more fine-grained problems in targeted dialogues where consensus is easier to reach, we can begin to assemble the building blocks of a comprehensive AI governance framework with targeted solutions. This way, we can navigate the complexity of AI safety and work towards a future where the benefits of AI are realized safely and responsibly.

Authors

Ho Ting (Bosco) Hung
Ho Ting (Bosco) Hung is a Fellow at the Oxford China Policy Lab, a Research Associate at the Oxford Group on AI Policy (Future Impact Group), and a Researcher at the International Team for the Study of Security Verona, at which he delivered research to members of the US Department of State, FBI, and...
