Perspective

The Need for and Pathways to AI Regulatory and Technical Interoperability

Benjamin Faveri, Craig Shank, Richard Whitt, Philip Dawson / Apr 16, 2025

PARIS - February 11, 2025: France's President Emmanuel Macron (front center) poses for a group picture with world leaders and attendees at the end of the AI Action Summit at the Grand Palais. (Photo by LUDOVIC MARIN/AFP via Getty Images)

As we stand at a pivotal juncture in AI’s development, a critical governance challenge is emerging that could stifle innovation and create global digital divides. The current AI governance landscape resembles a patchwork of fragmented regulations, technical and non-technical standards, and frameworks that make the global deployment of AI systems increasingly difficult and costly. This fragmentation poses several challenges, including conflicting rules and technical specifications, weakened trade capabilities, and increased compliance burdens on organizations, each of which can be partially addressed through a mix of regulatory and technical interoperability efforts.

The fragmented AI governance landscape

Today’s global AI governance environment is characterized by diverging regulatory approaches across major economies. The EU has positioned itself as a first mover with its AI Act, implementing a binding, risk-based classification system that bans certain AI applications outright and imposes stringent obligations on high-risk systems, such as those used in biometric identification and critical infrastructure. The act stands in stark contrast to the UK’s sector-specific approach, which avoids new legislation in favor of empowering existing regulators to apply five cross-cutting principles tailored to industries like healthcare and finance. Meanwhile, the US lacks comprehensive federal AI legislation, resulting in a chaotic mix of state-level laws and non-binding federal guidelines. States like Colorado have enacted laws with "duty of care" standards to prevent algorithmic discrimination, while others have passed various sector-specific regulations.

The recent shift in US federal leadership has further complicated matters, with the Trump administration’s 2025 Executive Order replacing previous guidance and focusing on “sustaining and enhancing US AI dominance.” China takes yet another approach, combining state-driven ethical guidelines with hard laws targeting specific technologies like generative AI. Unlike Western frameworks emphasizing individual rights, China’s regulations focus on aligning AI development with national security and government values.

Aside from these and other hard laws, soft law initiatives add another layer of complexity to the fragmented AI governance landscape. Recent datasets capture more than 600 AI soft law programs and over 1,400 AI-related standards across organizations such as IEEE, ISO, ETSI, and ITU. While some efforts, like ISO/IEC 42001 and the OECD’s AI Principles, have gained considerable traction, the sheer number of competing soft law instruments has created significant compliance burdens for organizations aiming to develop or deploy their AI systems globally and responsibly.

Why AI regulatory and technical interoperability matters

This fragmentation creates serious problems for innovation, safety, and equitable access to AI technologies. When a healthcare algorithm developed in compliance with the EU’s strict data governance rules may nonetheless violate US state laws permitting broader biometric data collection, or face mandatory security reviews before export to China, the global deployment of beneficial AI systems becomes increasingly complicated. The economic costs are substantial: according to APEC’s 2023 findings, interoperable frameworks could boost cross-border AI services by 11-44% annually. Complex and incoherent AI rules disproportionately impact startups and small and medium-sized enterprises that lack the resources to navigate fragmented compliance regimes, essentially handing large enterprises an unfair advantage.

Beyond economics, technical fragmentation perpetuates closed ecosystems. Without standardized interfaces for AI-to-AI communication, most systems remain siloed within corporate boundaries, precluding interoperability between AI agents or between agents and platforms. This lack of interoperability stifles competition, user choice, edge-based innovation, and trust in AI systems. When safety, fairness, and privacy rules vary dramatically between jurisdictions, users cannot confidently rely on AI applications regardless of where they were developed. Establishing shared regulatory and technical principles ensures that users in different markets can trust AI applications across borders.

Pathways to AI interoperability

Fortunately, there are four promising pathways to advance both regulatory and technical interoperability. These pathways do not require completely uniform global regulations; rather, they focus on creating coherence that enables cross-border AI interactions while respecting national priorities. First, governments should incorporate global standards and frameworks into domestic regulations. Rather than developing regulations from scratch, policymakers can reference established international standards like ISO/IEC 42001 in domestic legislation. This incorporation-by-reference approach, familiar from the EU’s harmonized standards system, creates natural alignment in compliance mechanisms while still allowing for national customization.

Second, we need open technical standards for AI-to-AI communication. While corporate APIs might offer short-term solutions, true open standards developed through multistakeholder bodies like IEEE, W3C, or ISO/IEC would create a level playing field. Governments can incentivize adoption through procurement policies or tax benefits, following precedents such as NIST’s Smart Grid Interoperability Roadmap.
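
To make the idea concrete, the sketch below shows what a vendor-neutral message envelope for agent-to-agent communication might look like. It is a minimal illustration in TypeScript; the interface names and fields are assumptions for this example and are not drawn from any published standard.

```typescript
// Hypothetical sketch of a vendor-neutral envelope for AI-to-AI messages.
// All names and fields are illustrative, not taken from any published standard.

interface AgentIdentity {
  id: string;            // globally unique agent identifier, e.g. a URI
  operator: string;      // legal entity accountable for the agent
  jurisdiction: string;  // ISO 3166-1 alpha-2 code of the governing jurisdiction
}

interface AgentMessage {
  schemaVersion: string;   // lets implementations negotiate compatibility
  sender: AgentIdentity;
  recipient: AgentIdentity;
  sentAt: string;          // ISO 8601 timestamp
  intent: "request" | "response" | "notification";
  payload: unknown;        // task-specific content, validated separately
  provenance?: string[];   // optional audit trail of prior handlers
}

// A trivial well-formedness check that any party could run, regardless of vendor.
function isWellFormed(msg: AgentMessage): boolean {
  return msg.schemaVersion.length > 0
    && msg.sender.id !== msg.recipient.id
    && !Number.isNaN(Date.parse(msg.sentAt));
}
```

The particular fields matter less than the principle: a shared, openly governed schema would let any two compliant agents exchange messages without bespoke, bilateral integration work.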

Third, piloting interoperability frameworks in high-impact sectors would validate approaches before broader implementation. Multilateral regulatory sandboxes, like those established between the UK, UAE, and Singapore, provide safe environments to test regulatory and technical interoperability approaches across borders. Developing measurement tools that map relationships between different interoperability frameworks can identify overlaps and gaps, creating crosswalks between major regulatory and technical systems.
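
One way to picture such a crosswalk is as a simple data structure recording how a clause in one framework relates to clauses in another, so that overlaps and gaps can be computed rather than guessed at. The sketch below is illustrative only; the clause IDs, framework labels, and relation types are hypothetical.

```typescript
// Illustrative crosswalk between two governance frameworks, A and B.
// Framework labels, clause IDs, and relation types are hypothetical examples.

type Relation = "equivalent" | "partial-overlap" | "no-counterpart";

interface CrosswalkEntry {
  sourceClause: string;   // clause ID in framework A
  targetClause?: string;  // counterpart in framework B, if any
  relation: Relation;
  note?: string;
}

const crosswalk: CrosswalkEntry[] = [
  { sourceClause: "A-risk-mgmt-1", targetClause: "B-4.2", relation: "equivalent" },
  { sourceClause: "A-transparency-3", targetClause: "B-7.1", relation: "partial-overlap",
    note: "B requires disclosure only for high-risk uses" },
  { sourceClause: "A-incident-report-2", relation: "no-counterpart" },
];

// Gap analysis becomes mechanical: clauses without a counterpart surface immediately.
const gaps = crosswalk.filter(e => e.relation === "no-counterpart");
console.log(`${gaps.length} clause(s) in A lack a counterpart in B`);
```

Even a structure this simple turns comparisons between regimes from impressionistic judgments into queryable records.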

Finally, building stronger economic and trade cases for interoperability will stimulate political will. Integrating AI governance provisions into trade agreements, as seen in the USMCA’s Digital Trade Chapter, creates mechanisms for regulatory coherence while fostering digital trade. Regional frameworks like APEC and ASEAN have recognized this approach, urging economies to pursue regulatory interoperability to prevent market fragmentation.

The path forward

Achieving regulatory and technical interoperability will not happen overnight, nor will it emerge spontaneously from market forces alone. The incumbents’ natural incentive is to protect their AI silos from encroachment. What is needed is a networked, multistakeholder approach that includes governments, industry, civil society, and international organizations working together on specific and achievable goals. International initiatives like the G7 Hiroshima AI Process, the UN’s High-Level Advisory Body on AI, and the International Network of AI Safety Institutes offer promising venues for networked multistakeholder coordination. These efforts must avoid pursuing perfect uniformity and instead focus on creating coherence that enables AI systems and services to function across borders without unnecessary friction. Just as international shipping standards enable global trade despite differences in national road rules, AI interoperability can create a foundation for innovation while respecting legitimate differences in national approaches to governance.

The alternative, a deeply fragmented AI landscape, would slow innovation, entrench the power of dominant players, and deepen digital divides. The time for concerted action on AI interoperability is now, while governance approaches are still evolving. By pursuing regulatory and technical interoperability together, we can chart a path where AI fulfills its promise as a technology that benefits humanity across borders rather than one that widens existing divides.

Authors

Benjamin Faveri
Benjamin Faveri is a Consulting Scientist for the International Panel on the Information Environment’s Scientific Panel on Global Standards for AI Audits and a Research Fellow at Arizona State University where he leads a project on soft law for AI governance. Before this, he worked at the Responsibl...
Craig Shank
Craig Shank is a seasoned technology industry leader with over three decades of experience at the intersection of technology, policy, and law. His career spans executive roles at Microsoft and other tech companies, where he developed and implemented innovative practices in AI governance, competition...
Richard Whitt
Richard Whitt’s career has spanned over three decades as a public policy attorney, technology strategist, business advisor, and entrepreneur. As founder of GLIA Foundation, in August 2024 Richard launched the GliaNet Alliance, a coalition of small for-profit tech companies exploring the market for N...
Philip Dawson
Philip Dawson is an experienced leader and strategist at the intersection of global AI policy and governance. He is Head of AI Policy at Armilla AI, a provider of assessment and insurance solutions for AI and LLM-powered applications, and has been an active member of global AI expert committees with...
