Closing the Gaps in AI Interoperability
Benjamin Faveri, Craig Shank, Richard Whitt, Philip Dawson / Oct 15, 2025
Hanna Barakat & Archival Images of AI + AIxDESIGN / Data Mining 1 / CC-BY 4.0
AI regulatory and technical interoperability stands at a critical juncture. Significant technical and regulatory efforts have emerged to address ongoing fragmentation, yet the current landscape reveals both remarkable progress and concerning gaps that threaten to undermine AI’s potential. The path forward requires considerable coordination across sectors, with each stakeholder playing a distinct and interconnected role in building coherent frameworks for AI governance and technical integration.
Current technical and regulatory AI interoperability efforts
The landscape of AI interoperability consists of multilateral initiatives and emerging technical standards that show both promise and signs of further fragmentation. On the regulatory front, the G7’s Hiroshima AI Process has established international frameworks with guiding principles and codes of conduct for advanced AI systems, garnering support from 49 governments beyond the original G7 members through the Hiroshima AI Process Friends Group. This expanding consensus represents meaningful progress toward coordinated governance approaches. Similarly, the International Network of AI Safety Institutes brings together technical organizations from ten countries, prioritizing research collaboration on AI risks, common testing practices, shared interpretation frameworks, and capacity building among diverse actors. Meanwhile, the EU’s harmonized standards process under the AI Act follows the “New Approach” to regulation by delegating technical standard development authority to CEN/CENELEC.
Regional efforts complement these global initiatives. The Transatlantic Trade and Technology Council focuses on aligning AI approaches between democratic allies, while APEC economies advance AI standards through collaborative knowledge exchanges. The ASEAN Digital Economy Framework Agreement establishes progressive rules covering digital trade, cybersecurity, cross-border data flows, and emerging technologies, including AI.
Technical interoperability efforts have also gained significant momentum through industry-led standards. Google’s Agent2Agent (A2A) protocol represents the first major industry standard for multi-agent communication, enabling independent AI agents to communicate through standardized message formats with support from over 50 technology partners. Complementing this horizontal integration approach, Anthropic’s Model Context Protocol (MCP) addresses integration challenges by standardizing how AI systems connect with external tools and databases, and it has rapidly gained adoption from major AI providers, including OpenAI and Google DeepMind.
The convergence of these protocols creates possibilities for comprehensive AI ecosystems where multiple specialized agents collaborate using A2A while accessing various systems through MCP. Additional initiatives include the Open Voice Interoperability Initiative, which establishes frameworks for diverse conversational AI agents to interact effectively using standardized natural language interfaces.
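To make the division of labor between the two protocols concrete, the sketch below builds an MCP-style tool invocation. Both A2A and MCP are built on JSON-RPC 2.0 message framing; the method and field names here follow MCP’s published conventions, but the tool name and arguments are hypothetical, and a real client would use an official MCP SDK rather than hand-rolled messages.

```python
import json


def make_mcp_tool_call(request_id: int, tool_name: str, arguments: dict) -> dict:
    """Build an MCP-style JSON-RPC 2.0 request that invokes a named tool.

    Illustrative only: this mirrors the shape of MCP's "tools/call"
    request, in which a client asks a server to run one of the tools
    it has advertised.
    """
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }


# A specialized agent coordinating with peers over A2A might use a
# message like this to reach an external database via an MCP server.
# "query_database" and its arguments are hypothetical examples.
request = make_mcp_tool_call(1, "query_database", {"table": "orders"})
print(json.dumps(request, indent=2))
```

The point of the sketch is the separation of concerns: A2A standardizes how agents talk to each other, while a message like the one above standardizes how any one agent reaches tools and data, which is why the two protocols can compose rather than compete.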
Gaps in current interoperability efforts
Despite these efforts, four gaps remain unaddressed. The first involves geopolitical disruptions and trade tensions that are fragmenting cooperative efforts. The current US administration’s AI policy shifts introduce considerable disruptions to existing cooperation mechanisms, with proposed changes to AI chip export controls potentially accelerating China’s AI self-sufficiency. Similarly, the pivot from “AI Safety” to “AI Security” in US policy discourse risks sidelining fairness and bias concerns, creating friction with international partners who view such issues as important.
Second, regulatory fragmentation and incompatible standards, such as the proliferation of incompatible mandatory risk classification models across jurisdictions, create a patchwork of requirements that organizations must navigate. While the EU AI Act establishes specific categories for high-risk applications, jurisdictions like Colorado have developed distinct classification systems, compounding fragmentation through multiple, potentially incompatible certification frameworks without mutual recognition agreements. The absence of official interoperability tools or regulatory crosswalks makes it difficult to understand compliance requirements across state and international jurisdictions. Varying requirements for data governance, algorithmic transparency, human oversight, and risk assessment further hinder the development of global technical standards.
Third, gaps remain in AI operations and global representation. Current standards lack adequate rules for AI activities, including delegation, consent, and authentication issues. Environmental oversight is also absent, with insufficient policies addressing the energy and water demands of AI infrastructure. In addition, there is limited representation of Global South countries in developing governance frameworks, which risks creating systems that fail to account for diverse developmental needs and cultural contexts.
Fourth, deficits in security, privacy, and trust hinder cross-system integration. Cross-platform AI interoperability involves sharing sensitive data and capabilities across systems with different security models, and traditional security approaches are inadequate for the distributed trust relationships that interoperable AI ecosystems require. The question “where does our data go?” emerges as a significant barrier to AI adoption, driving defensive procurement focused on risk mitigation rather than capability evaluation.
Sectoral roadmaps to bridge interoperability gaps
Addressing these challenges requires coordinated action across four key sectors. The public sector roadmap emphasizes governments’ position as regulators, procurers, and users of AI systems. Governments should focus on foundation building by establishing early definition and measurement standards, incorporating global AI policy frameworks into domestic regulatory development, and creating dedicated interoperability coordination offices. Next, they should develop adaptive governance frameworks that can evolve with AI’s rapid advances, implementing regulatory sandboxes for testing interoperability frameworks and adopting ‘sovereignty-compatible’ approaches that enable regulatory coherence without requiring identical rules across jurisdictions. Building trust infrastructure and verification systems is also critical, through comprehensive audit frameworks and mutual recognition agreements for AI certification. Lastly, governments should focus on cross-border integration and scaling, leveraging economic benefits to form broader coalitions and establishing permanent institutions for sustained coordination.
The private sector faces a critical window to act before regulatory fragmentation creates exponentially higher costs. Organizations should start with strategic positioning, conducting interoperability assessments, forming cross-functional teams, and developing organizational AI principles that prioritize interoperability from the outset. Building interoperable system architectures requires implementing standardized protocols, robust data governance, and technical systems that accommodate varied compliance requirements. Controlled deployment through pilot projects or sandboxes enables validation of technical approaches while building internal expertise. Scaling successful pilots and building external partnerships support organization-wide implementation for cross-enterprise interoperability, while ongoing optimization enables continuous improvement amid evolving technical and regulatory requirements.
Standard-setting bodies and international organizations are uniquely positioned to sustain cooperation during geopolitical disruption and drive progress when regulation lags. Their roadmap should center on building coherence through shared frameworks and global reference models for AI assurance. Accelerating standards development requires prioritizing critical areas while maintaining transparency and credibility. Certification and assurance can serve as the unifying layer through joint certification models and expanded international accreditation networks. Flexible baseline standards can allow for local adaptation while maintaining global alignment. Broader participation should be supported through funding and diversity to ensure standards reflect a wide range of contexts and needs.
NGOs and civil society face the challenge of maintaining relevance amid commercial and governmental dominance. Their path forward includes five critical elements: expanding agenda-setting that moves beyond narrow “guardrails” debates, adapting to increasingly hostile political environments, securing meaningful roles in technical standard-setting processes, developing independent funding sources, and ensuring that policies and standards uphold values like privacy and security.
Choosing between fragmented and interoperable AI futures
Success across all roadmaps depends on recognizing that AI regulatory and technical interoperability is an ongoing negotiation between competing interests rather than a fixed endpoint. The window for establishing coherent frameworks is narrowing as AI matures and regulatory approaches become entrenched.
The current trajectory toward fragmented AI ecosystems risks creating technological lock-ins that mirror geopolitical divisions, potentially undermining the collaboration needed to address global challenges. Achieving success will require sustained coordination across technical and regulatory, governance, and verification domains, with each sector leveraging its capabilities and maintaining long-term commitments to multilateral cooperation. The choice between fragmented and interoperable AI futures will be decided in the next few years, making implementation of sectoral roadmaps not merely a policy preference but an urgent imperative for realizing AI’s potential while managing its risks.