Perspective

Learning from Past Successes and Failures to Guide AI Interoperability

Benjamin Faveri, Craig Shank, Richard Whitt, Philip Dawson / Jul 10, 2025

Alexa Steinbrück / Better Images of AI / Explainable AI / CC-BY 4.0

As artificial intelligence (AI) rapidly evolves and proliferates across sectors and jurisdictions, the challenge of achieving effective regulatory and technical interoperability has become increasingly salient. The current AI governance landscape is fragmented and convoluted, with proliferating standards, principles, and regulations creating compliance burdens and potential lock-in effects as different initiatives compete for market dominance. Rather than adding to this already complex landscape, there is considerable value in examining how other established sectors have navigated similar interoperability challenges, and what lessons can be learned from their efforts.

An examination of regulatory and technical interoperability efforts across four distinct domains – emerging technologies through the NanoDefine project; environmental sustainability via the EU’s INSPIRE Directive; telecommunications from the 19th-century telegraphs to post-Snowden architectures; and internet/web architecture development – reveals patterns that transcend individual technological contexts. These case studies collectively demonstrate that interoperability is not merely a technical endpoint but an ongoing negotiation between competing interests, involving complex dynamics of technological evolution, regulatory frameworks, and institutional coordination.

Unlike established technologies that achieved interoperability after lengthy maturation periods, AI systems are being deployed at scale while fundamental questions about definitions, measurement standards, safety protocols, and governance frameworks remain unresolved. These case studies reveal that once incompatible systems become entrenched, achieving interoperability requires considerably more effort and resources than proactive standardization undertaken during a technology’s nascent stages. This temporal reality creates urgency around establishing common frameworks for AI regulatory and technical interoperability before divergent approaches become too institutionalized to reconcile effectively.

Cautionary tales of interoperability

The EU’s INSPIRE Directive – an effort to make environmental data collection, analysis, and testing methods interoperable across all EU Member States – provides a stark warning about the consequences of rigid regulatory frameworks that fail to evolve alongside technological advancements. Designed in 2007 around static datasets, INSPIRE’s specifications clashed with emerging technologies such as real-time sensor networks and analytics. The Netherlands, for example, had to develop costly hybrid APIs to bridge its air quality monitoring systems with INSPIRE’s requirements, while the directive’s structured Geography Markup Language (GML) schemas proved incompatible with machine learning models requiring unstructured inputs. The European Parliament’s evaluation specifically criticized these “overly prescriptive metadata requirements” for discouraging private-sector participation.
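
To make the mismatch concrete, the sketch below shows the kind of translation work such a hybrid API must perform: flattening a rigid, GML-style record into the flat features a machine learning pipeline expects. It is a minimal illustration only; the element names and values are hypothetical stand-ins, not the actual INSPIRE data specifications.

```python
import xml.etree.ElementTree as ET

# Hypothetical GML-style air quality observation; element names are
# illustrative, not drawn from the real INSPIRE schema.
GML_SAMPLE = """
<Observation xmlns:gml="http://www.opengis.net/gml">
  <gml:pos>52.37 4.89</gml:pos>
  <parameter name="NO2" uom="ug/m3">41.2</parameter>
  <resultTime>2024-05-01T12:00:00Z</resultTime>
</Observation>
"""

def flatten_observation(xml_text: str) -> dict:
    """Bridge a structured, GML-like record into a flat, ML-friendly dict."""
    ns = {"gml": "http://www.opengis.net/gml"}
    root = ET.fromstring(xml_text)
    lat, lon = map(float, root.find("gml:pos", ns).text.split())
    param = root.find("parameter")
    return {
        "lat": lat,
        "lon": lon,
        "pollutant": param.get("name"),
        "value_ug_m3": float(param.text),
        "time": root.find("resultTime").text,
    }

print(flatten_observation(GML_SAMPLE))
# {'lat': 52.37, 'lon': 4.89, 'pollutant': 'NO2', 'value_ug_m3': 41.2, ...}
```

Every such adapter is bespoke engineering effort that the directive’s rigid specifications forced onto implementers.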

Similarly, the history of telecommunications illustrates how interoperability systems designed only for cooperative conditions can become liabilities when trust breaks down. The International Telegraph Union’s framework thrived for decades on mutual trust and shared economic benefits, enabling global connectivity. World War I, however, exposed the fundamental vulnerability of trust-dependent systems when Britain strategically severed Germany’s undersea cables. The 1917 Zimmermann Telegram incident – in which British intelligence intercepted a secret German diplomatic communication routed through British-controlled cables, helping draw the US into the war – demonstrated how shared infrastructure could become a weapon. The crisis immediately undermined existing technical interoperability frameworks, which had operated on the assumption of continued cooperation.

Success stories worth emulating

In the late 2010s, the NanoDefine project sought to develop a common definition of a nanomaterial after definitional inconsistencies in safety assessments, product labeling, and international trade compliance had created safety issues and technical and regulatory barriers to trade. The project succeeded precisely because it built adaptability into its framework from the outset: its NanoDefiner e-tool was designed to be expandable and adaptable, allowing new measurement methods and updated regulatory definitions to be incorporated as they became available.

This adaptive design enabled the framework to successfully accommodate the European Commission’s revised Recommendation for the Definition of Nanomaterial, demonstrating flexibility and sustainability. The project’s emphasis on developing adaptable Standard Operating Procedures (SOPs) for the testing, inspection, and use of nanomaterials helped maintain consistency in critical performance parameters and created a governance model that could evolve with technological progress.
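
As an illustration of this design principle, the sketch below shows one way an expandable decision-support tool might be structured, with measurement methods and regulatory definitions registered as pluggable components rather than hard-coded. The class names, stubbed instrument reading, and the 100 nm threshold are illustrative assumptions, not the actual NanoDefiner implementation.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Definition:
    """A regulatory definition, here reduced to a particle-size cutoff in nm."""
    name: str
    threshold_nm: float

# A measurement method maps a sample ID to a median particle size in nm.
MeasureFn = Callable[[str], float]

class DecisionTool:
    def __init__(self, definition: Definition):
        self.definition = definition
        self.methods: Dict[str, MeasureFn] = {}

    def register_method(self, name: str, fn: MeasureFn) -> None:
        # New measurement techniques plug in after deployment.
        self.methods[name] = fn

    def update_definition(self, definition: Definition) -> None:
        # A revised regulatory recommendation swaps in without code changes.
        self.definition = definition

    def classify(self, sample_id: str, method: str) -> bool:
        # True if the sample falls under the current nanomaterial definition.
        return self.methods[method](sample_id) <= self.definition.threshold_nm

tool = DecisionTool(Definition("2011 EC Recommendation", 100.0))
tool.register_method("DLS", lambda sample_id: 85.0)  # stubbed instrument reading
print(tool.classify("sample-42", "DLS"))  # True under the illustrative cutoff
```

Because both the definition and the method set are data rather than logic, a revised recommendation or a new instrument extends the tool instead of breaking it.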

Early intervention in telecommunications standardization also proved successful. The 1865 International Telegraph Convention was established while telegraph technology was still expanding, allowing delegates from twenty nations to agree on common standards before incompatible systems became entrenched. This proactive approach prevented the kind of systemic fragmentation that later plagued other technologies.

Four critical lessons for AI regulatory and technical interoperability

These case studies reveal four critical lessons that can directly inform AI regulatory and technical interoperability efforts.

First, governance frameworks must be adaptive, able to evolve alongside rapid technological advancement rather than locking in the kind of rigid specifications that hindered the INSPIRE Directive’s long-term effectiveness. Given the pace of AI development, such frameworks need to be designed for revision from the outset.

Second, these cases highlight the importance of establishing definitions and measurement standards early in a technology’s development lifecycle, as demonstrated by both the successful early interventions in telecommunications and the costly retrofitting required in nanomaterials governance. This lesson is particularly relevant to AI’s emerging governance challenges, where limited coordination and harmonization of technical and regulatory requirements across jurisdictions risks creating economic and trade disadvantages.

Third, it is essential to build trust through robust verification mechanisms rather than relying solely on cooperative goodwill. This lesson is underscored by telecommunications’ evolution, following security crises, from trust-based systems to zero-trust architectures – systems that continuously verify identities and assume every network component is potentially compromised. For AI governance, the lesson is particularly urgent given the current trust crisis surrounding the transparency, bias, and safety of AI systems, which creates exactly the kind of uncertainty that undermines regulatory and technical interoperability frameworks.

Fourth, bridging regulatory and technical divides requires structured collaboration that aligns diverse stakeholder interests while maintaining implementable standards. The internet and web architecture case illustrates this principle through the contrasting fates of TCP/IP and the Open Systems Interconnection (OSI) reference model, two competing network models for the transfer and communication of information. While the OSI reference model relied on formalized, top-down standardization processes, TCP/IP succeeded by prioritizing a “rough consensus and running code” philosophy.

The path forward for AI interoperability

Taken together, these lessons suggest that successful AI regulatory and technical interoperability requires moving beyond the current proliferation of disconnected regulatory efforts toward more coordinated approaches that balance necessary standardization with the flexibility required for rapidly evolving technologies. The window for implementing such coordinated frameworks may be closing quickly as AI technologies mature and regulatory approaches become institutionalized, making the insights from these historical cases both timely and essential for avoiding the costly fragmentation that has characterized other technological domains.

For AI governance, this means that interoperability efforts must move beyond aspirational principles toward concrete verification mechanisms that enable stakeholders to independently assess compliance and performance, such as developing and expanding the emerging technical standards for AI explainability, auditing, and risk assessment so that they function as verification tools rather than mere guidelines. Like the post-Snowden zero-trust architectures, AI interoperability frameworks should assume that trust breakdowns are inevitable and be designed to maintain functionality through verification rather than faith.
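
What might such a verification mechanism look like in miniature? The sketch below shows a compliance attestation carrying a cryptographic signature that any stakeholder holding the verification key can check independently, so a tampered claim fails verification instead of being taken on faith. It is a simplified illustration: real deployments would use asymmetric signatures, certificate chains, and proper key management rather than the shared HMAC key assumed here, and the claim fields are hypothetical.

```python
import hashlib
import hmac
import json

def sign_attestation(claims: dict, key: bytes) -> str:
    """Producer side: sign the canonical form of a compliance claim."""
    payload = json.dumps(claims, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_attestation(claims: dict, signature: str, key: bytes) -> bool:
    """Verifier side: recompute and compare, trusting nothing self-declared."""
    return hmac.compare_digest(sign_attestation(claims, key), signature)

# Placeholder key for the sketch; a real system would use asymmetric keys
# managed through a certificate infrastructure.
AUDIT_KEY = b"shared-verification-key"

claims = {"model": "example-model-v1", "bias_audit": "passed", "risk_tier": "limited"}
signature = sign_attestation(claims, AUDIT_KEY)

print(verify_attestation(claims, signature, AUDIT_KEY))   # True: claim checks out
claims["bias_audit"] = "skipped"                          # a tampered claim...
print(verify_attestation(claims, signature, AUDIT_KEY))   # ...is detected: False
```

The design choice mirrors the zero-trust principle: a claim is never accepted on the claimant’s authority, only on a check that any party can rerun for itself.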

The NanoDefine model of collaborative validation and transparent decision support tools offers a particularly relevant template for building trust in AI governance across diverse international stakeholders while anticipating future crises that could otherwise fragment emerging interoperability efforts. By learning from both the successes and failures of past interoperability efforts, we have a unique opportunity to establish AI interoperability frameworks that are both technically sound and adaptable to the inevitable challenges that lie ahead.

Authors

Benjamin Faveri
Benjamin Faveri is a Senior Project Manager at CEIMIA where he is responsible for their Aiming for AI Interoperability project. Before this role, Benjamin was the Consulting Scientist for the IPIE’s Panel on Global Standards for AI Audits; Research Fellow in AI Governance, Law, and Policy at Arizona...
Craig Shank
Craig Shank is a seasoned technology industry leader with over three decades of experience at the intersection of technology, policy, and law. His career spans executive roles at Microsoft and other tech companies, where he developed and implemented innovative practices in AI governance, competition...
Richard Whitt
Richard’s career spans over three decades as a public policy attorney, technology strategist, business advisor, and entrepreneur. He spent over eleven years with Google (2007-18) in its Washington D.C. and Mountain View offices, including four years as Corporate Director for Strategic Initiatives; s...
Philip Dawson
Philip Dawson is an experienced leader and strategist at the intersection of global AI policy and governance. He is Head of AI Policy at Armilla AI, a provider of assessment and insurance solutions for AI and LLM-powered applications, and has been an active member of global AI expert committees with...
