UK Versus EU: Who Has A Better Policy Approach To AI?

Noah Greene / Feb 28, 2024

While not flawless, the UK's approach to AI is more appropriate at this stage of the technology's development, says Noah Greene.

Credit: Marcel Grabowski / UK Government

The United Kingdom (UK) and European Union (EU) are taking different approaches to regulating artificial intelligence (AI). In October 2023, British Prime Minister Rishi Sunak shared his government’s views on AI governance. According to him, the UK will take the risks associated with AI seriously but will not “rush to regulate” the technology. In contrast, the EU has taken steps to regulate AI, most notably with the advancement of its AI Act, a comprehensive regulation that covers various use cases.

Whose approach to AI governance is better? Thus far, it has been the United Kingdom’s. The UK’s framework is less rigid, allowing firms to innovate more quickly while giving the government flexibility to respond to societal risks as they arise.

Currently, the British plan is to regulate AI sector by sector, if needed, rather than applying rules to an entire class of technology. In practice, the British government is building its AI vision on a simple concept: “We will not assign rules or risk levels to entire sectors or technologies. Instead, we will regulate based on the outcomes AI is likely to generate in particular applications.” This means enforcing existing laws relevant to AI development and deployment, as needed, in areas like healthcare and law enforcement, and creating new rules where the government deems them necessary.

The EU believes a different, more cautious process is needed and is applying a horizontal, risk-based framework that cuts across sectors. The union’s AI Act, which was finalized earlier this year, does not shy away from restricting how AI may be used by some actors. Certain systems, such as those capable of behavioral manipulation and social scoring, are classified as posing an unacceptable risk and will be banned outright. Systems that pose a high risk of violating the union’s fundamental rights will be monitored throughout their lifecycles. Some analysts have described the EU’s policy as a “command-and-control” regulatory style. Core to the EU’s identity is legal primacy over its member states. As a result, the EU likely feels compelled to regulate AI broadly and comprehensively, in part because it is an institution built on sweeping legal supremacy in the sectors over which it holds authority (known as EU competencies).

No strategy for AI governance is perfect, but for now, the UK’s framework is less likely to unduly harm innovation. With the AI Act, European leaders have created an overly broad regulation that acts as a gut punch to AI software that, in some cases, does not yet exist. More flexibility from the UK’s approach is to be expected: unlike the EU’s policy process, which depends on consensus among member states, the Conservatives can, by and large, chart their own path on emerging technology for as long as they hold power. The UK framework’s emphasis on innovation is also more consistent with that of the world’s leading AI power, the United States. As a result, cross-national collaboration within the US-UK “special relationship” could prove less of a challenge.

The European Commission has noted that “The opacity of many algorithms may create uncertainty and hamper the effective enforcement of the existing legislation on safety and fundamental rights…legislative action was needed to ensure a well-functioning internal market for AI systems where both benefits and risks are adequately addressed.” The problem is that most AI risks have not yet materialized in a way that requires hard, cross-sector, AI-specific regulation, nor is there a clear pathway for how many of the hypothesized worst-case outcomes might actually occur. The EU’s policy in this area is prone to sacrificing innovation for the sake of a hypothetical future.

The roadmap the UK government has laid out for itself allows far more flexibility in addressing these risks, should that become necessary. A sector-specific approach gives it more leverage to adapt its regulatory style as needed, reducing the government’s administrative burden and increasing its responsiveness to societal needs. The challenge of regulating AI effectively cannot be overstated. The fast pace of technological change makes effective regulation a moving target, which may make narrow regulatory action a better tool for addressing core AI-related issues quickly than a broad declaration that emerges from a drawn-out process and requires regular updates.

The UK’s policy vision is not flawless. It is unclear whether Sunak’s government has a firm grasp of where AI may intersect with existing laws, or of the gaps in those laws; this is something the government is seeking to better understand. Last year, the UK created a central AI risk function to “identify, measure and monitor existing and emerging AI risks.” While helpful, it is unclear whether this function can provide enough support to the agencies responsible for regulating the marketplace. In many ways, the success of the UK’s strategy hinges on the central AI risk function’s ability to communicate and coordinate with other agencies. Furthermore, future UK practices rely heavily on agency regulators understanding the impacts of AI in their sectors of expertise well enough to regulate effectively, which may be a tall task for officials with only a superficial understanding of how AI tools are designed and used.

Nations benefit from policies that accelerate the development of genuinely beneficial AI software while also creating incentives that dissuade bad actors from going too far. Governments should be wary of regulating AI without a firm grasp of the second- and third-order effects of their policies, while also ensuring they do not act too slowly to manage significant problems that may arise. For now, the UK government is striking that balance, and other governments should do the same.

Authors

Noah Greene
Noah Greene currently serves as the research assistant for the AI Safety and Stability Project at the Center for a New American Security (CNAS). His research agenda includes investigating AI policy in the U.S., Europe, Russia, and China, along with the role of international institutions in emerging ...
