The AI Gambit: Will the UK Lead or Follow?

Dharminder Singh Kaleka / Oct 23, 2024


Artificial intelligence (AI) stands at the forefront of the Fourth Industrial Revolution, with its potential to reshape economies, industries, and societies. The United Kingdom, keen on harnessing this power, has committed itself to becoming a global leader in AI through a regulatory approach that aims to foster innovation while ensuring ethical governance. The release of the UK government's AI Regulation White Paper in March 2023 marks a significant step towards creating a “pro-innovation” framework. While this approach offers opportunities, it also exposes the UK to significant domestic and international challenges. Regulatory fragmentation and geopolitical risks threaten to undermine the UK's strategy unless it addresses critical weaknesses.

The Sector-Led Approach: Flexibility at a Cost?

The AI Regulation White Paper proposes a sector-led approach, delegating responsibility for AI regulation to existing sector-specific regulators. In theory, this allows for context-specific governance, with each regulator developing AI policies relevant to its own jurisdiction. For example, the Information Commissioner’s Office (ICO) handles AI’s data privacy aspects, while the Financial Conduct Authority (FCA) is responsible for AI’s use in the financial sector. This decentralized model offers flexibility in how AI is governed, ensuring that regulations are proportionate to the risks in specific sectors, which could encourage innovation.

However, this flexibility comes at a cost. The sector-led approach has fostered “contextually appropriate and novel governance initiatives,” but it risks creating a fragmented regulatory landscape that lacks cohesion. AI, particularly general-purpose systems like large language models (LLMs), operates across multiple sectors. A system designed for language processing may be deployed in healthcare, customer service, or content generation, making it difficult for sectoral regulators to coordinate their oversight. The lack of cross-sector regulatory coherence could produce gaps or overlaps in enforcement, leading to inefficiencies and inconsistencies. For instance, a medical AI application may be regulated under strict guidelines by the Medicines and Healthcare products Regulatory Agency (MHRA), while an identical system in a less regulated field, like marketing, could be subject to far weaker oversight.

To mitigate these risks, the White Paper introduces coordination mechanisms, such as defining core characteristics of AI systems (e.g., autonomy, adaptiveness) and setting out cross-sectoral principles like fairness, accountability, and explainability. However, these principles are non-statutory, so regulators are under no legal obligation to enforce them. The government has suggested that statutory enforcement may come later, but the timeline remains unclear. Until these mechanisms are strengthened, the UK’s sector-led model will struggle to manage the risks posed by complex AI systems effectively.

Deregulation and the Weakening of Oversight

The UK government’s broader deregulatory agenda exacerbates the challenges of AI governance. Post-Brexit, the government has pursued policies aimed at reducing regulatory burdens to stimulate economic growth. This approach is evident in the draft Data Protection and Digital Information Bill, which proposes reshaping the UK’s existing General Data Protection Regulation (GDPR)-based framework into a more lenient model. The bill would erode the independence of the ICO, the UK’s data protection authority, by requiring government approval for ICO codes of practice and introducing a “growth and innovation” duty alongside its existing data protection mandate. These changes threaten to politicize the ICO’s role and undermine its ability to safeguard consumer data.

Furthermore, there are legitimate concerns that the government’s deregulatory push will weaken oversight bodies across the board. The Centre for Data Ethics and Innovation (CDEI), originally conceived as an independent body with statutory powers to advise on data ethics and AI governance, has seen its role shift. The CDEI now focuses on working in partnership with the private sector, reducing its ability to hold the government accountable. Without independent oversight, the UK’s AI governance framework risks regulatory capture, where the influence of industry overrides public interest concerns.

Additionally, the Digital Markets Unit (DMU), established to monitor anti-competitive practices in the digital economy, has yet to receive the statutory powers it needs to do its work effectively. Without those powers, the DMU cannot enforce competition rules against Big Tech companies, many of which are at the forefront of AI development. These setbacks suggest that the UK’s regulatory bodies may lack the authority needed to oversee AI systems whose impacts cut across multiple sectors and industries.

Devolution and the Challenge of Fragmented Governance

AI governance in the UK is further complicated by the devolution of certain powers to Scotland, Wales, and Northern Ireland. While data protection remains a reserved power for Westminster, devolved administrations hold authority over areas like healthcare and education, both of which are critical to AI development. Scotland, for example, published its National AI Strategy in 2021, setting out a vision for AI governance that emphasizes ethical use and fairness. Wales has also advanced its own AI policies, emphasizing the ethical and transparent use of AI. In contrast, the central government in Westminster has prioritized innovation and growth, which could lead to tensions between the devolved nations and the central government.

The risk of fragmentation is real. If Scotland or Wales chooses to pursue a more stringent AI regulatory framework, it could create discrepancies in AI governance across the UK, confusing businesses and developers and complicating compliance efforts. The Internal Market Act 2020 was designed to ensure regulatory consistency across the UK, but it has already exacerbated tensions between Westminster and the devolved administrations. The UK government’s recent veto of Scotland’s Gender Recognition Reform Bill, via a Section 35 order under the Scotland Act 1998, is a stark example of how devolution disputes could spill over into AI governance. Without greater coordination between Westminster and the devolved governments, regulatory fragmentation could undermine the UK’s ability to develop a coherent national AI strategy.

International Constraints: The Looming Shadow of the European Union

Beyond domestic challenges, the UK’s AI governance strategy faces significant international constraints. The European Union, through its AI Act, is set to impose strict regulatory requirements on AI systems, particularly high-risk systems used in critical sectors such as healthcare, education, and law enforcement. While the UK is no longer bound by EU law, it must maintain data adequacy agreements with the EU to ensure the free flow of data between the two jurisdictions. The EU’s data adequacy decision for the UK is subject to a four-year review, and the UK’s proposed data protection reforms could jeopardize its adequacy status.

If the UK fails to maintain data adequacy, UK-based companies would face significant barriers to doing business in the EU. More importantly, the EU’s regulatory framework will likely exert extraterritorial influence through the so-called “Brussels Effect.” Much like the GDPR before it, the AI Act could set global standards that multinational companies must adhere to, effectively forcing the UK to align with EU regulations even as it seeks to diverge. Such alignment is likely, given that large AI companies operating in both the UK and the EU will tend to comply with the more stringent EU standards to avoid regulatory duplication.

Additionally, the EU and the US are deepening their cooperation on AI governance through initiatives like the EU-US Trade and Technology Council. The UK, having exited the EU, is currently excluded from these discussions, limiting its ability to shape global AI standards. If the UK wants to retain influence in shaping AI governance on the international stage, it must urgently seek to reestablish collaborative ties with the EU and other international bodies.

General-Purpose AI: A Regulatory Blindspot?

One of the most pressing challenges for the UK’s AI governance framework is its ability to regulate general-purpose AI systems like large language models (LLMs) and generative AI, such as OpenAI’s GPT-4. These systems, designed for use across multiple sectors, introduce significant regulatory complexities. They have the potential to disrupt industries as diverse as healthcare, law, and education, posing risks such as economic concentration, misinformation, and cybersecurity threats.

The UK’s sector-led approach may struggle to keep up with the rapid proliferation of these technologies. A general-purpose AI system might simultaneously fall under the remit of multiple regulators, leading to confusion over who is responsible for oversight. Additionally, the government’s emphasis on non-regulatory approaches and “light-touch” oversight could fail to address the systemic risks posed by these technologies. Without stronger cross-sector coordination, the UK risks falling behind in managing the ethical and societal impacts of these AI systems.

Conclusion: A Need for Course Correction

The United Kingdom’s AI regulation strategy holds much promise, but significant challenges remain. The sector-led, pro-innovation approach offers flexibility but risks creating regulatory gaps, undermining public trust, and leaving the UK vulnerable to ethical lapses. To safeguard its ambitions, the UK must enact statutory enforcement for its AI principles, bolster regulatory bodies with greater authority and resources, and ensure cross-sectoral coordination.

At the international level, the UK must reconcile its deregulatory ambitions with the realities of global AI governance. Strengthening ties with the EU, participating in global standard-setting, and addressing the risks of general-purpose AI are essential if the UK is to maintain its position as a global leader in AI governance. Failure to act decisively risks turning the UK into a reluctant follower rather than a trailblazer in the fast-evolving landscape of AI governance.

Authors

Dharminder Singh Kaleka
Dharminder Singh Kaleka is a London-based lawyer and policy specialist with expertise in developmental economics and strategic communications. He is the co-founder of MovDek Politico LLP, a political and public affairs strategy firm, and holds a Master of Social Policy and Development from the Londo...
