Perspective

UK Deepens Dependence on US Tech with New OpenAI Partnership

Megan Kirkwood / Jul 24, 2025

Megan Kirkwood is a fellow at Tech Policy Press.

British Prime Minister Keir Starmer delivers remarks during a visit to the Manufacturing Futures Lab at UCL (University College London) on January 13, 2025, in London, England.

The United Kingdom is continuing its push to embed dominant US tech firms into public institutions. On July 21, it announced a non-binding partnership with OpenAI to “expand AI security research collaborations, explore investing in UK AI infrastructure like data centres, and find new ways for taxpayer-funded services like security and education to make best use of the latest tech.”

Earlier in July, the government also partnered with Google Cloud to offer free training for civil servants and roll out its services across public institutions. These deals add to a growing list of agreements with US firms like Microsoft and Anthropic to offer their technology to the UK public sector.

What is in the agreement?

While details of the OpenAI partnership are sparse, the Memorandum of Understanding (MOU), a voluntary agreement, outlines its objectives. First, the MOU highlights research, stating that the partnership will help build “sovereign AI” in the UK, which appears to mean that AI applications will be developed in the UK with the assistance of OpenAI technology. The irony that this “sovereign” capability will be built with US technology, in partnership with a US firm, appears to be lost.

Second, the MOU states that OpenAI intends to help diffuse AI across the public and private sectors. While Anthropic’s partnership specifically led to its models being used in a new government smartphone app, the OpenAI partnership is alarmingly broad. The MOU states the collaboration will span various government departments, “including in areas such as justice, defence and security, and education technology.” Again, the MOU emphasizes sovereignty, stating that the partnership is meant to demonstrate the use of AI in these areas and thereby encourage UK firms. However, with OpenAI guiding the initiative, it is likely to favor startups that use its technology. Most concerning, though, is that the announcement ignores the fact that algorithmic “solutions” in areas like justice have repeatedly proven inaccurate and prone to reproducing harmful bias.

Third, regarding infrastructure, the partnership envisions OpenAI either directly investing in AI Growth Zones, which are essentially data centers, or more generally assisting in the research and development of AI models. Finally, the MOU states that OpenAI may expand its existing partnership with the UK AI Security Institute, previously the AI Safety Institute, “to include the development of a new technical information sharing program.” In addition, OpenAI plans to work with the UK government to supply information about “evolving model capabilities and risks,” ensuring that it continues to direct attention towards bogus debates about existential risk and away from concrete issues like the worrying climate impact of these technologies or the myriad social harms inflicted by automated decision-making.

The agreement between the UK government and OpenAI has attracted wide attention, much of it negative. Robert Booth reports in The Guardian that the agreement has faced criticism from both government ministers — over the lack of specificity — and civil society organizations, who cite a lack of transparency. “This is yet more evidence of this government’s credulous approach to big tech’s increasingly dodgy sales pitch,” said Martha Dark, the executive director of Foxglove.

The voluntary and informal nature of both the Google Cloud and OpenAI agreements is cause for concern, especially given the complete lack of transparency about how they were reached. In another report on the Google Cloud agreement, a government source told The Guardian the “opportunity secured by Google was not put out to public tender, as no money was changing hands.” It is concerning that Technology Secretary Peter Kyle’s close relationship with US tech firms like OpenAI and Google may be giving them the upper hand in obtaining partnership agreements.

Such partnership announcements have prompted various academics and experts to warn of the clear risks of embedding dependence on Big Tech firms, locking their technologies into critical public infrastructure. This not only further entrenches their market power but also risks placing them beyond the reach of regulatory enforcement, an argument I have made previously.

Trading away sovereignty

Despite frequent references to sovereignty in the agreements with OpenAI and Google, the UK is being drawn ever closer to the US, even amid ongoing geopolitical tensions. First, these agreements embed US Big Tech firms into UK public and private infrastructure, whether it is Microsoft providing Microsoft 365 to public organizations, Anthropic models powering government apps, Google Cloud supporting various public services, or OpenAI advising and directing the UK government’s deployment of AI across public institutions and the wider economy.

Second, the security focus aligns the UK with the US. The emphasis on OpenAI’s collaboration with the UK AI Security Institute, which was recently rebranded to focus on security, follows Peter Kyle’s call for the independent Alan Turing Institute to “prioritize defense, national security and sovereign capabilities.” Similar institutional shifts towards an emphasis on security over safety can be observed in the US. In addition, the UK followed the US in refusing to sign an international AI declaration aimed at fostering a global alliance to promote “transparent, safe, secure and trustworthy” AI, citing security concerns.

Finally, the refusal to commit to comprehensive AI regulation aligns the UK with the US. While the UK has tentative plans to introduce AI legislation through its AI Opportunities Action Plan, it intends to let individual sectors regulate the AI deployed within them, rather than introducing overarching rules. Despite public support for AI regulation, legislation is continually kicked down the road. Gina Neff, Professor of Responsible AI at Queen Mary University of London, points out that the UK approach prefers to support industry without acknowledging “the social, cultural and economic transitions that we face,” and calls for the empowerment of “regulators with stronger enforcement tools to right the imbalance of power between British society and the world’s biggest players in this sector.” However, if the UK government allows OpenAI, a famously anti-regulatory company, to direct the UK’s AI strategy, any such regulation is unlikely. Meanwhile, the US has made clear that the federal government will not regulate AI and will try to prevent states from doing so.

As the UK seeks to maintain favor with the Trump administration, future AI legislation could be traded away. The partnership between the UK and OpenAI is concerning, revealing the deepening entrenchment of Silicon Valley ideology and the continued intertwining of US Big Tech firms with UK public officials and institutions. While the UK government celebrates OpenAI’s commitment to opening more UK offices or perhaps funding more data centers, it appears to be ignoring a growing chorus of voices warning against handing more power to tech firms.
