Despite Risks, the UK’s Justice System Will Be Powered by ChatGPT
Megan Kirkwood / Oct 29, 2025
Megan Kirkwood is a fellow at Tech Policy Press.

The UK Ministry of Justice & Crown Prosecution Service government office building in Westminster. Shutterstock
Last week, the UK’s Ministry of Justice announced that, following its successful pilot of OpenAI’s ChatGPT Enterprise, it has secured an agreement to adopt the company’s technology, alongside a data residency agreement. According to the government’s press release:
The landmark plan, secured through OpenAI’s ongoing partnership with the Ministry of Justice, will see the company enable its business customers to store their data on British soil for the first time. Enabling British businesses to host data on secure, sovereign servers not only enhances privacy and accountability but reinforces national resilience in the face of growing global cyber threats.
The agreement follows the Memorandum of Understanding (MOU) signed between the UK government and OpenAI earlier this year, which gives the company broad influence over UK policy, including AI infrastructure buildout and AI adoption. However, because that deal is neither binding nor a paid consultancy, there are no obligations to publish further details beyond the public notice.
I previously wrote that while the MOU “emphasizes sovereignty, stating that the partnership is to illustrate the use of AI in these areas to encourage UK firms [...], with OpenAI guiding the initiative, it is likely to favor startups that use their technology.” What this latest announcement shows is that, instead, OpenAI has advocated for the use of generative AI in public services and then effectively put itself at the top of the procurement queue. The deal comes as OpenAI seeks more funding and investment for its aggressive expansion, bolstered by its restructuring into a for-profit company.
The Register reported that while “[i]t doesn't appear that [ChatGPT] will feature in lawmaking,” neither OpenAI nor the Ministry of Justice gave the publication any specifics “beyond what has already been made public,” such as timelines or the size of the deal. But CityAM reported that “public procurement records list the total contract award at £6.75m, covering two years from October 2025,” though that report acknowledges that full details are yet to be released.
OpenAI’s press release states that its technology will continue to aid “routine tasks including writing support, compliance and legal work, data and research processes, and document analysis.” The Independent quoted the Deputy Prime Minister, David Lammy, who said the OpenAI agreement is “enabling us to be more human not less. By adopting AI, we’re cutting the burdensome admin and ensuring frontline staff can spend more of their time doing the things only humans can do – monitoring offenders and protecting the British public.”
What the data residency agreement gets wrong about sovereignty
The announcement of the procurement deal also introduces expanded data residency for OpenAI customers to “store their data on British soil for the first time,” which the company says will allow British businesses “to host data on secure, sovereign servers.” This is intended to encourage adoption of OpenAI technology by “both Government and companies” while adhering to data protection rules. However, merely allowing US companies to maintain UK servers does not necessarily equate to the UK having sovereign control over the data and services hosted on those servers.
First, the US CLOUD Act (Clarifying Lawful Overseas Use of Data Act) mandates that US law enforcement can “compel American companies to provide access to data stored abroad, even if that data belongs to non-US persons and resides in data centers located in the European Union.” Thus, all data hosted on those UK OpenAI servers could be seized. Second, the vast majority of the value of the deal will accrue back to OpenAI and is unlikely to disperse throughout the UK economy. It is widely understood that data centers create few permanent jobs, and the announcement is clear in its push to get public and private institutions to adopt OpenAI’s technology, which will see the company pocket revenue from ChatGPT Enterprise at a time when the value of generative AI in the workplace is being questioned.
Meanwhile, UK-based companies are being crowded out in favor of OpenAI. UKAI, a trade association, has bemoaned the UK government’s focus on US Big Tech. Tim Flagg, UKAI’s chief executive, told The Guardian that “there is a huge imbalance between a handful of global players who are able to influence directly what No. 10 is thinking about on policy, and the thousands of other businesses that make up the AI industry across the UK.”
The AI push in public services
The UK’s central government insists that all public institutions should adopt AI. Though it usually means generative AI applications such as large language model chatbots, the government tends to be imprecise about which applications it is referring to in its communications. The Ministry of Justice and the Department of Health and Social Care are two prominent examples of AI implementation in public services.
For instance, the UK government has pushed note-taking and productivity software onto healthcare workers, such as AI transcription services during patient appointments, to “create structured medical notes and even draft patient letters.” The hope is that AI applications will boost productivity by speeding up work without additional hiring or other costs, though the trials evaluating such tools rarely, if ever, mention the harms that flow from the incorrect outputs of large language models, a failure mode inherent to how they function. In healthcare settings, errors could have life-threatening consequences, and liability, according to the National Health Service (NHS) guidance on AI transcription tools, “remains complex and largely uncharted” and will likely fall to the NHS Trust in cases of medical negligence.
The Ministry of Justice is not only the first department to pilot ChatGPT Enterprise; it has also launched an entire digital overhaul, including a “Justice AI Unit” described as “an interdisciplinary team of AI specialists, designers, technologists, and operational experts working to embed responsible AI across the justice system.” The website for the project looks and reads like a tech product; even its design is similar to OpenAI’s. The services on the Justice AI Unit page promise widespread AI implementation, including for administrative tasks and managing criminal records through a single digital identity for criminals.
The Ministry of Justice has also set its sights on implementing tools that “prevent” violent outbreaks before they happen by assessing “factors such as a prisoner’s age and previous involvement in violent incidents while in custody.” Such plans appear to ignore the documented harms of predictive crime technology, which reproduces historical biases and is largely inaccurate. In their book AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference, computer scientists Arvind Narayanan and Sayash Kapoor document the multitude of problems inherent in “predictive” AI. Applied in criminal justice, AI can cause harm by relying on inaccurate metrics. Narayanan and Kapoor discuss a model that used arrest data rather than crime data, an approach that assumes every arrest corresponds to a crime, when of course innocent people are arrested or later acquitted in court. They also point out massive racial disparities in policing, which further skew the bias and accuracy of such models’ outputs.
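To make the proxy problem concrete, here is a minimal, purely illustrative Python sketch (not drawn from the Ministry of Justice’s plans or from Narayanan and Kapoor’s book): two groups offend at the same underlying rate, but one is policed more heavily, so a model that learns “risk” from arrest records assigns that group roughly double the risk score. All rates below are invented for illustration.

```python
import random

random.seed(0)

# Purely illustrative numbers, not from any real dataset:
# both groups offend at the SAME underlying rate, but group B is
# policed more heavily, so its offenses are twice as likely to
# end in an arrest record.
OFFENSE_RATE = 0.10                          # identical for both groups
ARREST_GIVEN_OFFENSE = {"A": 0.3, "B": 0.6}  # assumed over-policing of B
N = 100_000                                  # people simulated per group

def arrest_record_rate(group: str) -> float:
    """Fraction of people in `group` who end up with an arrest record."""
    arrests = 0
    for _ in range(N):
        offended = random.random() < OFFENSE_RATE
        if offended and random.random() < ARREST_GIVEN_OFFENSE[group]:
            arrests += 1
    return arrests / N

# A model trained on arrest records as a proxy for crime learns these
# rates as its "risk" scores, despite identical true offense rates.
for group in ("A", "B"):
    print(f"group {group}: learned risk ~ {arrest_record_rate(group):.3f} "
          f"(true offense rate = {OFFENSE_RATE})")
```

Running this prints a learned “risk” of roughly 0.03 for group A and 0.06 for group B, even though the simulated offense rate is identical: the model has learned policing intensity, not criminality.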
Narayanan and Kapoor emphasize that even with huge datasets, predictive tools remain inaccurate. Though the Ministry of Justice has yet to release more details on the full range of factors that its model will consider, there is substantial evidence that such predictive tools only serve to surveil and discriminate against those overrepresented in arrest and crime data. While the Ministry of Justice maintains that “AI should support, not substitute, human judgment,” the drive to increase efficiency through automation risks using “convenience and efficiency” narratives “to bolster claims to expand digital surveillance and strip away democratic processes while diminishing accountability.”
Broadly implementing AI tools is part of the Ministry of Justice’s AI Action Plan, with a key goal to enable “economic growth by supporting the UK’s world-leading legal and LawTech sectors” and “strengthen our partnerships to support AI-driven legal innovation.” Injecting AI into the functioning of the Ministry of Justice is, beyond a drive for “efficiency,” clearly a response to the UK government’s demand that government departments push economic growth as a primary goal, with particular emphasis on investment to drive that growth. Indeed, the Ministry of Justice points to companies like Google DeepMind, OpenAI, Anthropic, Microsoft, Scale AI, and Meta AI as potential collaborators “to drive forward sector growth.”
While many may find it troubling that a department tasked with overseeing the justice system is concerning itself with boosting the AI industry, the deal also points to a continued trend of the UK government willingly outsourcing important public infrastructure to US Big Tech, and in particular to OpenAI, a firm that shows no hesitation in cozying up to an antidemocratic US administration.