Discount AI Brings Premium Risks To Public Procurement
Nina-Simone Edwards / Aug 28, 2025
OpenAI CEO Sam Altman speaks onstage during the TechCrunch Disrupt conference in October 2019 in San Francisco, California. (Steve Jennings/Getty Images for TechCrunch)
Amid persistent political rhetoric casting government operations as inefficient, unnecessarily bureaucratic, and unresponsive to the public, opportunities to modernize government are increasingly being welcomed uncritically, including calls to integrate artificial intelligence. Meanwhile, the public’s rapid embrace of generative AI tools like ChatGPT and Claude — fueled by AI companies’ relentless hype — is normalizing their widespread use, despite mounting evidence of AI harms ranging from environmental costs to privacy risks. This AI hype cycle is creating the perception that the benefits outweigh the costs.
The Trump administration has been particularly fixated on “winning” the so-called AI race. Yet the very premise of this race is unclear. What marks the finish line: developing the most advanced tools, building the most data centers, or amassing the largest troves of data? The competitors are also not fully defined. Is this a contest between nations, or between US companies themselves? What is clear is that the race has already begun. In this framing, speed becomes the greatest asset. But prioritizing speed over ensuring that AI is safe, equitable, and resistant to exploitation only means that harms arrive faster.
The recent AI Action Plan states its aim as “accelerating innovation, building AI infrastructure, and leading in international diplomacy and security.” In pursuit of this plan, the General Services Administration (GSA) announced this month partnerships with AI companies OpenAI and Anthropic that will provide government agencies with access to their AI tools for just $1 each.
In its press release announcing its OneGov partnership with Anthropic, the GSA said it is committed to providing the federal workforce with the “transformative power of AI to modernize operations, improve decision-making, and deliver better results for taxpayers.”
Although certain AI tools and systems could theoretically be useful, particularly for more repetitive or routine tasks, the GSA’s ambitions for AI are outweighed by the harms generated by large-scale AI deployment and use. And in the aftermath of the 2024 election, the promise of AI as a transformative force in government seems increasingly doubtful, especially given the privacy abuses and disparate practices attributed to the “transformative” work of the Department of Government Efficiency (DOGE).
Operations are modernized but captured by the tech industry
Long before the AI Action Plan, agencies were already using AI to modernize government operations. The US Patent and Trademark Office’s fiscal year 2023 report highlighted the use of AI in improving patent search processes. The State Department, among other agencies, has deployed an internal chatbot that helps with email drafting and document translation, and the Department of Energy has implemented AI to search historical documents and summarize search results. These tools can contribute to overall agency efficiency and, in practice, have done so. However, the Roosevelt Institute notes that federal workers are often left with additional duties when AI is integrated because, on top of their normal job requirements, they must also correct “AI’s mistakes.” Like any new tool, AI comes with benefits and the added burdens of training, oversight, and time.
Yet, with the $1 deal, government agencies may begin integrating AI tools into their systems at a faster pace and larger scale. While federal workers continue to grapple with the new technologies, Anthropic and OpenAI have secured a partnership that places the government in an unbalanced position, consolidating power in the hands of a few tech companies. The federal government could become reliant on these private companies in the event of breaches, failures, or other system breakdowns. A single compromise of either Anthropic or OpenAI could expose sensitive or otherwise secure government data. Even if data is not leaked, it could be retained, repurposed, or even subpoenaed (depending, in part, on who is deemed to “own” the data). Breaches have previously occurred with government contractors exposing internal documents, but the risk here is far greater due to the breadth of data these companies will handle, the government’s resulting dependence on them, and the opacity of the legal protections governing that data.
‘Improvements’ to decision-making risk discrimination and inconsistency
From wrongful arrests to improper denials of benefits to bias in risk assessments, there are countless examples of harmful government uses of AI. These tools are often touted as being able to make better, unbiased decisions, but they very often fail. It has yet to be proven that AI truly can improve government decision-making without reproducing or amplifying bias and discrimination. People are jailed, fined, and otherwise penalized based on decisions rooted in what began as a few lines of code.
Further, AI has yet to demonstrate consistent usefulness in practice. Researchers note that there is “little evidence that AI meets the performance requirements necessary to ensure consistent, secure public service.” Without concrete proof that AI can be integrated into government services in a way that is reliable, non-discriminatory (i.e., fair to protected classes), and unbiased (i.e., free from systematic error or distortion in predictions or decisions), claims about its potential to improve services remain hollow. Scholars like Ruha Benjamin, Timnit Gebru, and Joy Buolamwini have long highlighted how human biases are encoded in AI systems, which will ultimately impact the decisions made.
Delivering ‘better’ results for taxpayers means a lack of governmental control
If modernized operations remain captured, and improvements to decision-making continue to be biased, it is unlikely that AI will suddenly deliver better results. The entrenchment of private companies may be the primary obstacle to meaningful improvements for taxpayers. Among other rights, taxpayers deserve privacy over their data. Government agencies have already lost significant control over who has access to internal records, with DOGE staff reportedly having sweeping, unauthorized access. The newly consolidated power of OpenAI and Anthropic risks funneling sensitive information into the infrastructure of those companies.
Private companies are not held to the same transparency and privacy standards, such as FOIA, which grants the public the right to request access to government records, or the Privacy Act, which governs information about individuals in federal systems. The uncertainty of privacy protections raises questions of surveillance and mission creep: how much of our data will be used to endanger marginalized communities, or even to train models, in ways that we are currently unaware of?
Ultimately, this $1 deal increases industry capture, uncertainty, training burdens, and biased decision-making. This is not about the price — it is about power. The deal entrenches a two-tiered system: government and corporate elites gain expanded AI capacity, while federal workers shoulder additional workload, and communities are left with few protections.
This low-cost government AI procurement is a false bargain, bringing risks that overshadow the low price. Although the AI Action Plan boasts ambitious goals, without adequate safeguards, large-scale AI integration efforts risk becoming avenues for exclusion rather than improvement. The future of AI in government should not be determined by private-sector giveaways, but by laws and policies that safeguard communities against inequitable impacts and data exploitation. A $1 deal today could cost millions their privacy and freedom tomorrow.