
Accelerating AI in the US Government: Evaluating the Trump OMB Memo

Ellen P. Goodman / Apr 24, 2025

February 26, 2025: Director of the Office of Management and Budget Russell Vought listens at the first cabinet meeting of President Donald J. Trump's second term at the White House. (Photo by Jabin Botsford/The Washington Post via Getty Images)

With its recently released 2025 Office of Management and Budget (OMB) Memo on federal use and procurement of AI, Driving Efficient Acquisition of Artificial Intelligence in Government, the Trump Administration all but erases the Biden Administration’s 2024 predecessor OMB Memo, Advancing the Responsible Acquisition of Artificial Intelligence in Government.

Gone is any discussion of “responsibility” and the previously adopted testing requirements for “rights-impacting” and “safety-impacting” AI systems. Not surprisingly, gone are terms such as “equity,” “bias,” and “environmental.” The Trump OMB Memo basically picks up where the first Trump administration left off with AI policy, as if the intervening four years of AI policymaking and state capacity building had not happened — with one notable exception. The federal agency Chief AI Officers — a post that the Biden OMB Memo created — remain, although their duties have shifted from managing risk to speeding acquisition.

The difference between the two OMB approaches is not just the U-turn on AI risk and harms. The Biden Administration adopted the theory of “procurement as policy.” Its OMB Memo sought to use the federal procurement power to shape the market for more trustworthy AI. If the government leveraged its purchases (as well as grants) to demand risk assessments, transparency, risk mitigation, and ongoing adversarial testing from its vendors, then those practices would become standard throughout the market for AI products and services. The Trump OMB Memo evinces very little interest in using procurement as policy.

Given that OMB Director Russell Vought wants to make the federal government small enough to drown in a bathtub (approximating Grover Norquist’s quote), he is not too interested in policy. Rather, the focus of his OMB Memo is almost exclusively on deploying AI systems faster, cheaper, and with favorable terms around IP and data. Still, there is a bit of policymaking in the Trump OMB Memo, maybe incidentally. It appears to be designed to prevent vendors from locking the government into particular proprietary systems. To support competition, it pushes agencies to favor open model weight systems, open APIs, and interoperability and portability in system architectures. If most of the new OMB Memo is a rejection of what came before, this one piece is an amplification of it.

NIST and consensus standard-setting are gone

Biden OMB Memo: The Biden policy required alignment with NIST’s 2024 AI Risk Management Framework to ensure standardized risk assessment. The memo had also lifted up international standards, in whose formation NIST plays (or played) a leadership role.

Trump OMB Memo: In Trump’s new policy, there is no requirement to follow NIST’s AI RMF; agencies are free to come up with their own performance standards. The whole emphasis is on performance, rather than risk. Performance standards are surely needed, given how much AI doesn’t actually work. But the sidelining of NIST is another example of this Administration’s squandering of American soft power and state capacity. What NIST has to say about AI sociotechnical governance impacts the whole free world. The Trump OMB Memo ignores it.

Equity, civil rights, and environmental considerations are gone

Biden OMB Memo: The list of requirements for “rights-impacting” AI systems was long. In short, agencies had to ensure that vendors conducted risk and impact assessments and evaluations on an ongoing basis, conducted adversarial testing and red-teaming, and complied with various transparency and notice provisions. Many of these values were set forth in the 2022 Blueprint for an AI Bill of Rights and in the 2023 Executive Order on AI. The Memo had also acknowledged that high energy consumption for compute was a consideration that agencies should take into account in making procurement decisions.

Trump OMB Memo: The new policy mentions civil rights compliance and expects agencies to reserve the right, and the access, to conduct evaluations as necessary. But that’s about it. It replaces the “rights-impacting” and “safety-impacting” typology of the Biden OMB Memo with the single term “high-impact AI.” Agencies are instructed to determine, as best they can, whether they are seeking a high-impact AI and then notify vendors that they will need to meet additional transparency and impact assessment requirements. The term refers to:

AI with an output that serves as the primary basis for decisions or actions with legal, material, binding, or significant effect on: an individual or entity’s civil rights, civil liberties, or privacy; or an individual or entity’s access to education, housing, insurance, credit, employment, and other programs; or an individual or entity’s access to critical government resources or services; or human life, well-being; or critical infrastructure or public safety; or strategic assets or resources, including high-value property and information marked as sensitive or classified by the Federal Government.

Generative AI guidance is gone

Biden OMB Memo: It had required agencies to make sure the generative AI systems they procured transmitted watermarks, metadata, and other provenance markers on possibly deceptive synthetic content. Those systems also had to provide information about training data, data labor, compute, model architecture, and relevant evaluations. And they would be contractually bound to make best efforts to filter out CSAM, NCII, and other toxic content.

Trump OMB Memo: The Trump Memo says the government will develop “playbooks” on generative AI, but otherwise does not address it.

Implications for federal AI deployment and the AI market

If the Trump OMB Memo is implemented, we can expect agencies to move fast in acquiring AI systems. They will focus on cost and effectiveness, but not much on safety or harms. There may be a positive spillover from the OMB’s ostensible commitment to a competitive AI marketplace. If the federal government really favors smaller players, open model weights, and modular systems that pose less risk of lock-in, perhaps new entrants will have a shot at big federal contracts.

Among the many likely negative spillovers is the thing that won’t happen: a robust ecosystem of AI measurement and evaluation methods and standards. There is widespread agreement that this needs to improve significantly if buyers are to be able to compare systems and the public to trust that they work effectively and fairly. The Biden OMB Memo sought to turbocharge that development through federal demand. The Trump OMB Memo doesn’t ask for it and seems to view that kind of science as an obstacle to AI adoption.

The next shoe to drop will be the Trump Administration’s rescission of Biden’s National Security AI Memo. The Biden Administration had sought to use the massive heft of federal demand to shape the market for trustworthy AI. Undoubtedly, the revision will trash that approach. There, too, I suspect, the accelerationists will have their way.

A version of this analysis was originally published on Medium.

Authors

Ellen P. Goodman
Ellen P. Goodman is a Professor at Rutgers Law School, Co-Director of the Rutgers Institute for Information Policy & Law (RIIPL), and a Senior Fellow at the Digital Innovation & Democracy Institute at the German Marshall Fund.
