The US Government’s Use of Elon Musk’s Grok AI Undermines Its Own Rules

J.B. Branch / Oct 30, 2025

J.B. Branch is the Big Tech accountability advocate for Public Citizen’s Congress Watch division.

When the federal government adopts a new technology, it should be bound by the same principles that underlie democracy itself: fairness, transparency, and truth. Yet the recent decision by the General Services Administration (GSA) to make Grok—the large language model created by Elon Musk’s xAI—available across federal agencies defies those principles and violates the government’s own binding rules for AI safety and neutrality.

Yesterday, Public Citizen and a coalition of civil society organizations urged the Office of Management and Budget (OMB) to suspend and withdraw federal deployment of Grok. Our concern is simple: the large language model developed by Musk’s company, xAI, has been shown to produce racist, antisemitic, conspiratorial, and false content. The decision to deploy Grok, therefore, is not just reckless; it appears to violate the Trump Administration’s own AI guidance.

The contradiction at the heart of federal AI procurement

Executive Order 14319, “Preventing Woke AI in the Federal Government,” mandates that all government AI systems be “truth-seeking, accurate, and ideologically neutral.” OMB’s corresponding guidance memos (M-25-21 and M-25-22) go further: they require agencies to discontinue any AI system that cannot meet those standards or poses unmitigable risks.

Grok fails these tests on nearly every front. Its outputs have included Holocaust denial, climate misinformation, and explicitly antisemitic and racist statements. Even Elon Musk himself has described Grok as “very dumb” and “too compliant to user prompts.” These are not isolated glitches. They are indicators of systemic bias, poor-quality and trollish training data, inadequate safeguards, and dangerous deployment practices.

In Senate testimony, White House Science Adviser Michael Kratsios acknowledged that such behavior directly violates the administration’s own executive order. When asked about Grok’s antisemitic responses and ideological training, Kratsios agreed that such outputs “obviously aren’t truth-seeking and accurate” and are “the type of behavior” the order sought to avoid.

That acknowledgment should have triggered a pause in deployment. Instead, the government expanded Grok’s footprint to every agency. This contradiction, banning “biased AI” on paper while deploying a biased AI system in practice, undermines both the letter and the spirit of federal AI policy.

Why this matters beyond bureaucratic compliance

To be clear, Grok’s deployment is not an isolated case. The Trump administration’s new USAi program allows federal employees to experiment with models from OpenAI, Anthropic, Google, and Meta under $1 contracts—a move that entrenches Big Tech dominance in government systems. Marketed as “safe innovation,” the program instead risks locking agencies into untested, corporate-controlled algorithms while sidelining smaller competitors. These deals could replace public judgment with private influence at the heart of federal decision-making.

What makes Grok unique, however, is its outlier propensity, relative to other mainstream LLMs, to parrot far-right and other extremist views. This is a question that goes beyond procurement paperwork. It’s about whether the government is reinforcing—or eroding—public trust in a critical technology. Every decision to deploy an AI system in public administration sends a message about the values our democracy upholds. When the government endorses an AI tool known for bias and falsehoods, it legitimizes disinformation, invites future misuse, and jeopardizes public confidence in the fairness of government systems.

An ideologically skewed AI system embedded in federal decision-making risks distorting how facts are communicated to the public and how policies are implemented. It threatens to turn tools of governance into instruments of propaganda. The integrity of democratic governance depends on ensuring that the systems the government uses to communicate, analyze, and make decisions are grounded in accuracy, neutrality, and accountability.

The danger isn’t hypothetical. If Grok can spread conspiracy theories and antisemitic claims online today, what happens when that same model is used to summarize briefings, draft memos, or answer public questions for a federal agency tomorrow? The stakes are not just technical. They are democratic.

What needs to happen now

OMB must immediately suspend Grok’s deployment and conduct a full compliance review under its own memos. It should publicly release any safety tests, red-teaming results, or risk assessments that informed the GSA’s decision to procure Grok. And it must clarify whether Grok has been formally evaluated for compliance with Executive Order 14319’s neutrality and truth-seeking standards.

Congress should also hold a hearing at which administration officials explain how GSA’s decision to adopt Grok aligns, or fails to align, with the Trump administration’s binding policies. As a check on the executive branch, it is Congress’s role to fully understand how this procurement meets the administration’s “neutrality and truth-seeking” standards.

These steps are not bureaucratic box-checking. They are the minimum needed to ensure the government abides by its own rules and maintains integrity in how it adopts powerful new technologies. AI in the public sector must not become a Trojan horse for ideological capture or commercial favoritism.

The broader lesson

The Trump administration’s broader AI procurement strategy exposes a deeper problem: the federal government is increasingly funneling contracts to a small circle of dominant tech firms. This is the opposite of competitive innovation. It rewards the same companies that have already received billions in federal support and deepens their grip on public infrastructure.

At the same time, Grok’s federal procurement is a case study in how quickly AI can slip from “innovation” to institutional risk when guardrails are ignored. The government’s role should be to model responsible AI adoption, not to rubber-stamp systems that amplify hate, falsehoods, or political agendas while entrenching corporate gatekeepers at the center of public decision-making.

Ultimately, this is about more than one AI tool. It’s about whether our government can still distinguish between a technology that serves democracy and one that serves power.
