Perspective

Trump's Order Against 'Woke AI' Will Create Real Harm

Camille Stewart Gloster / Jul 29, 2025

US President Donald Trump delivers remarks at the White House AI Summit at Andrew W. Mellon Auditorium in Washington, D.C., Wednesday, July 23, 2025. (Official White House photo by Joyce N. Boghosian)

Before the campaign against it became federal policy, “woke AI” was a culture war tagline. Now, under the Trump administration’s AI Action Plan issued last week and a complementary executive order titled “Preventing Woke AI in the Federal Government,” this political slogan has evolved into a regulatory directive with potentially wide-reaching implications. The AI Action Plan and the executive orders that came along with it don’t just frame values; they reshape the criteria by which AI systems are evaluated, procured, and integrated into public infrastructure.

Of course, every administration should be expected to bring its own priorities to AI governance—be it innovation, national security, equity, or economic competitiveness. That’s normal. Trump’s AI Action Plan reads like a nationalist innovation agenda: dismantle regulatory guardrails, fund compute and infrastructure at scale, and establish “American AI” as the global gold standard.

What’s different here is not the shift in emphasis from the Biden administration, but the deliberate removal of core scientific and technical foundations from federal AI policy. Nested within the plan is a mandate to purge references to systemic bias, climate science, misinformation safeguards, and diversity considerations from AI risk frameworks and federal procurement requirements.

That shift isn’t about achieving neutrality; it’s about redefining neutrality through a political lens, often at the expense of technical integrity.

“Woke” is not a technical term

The executive order on “woke AI” bars federal agencies from procuring any AI system that “exhibits political or ideological bias,” and the plan instructs the National Institute of Standards and Technology (NIST) to revise its AI Risk Management Framework to remove mentions of misinformation; diversity, equity, and inclusion; and climate change. According to the order, agencies must procure only those LLMs developed with two stated principles: “truth-seeking” and “ideological neutrality.”

But that second principle gets weaponized quickly. The directive labels DEI an "existential threat" to reliable AI, claiming it causes models to suppress factual information and embed “concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism.”

This framing should alarm anyone who cares about evidence-based design. It falsely equates the acknowledgment of bias with ideological distortion. It treats inclusion as inaccuracy. And it attempts to legislate scientific uncertainty and sociological reality out of existence.

Let’s be clear: unconscious bias is a scientifically validated phenomenon, not a partisan conspiracy. Bias forms through pattern recognition, cognitive shortcuts, and lived experience. To deny its role in data, decision-making, or digital systems is to deny science.

AI doesn’t invent bias on its own. It reflects and amplifies the bias embedded in the data it is trained on—data generated, selected, and labeled by humans. That’s why acknowledging human bias isn’t ideological. It’s foundational to building AI systems that are accurate, accountable, and safe, which requires proactively embedding transparency and epistemic rigor into model development from the start.
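
To make that concrete, here is a minimal, hypothetical sketch (synthetic data, scikit-learn) of how a model trained on biased historical hiring labels reproduces the disparity in its own predictions, even when the group attribute itself is dropped from the features:

```python
# Minimal sketch with synthetic data: a classifier trained on biased
# historical labels reproduces that bias via a correlated proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)            # two demographic groups (0 and 1)
skill = rng.normal(0, 1, n)              # true qualification, group-independent
proxy = group + rng.normal(0, 0.5, n)    # e.g., a zip code correlated with group

# Historical labels: equally skilled group-1 candidates were hired less often.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

# Train WITHOUT the group column -- only skill and the proxy feature.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# The disparity in the historical labels reappears in the model's predictions.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted selection rate {pred[group == g].mean():.2f}")
```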

In fact, this entire effort is a case study in bias—a textbook “mindbug,” as Dr. Mahzarin Banaji of Harvard describes them: ingrained patterns of thought that lead us to make predictable, flawed assumptions, even in the face of evidence. When we refuse to account for those distortions in data sets, algorithms, or risk frameworks, we don’t eliminate bias, we encode it more deeply. When truth itself is politicized, “neutral” becomes shorthand for strategic erasure.

Real harms from politicized AI design

Removing fact-based concepts from AI systems isn’t hypothetical. It causes tangible harm. Here are just a few examples:

  • Hiring algorithms that ignore structural bias have filtered out qualified candidates from underrepresented backgrounds, as seen in Amazon’s now-shelved recruitment AI.
  • Public-sector AI tools used in policing, health access, and education risk encoding past discrimination as "neutral" when DEI is stripped from evaluation metrics. We have tools to effectively reduce bias while preserving or improving accuracy; one such tool is sketched after this list.
  • Climate-informed AI models are critical for accurate prediction in agriculture, disaster response, and national defense. Political pressure to strip climate considerations from federal frameworks like NIST’s AI RMF risks degrading their precision and strategic value. Even if agencies like DoD or NOAA aren’t explicitly targeted, quietly erasing “climate change” from modeling inputs could weaken wildfire forecasts, flood risk assessments, and crop yield predictions—setting back scientific progress and undermining national resilience.
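
One such tool, offered here as a hedged illustration rather than a prescription, is the reweighing technique of Kamiran and Calders (2012): training examples are weighted so that the sensitive attribute and the outcome label are statistically independent before the model is refit. A minimal sketch, reusing the hypothetical hiring data from the earlier example:

```python
# Reweighing sketch: weight each (group, label) cell by
# P(group) * P(label) / P(group, label), then refit with sample weights.
import numpy as np

def reweighing_weights(group: np.ndarray, label: np.ndarray) -> np.ndarray:
    """Per-example weights that make group and label independent in training."""
    weights = np.empty(len(label), dtype=float)
    for g in np.unique(group):
        for y in np.unique(label):
            cell = (group == g) & (label == y)
            expected = (group == g).mean() * (label == y).mean()
            observed = cell.mean()
            weights[cell] = expected / observed if observed > 0 else 0.0
    return weights

# Refitting the earlier model with these weights should narrow the gap in
# selection rates between groups while largely preserving overall accuracy:
# w = reweighing_weights(group, hired)
# model = LogisticRegression().fit(X, hired, sample_weight=w)
```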

These harms compound over time. They disproportionately affect already marginalized communities. And they degrade the accuracy, adaptability, and legitimacy of the systems we increasingly rely on.

What about overcorrection?

Let’s acknowledge a valid concern: sometimes model alignment does go too far. Google’s Gemini image generation blunder is a case in point. When users asked it to generate images of historical figures, the system returned visual outputs so diverse they erased historical accuracy. That was a real design failure, and it was referenced in the EO. The intent, to reduce racial bias, was sound, but the implementation bent truth in the other direction.

Here’s the key difference: when a model overcorrects, we can iterate. We can test, adjust, and retrain. Public scrutiny forces transparency. Product updates reflect new lessons. These are bugs, not baked-in ideology. The decision to launch Gemini despite this flaw reflects Google’s rush to market, not some underlying ideological program.

But when entire concepts like DEI or climate science are banned from the training loop altogether? When federal funding is conditioned on ideological purity? That’s not a fixable bug. That’s a skewed operating system. Iterative policy is how we refine messy trade-offs between fairness, accuracy, and utility. Blanket erasure of scientific concepts is how we build brittle systems that shatter on contact with the real world.

You can’t secure what you can’t acknowledge

Stripping fact-based concepts like climate risk, disinformation, or structural bias from federal AI systems doesn’t just harm accuracy; it also undermines security. AI systems increasingly support critical infrastructure, threat detection, and emergency response. Excluding climate data from AI models could introduce significant disruption or other negative impacts, particularly in high-risk contexts, from critical logistics and supply chains to military bases and disaster zones. If misinformation detection is defunded, federal systems grow more vulnerable to adversarial exploitation—from state-backed influence ops to AI-poisoned data attacks.

In this landscape, neutrality without rigor is a liability. Security professionals understand: your weakest input becomes your highest risk surface. Distorted AI inputs don’t just produce skewed insights—they introduce new vulnerabilities into national systems.

What happens next? Courts, contracts, quiet creep, and growing resistance

While no lawsuits have yet challenged the executive orders that accompanied the AI Action Plan, legal pushback is likely and could come from multiple directions. First Amendment claims may emerge, particularly if procurement policies penalize AI developers for including scientifically valid content—raising issues of viewpoint discrimination. Administrative law challenges could also surface if agencies like NIST or the Office of Management and Budget (OMB) revise frameworks without public comment or a scientific basis, violating the Administrative Procedure Act. And vendors whose contracts are rejected for supposed “ideological bias” may challenge those denials in court, pressing judges to define what “neutrality” even means in this policy context.

Still, the more immediate risk is bureaucratic. This vision is likely to advance not through sweeping legislative changes, but through procurement language, grant criteria, and agency-level directives. Clauses invoking “American values” may start appearing in funding solicitations. Agencies may sanitize language around race, gender, or climate to avoid disruption. Institutions may self-censor in ways that escape public scrutiny but carry profound downstream effects. That quiet creep will shape which tools are built, who gets served, and what truths are deemed permissible.

The stakes go beyond regulatory wonkery. When biased or censored AI systems are embedded into federal infrastructure, they will start shaping how millions of people receive public services, interact with social programs, or get flagged for benefits eligibility. This isn't theoretical. When we train AI to ignore context, we train it to ignore people. In 2007, the Social Security Administration used automated systems and outdated rules to flag disability claimants for termination. The system disproportionately targeted poor and rural individuals. In multiple instances, federal and local agencies have used facial recognition systems (e.g., Clearview AI or older Amazon Rekognition systems) to make arrest decisions. These tools have shown significantly higher error rates for people with darker skin, particularly Black women, leading to wrongful arrests.

Federal agencies often provide funding or set procurement standards for these systems, meaning that systemic flaws—left unacknowledged due to “neutrality” mandates—can directly impact liberty. When it comes to public infrastructure, this means lives fall through the cracks—quietly, systematically, and often without recourse. These shifts won’t stay confined to the federal level; state and local governments often follow federal procurement standards and in this case may be compelled to align.

But there will be resistance.

Already, civil liberties groups, AI ethicists, and cybersecurity professionals are signaling concern. We may see an uptick in FOIA requests, inspector general complaints, and legal challenges from individuals harmed by federal systems that rely on skewed models. Journalists and watchdogs are likely to document changes to disclosures, moderation policies, and procurement outcomes. Advocates will push for transparency in how AI is trained, tested, and updated, and will continue to call out platforms that sanitize outputs in the name of "neutrality" while quietly erasing scientific and sociological consensus. And I hope individuals will become more discerning: cross-referencing outputs, comparing models, and scrutinizing the tradeoffs beneath the surface.

AI governance isn’t static. It needs constant calibration as the technology, risks, and public values evolve. I’ve supported that evolution across sectors and administrations, and will continue to do so. What matters most is ensuring that recalibration is evidence-based, not ideologically imposed. Because in the long run, it’s not just about what models say—it’s about what systems ignore, and who gets left behind.

Real neutrality isn’t about silencing facts or scrubbing complexity. It’s about grounding systems in science, transparency, and integrity, then giving people the tools to make their own choices.

Authors

Camille Stewart Gloster
Camille Stewart Gloster, Esq. is the CEO of CAS Strategies, LLC and the former Deputy National Cyber Director for Technology & Ecosystem Security for The White House. In her role, Camille led technology, supply chain, data security, and cyber workforce and education efforts for the Office of the Nat...
