Perspective

The Trump Administration’s AI Policy Framework Has an Ideology. It Just Won't Admit It.

Genevieve Smith / Apr 9, 2026

President Donald Trump delivers remarks at the White House AI Summit at Andrew W. Mellon Auditorium in Washington, D.C., Wednesday, July 23, 2025. (Official White House Photo by Joyce N. Boghosian)

In March, the Trump Administration released its National Policy Framework for AI, which includes legislative recommendations for Congress. The framework proposes seven priority areas: child protection, community safeguards as AI infrastructure is built out, copyright and intellectual property, free speech, innovation, an “AI-ready” workforce, and federal preemption of state laws. At a high level, the document favors removing barriers to AI development while keeping regulatory burdens to a minimum.

What's striking is what's missing from the framework. Algorithmic bias and discrimination, data privacy beyond children, transparency, and environmental impacts are entirely absent, despite being among the most well-documented risks in AI research. Without these elements, the document reads primarily as an industry growth strategy with limited safeguards.

A framework built on ideology, not neutrality

This absence is not accidental. The framework is laced with ideologies that reflect a particular worldview: one that sees AI technology as objective and neutral (technocracy), believes markets are fair arbiters of value while regulation is interference (Silicon Valley meritocracy), and treats AI progress as inevitable, whereby the only meaningful question is whether America wins (technological determinism). This worldview runs across the administration, appointed advisors, and other policymakers shaping the federal AI approach.

The administration's Executive Order on "Preventing Woke AI in the Federal Government" (EO) mandates that the federal government only procure AI systems that "prioritize historical accuracy, scientific inquiry, and objectivity," embedding the assumption that properly designed AI produces neutral truth. Senator Ted Cruz captured the meritocracy logic in remarks before the Senate Commerce Committee: "To lead in AI, the United States cannot allow regulation, even the supposedly benign kind, to choke innovation or adoption." And David Sacks, one of the framework’s lead architects, put the determinism plainly: "technology is going to happen. Trying to stop it is like ordering the tides to stop… we might as well be the leader." Sacks has since left his formal leadership role on AI, but his worldview is largely written into the framework.

Taken together, this ideology casts bias concerns as overblown, transparency requirements unnecessary, and accountability frameworks mere overreach. Under this belief system, the government's job isn’t to regulate the development of the technology, but simply to remove barriers to deployment.

The contradiction is clear. The framework warns against the government "coercing AI providers to alter content based on ideological agendas," yet it advances a distinctly ideological position. That ideology goes further than dismissing bias as a concern for AI; it actively reframes it. In this framing, bias mitigation is ideological interference, and reducing discriminatory outcomes means injecting liberal values and moving away from the "truth."

This is echoed in the EO, which connects the safety and moderation policies of AI models to a “leftist woke agenda.” Sacks reinforced this view in a December 2025 social media post, describing state legislation seeking to limit “algorithmic discrimination” as “ideological meddling” that should not be allowed, because “AI models should strive for the truth and be ideologically unbiased.”

Ignoring AI systems' measurable bias and unequal outcomes

The costs of this worldview are not abstract. Research across both predictive AI systems and generative AI shows they produce consistent, measurable harms at scale.

In a study of a widely used healthcare algorithm deployed across hospitals throughout the United States, researchers found that Black patients were assigned half the care of equally sick White patients. Why? The algorithm was predicting healthcare costs rather than illness, and systemic inequities mean less money is spent caring for Black patients than for White patients. Remedying the bias would have nearly tripled the share of Black patients receiving additional support.

The pattern extends beyond healthcare. In research I conducted on app-based AI credit scoring tools that facilitate small loans in low- and middle-income countries, I found that fintechs consistently took "gender blind" approaches, grounded in the belief that machine learning is objective and data reflects the truth. Models learn from proxies – part-time work, informal income, app access patterns – that correlate with gender because of existing social structures but are not necessarily causally related to repaying microloans. Meanwhile, algorithms are often optimized for profit. The result: women, despite being better repayers, receive fewer loans and lower amounts.

Under the White House framework, requiring healthcare companies and fintechs to audit for and intervene on these patterns would constitute an "ideological agenda." Without transparency, Black patients would not know they received less care, and women would have no way to know why they were denied loans or received smaller ones. What’s left is inequality dressed as neutral truth.

Generative AI is no different. Researchers find women are systematically depicted as younger, a distortion most stark in high-status, high-earning occupations. In a large-scale analysis of text-to-image models, my collaborators and I find that women dominate caretaking roles, while men dominate technical and physical labor roles – biases that exceed real-world statistics (a trend echoed in similar studies), rather than merely reflecting them. In an ongoing analysis of nearly one million images from five leading models, we further find that images of women are systematically lower quality than those of men.

The framework’s omissions thus matter. Without transparency requirements, there is no way to audit AI systems for these patterns, and little incentive for companies to hold themselves accountable. Without accountability for discriminatory outcomes, neutrality becomes a shield. And the framework's free speech rhetoric sets the stage to cast attempts at bias mitigation as ideological positioning. Meanwhile, powerful AI systems are increasingly black box technologies – where even developers cannot fully explain certain outputs – and foundation model developers are becoming more opaque, not less. The framework’s silence on all of this is not an oversight. It’s a logical consequence of a mutually reinforcing belief system, and it has real costs.

AI accountability is not a partisan issue—it is a governance necessity

In comments opposing efforts to address bias in AI systems, Sacks cited Google’s failed bias mitigation effort: a crude “add diversity and stir” approach that produced images of a Black George Washington and racially diverse Nazis. On that point, he’s right: it was a bad intervention. Yet one high-profile failure doesn’t erase the successful ones. In the healthcare study, removing healthcare costs as a proxy for health need eliminated racial bias in the model and helped hospitals deliver more effective care.

Transparency, audit requirements, and interventions to address harmful bias are not ideological overreach. They are basic tools of accountability, consistent with American values of liberty and justice for all.

At the federal level, policymakers should pursue regulation that starts from a different premise: AI is not neutral, and markets alone will not distribute its benefits fairly. In the absence of federal action, state regulators must still protect their citizens. New York City's Local Law 144 (the first US law requiring bias audits of automated employment decision tools) and Colorado's anti-discrimination in AI law (the first state law requiring protection from algorithmic discrimination across domains such as employment, housing, credit, and healthcare) represent exactly the accountability infrastructure the federal framework seeks to preempt. States should continue working with researchers to build this infrastructure and not cede ground.

This issue need not be a partisan debate. Making powerful technology work for everyone is something Americans should demand together. Predictive AI systems (those making consequential decisions about people in hiring, lending, and healthcare) are distinct from generative AI and the “race” to “win” that surrounds it. For predictive systems, the evidence of harm is clear, legal frameworks exist, and the economic case is strong. That is where consensus-building should start.

Finally, it is worth asking: What are we actually racing toward? Artificial general intelligence to beat China? And then what? A technology that doesn’t work for Black people, systematically undervalues women, renders marginalized groups invisible, and operates without accountability is not a foundation for meritocracy. It is a foundation for entrenching existing hierarchies at unprecedented scale and speed. Winning means more than getting to some abstract finish line first. It means building AI worthy of the American promise and getting the fundamentals right now, not after the tide has come in.

Authors

Genevieve Smith
Genevieve Smith is a research fellow at Stanford University in the Clayman Institute for Gender Research, founder of the Responsible AI Initiative at the UC Berkeley Artificial Intelligence Research Lab (BAIR), and serves as Professional Faculty at Berkeley Haas teaching on responsible AI innovation...
