Trump’s AI Policy Framework Leaves Most Vulnerable Exposed
Sydney Saubestre / Mar 27, 2026
President Donald Trump delivers remarks at a press conference, March 9, 2026. (Official White House photo by Daniel Torok)
The White House’s new National Policy Framework for Artificial Intelligence, released last Friday, doubles down on a push for AI deregulation. Yesterday, Speaker Mike Johnson (R-La.) took up the call, signaling plans to translate the framework into law.
The framework covers seven areas: child safety, free speech, intellectual property, workforce development, energy permitting, innovation, and preemption of state AI laws. While the preemption provision was listed last, it will have an outsized impact. The framework proposes to preempt state AI laws that impose "undue burdens," preserving only a narrow set of state authorities: generally applicable laws protecting children, preventing fraud, and protecting consumers; state zoning authority over AI infrastructure; and requirements governing a state's own use of AI. Anything outside those boundaries—and much falls outside them—states would have no authority to address, regardless of the harms their residents face. This is not a fringe concern—polls consistently show majorities of Americans across party lines support more oversight of AI, not less.
Understanding what the framework declines to protect requires understanding the assumption it is built on.
Tucked into Section VII is a sentence that reveals more about the document's underlying logic than perhaps any other: States should not burden Americans' use of AI "for activity that would be lawful if performed without AI." The assumption embedded in this clause—that AI simply accelerates existing activity rather than transforming it—is what makes the rest of the document possible: a framework that can gesture at protection while systematically declining to provide it.
AI's capacity to process data at scale doesn't just speed up existing activities; it can also change their character entirely. A landlord rejecting one rental applicant based on a background check is a human decision subject to human limits and human accountability. A landlord using AI to screen thousands of applications simultaneously, drawing on historical patterns that reflect decades of discrimination, can produce outcomes that would never be tolerated if the full pattern were immediately apparent. This can happen invisibly—at a scale that makes individual challenge nearly impossible. The activity looks the same, but the harm is categorically different. The framework's assumption that these scenarios are equivalent is not a neutral observation. It is a choice to define a whole category of AI harm out of existence.
The document contains some normative statements about how AI shouldn’t be used that stop well short of actionable commitments. For example, it emphasizes that existing law enforcement efforts should be “augmented” to “combat AI-enabled impersonation scams and fraud” that target seniors. Deepfake scams are a problem and much more needs to be done to address them, but doing so will require regulation, digital literacy, and the kind of institutional capacity this framework explicitly declines to build.
That gap between naming a harm and committing to address it runs throughout the document. The framework’s internal contradictions reveal more about its priorities than its principles do; simply put, it does not treat all AI harms equally. It treats some as urgent policy problems and others as issues the market, the courts, or "existing regulatory bodies" can sort out while constraining their ability to do so.
Protection without rights
The framework's most developed section—and in many ways its most sincere—is on child safety. It is worth examining closely, not because children are the only people this framework fails, but because they are the people it most tries to protect. Even here, where the political will to act is strongest, the limits of its conception of protection are instructive: the framework treats protection as something done to people rather than with them or for them on their own terms.
The harms to youth that this framework addresses are real, widespread, and need urgent attention. Child sexual exploitation, deepfake abuse, and unauthorized digital replicas of people's voices and likenesses are serious, and the framework is right to point out that they need to be curbed, especially as companies have repeatedly shown that they can't be relied on to self-regulate. But the harms it highlights share a common characteristic: they are visible, discrete, and politically legible. They are concrete injuries that everyone should be acting to prevent.
Even though the framework rightfully calls these harms out, it makes choices about how to address them that are worth examining. Some provisions—including reducing the risks of sexual exploitation and affirming that existing child privacy laws apply to AI—could be genuinely valuable if enacted correctly. But the framing throughout centers parents and guardians, not children themselves.
The agency of children—their right to privacy, to information, to digital participation on their own terms—is largely absent. Children appear in this framework as objects of protection, not as people with their own stake in the outcome. The framework is designed to give adults more control over their children's digital lives, and while some parental controls can be part of the solution, they are not the same thing as giving children more safety, more privacy, or more recourse when something goes wrong.
This matters, especially now. Media literacy and critical thinking education are under coordinated attack from some of the same political forces that produced this framework. The previously released AI Action Plan called for the integration of AI into education curriculum while, in the same document, calling on the National Institute of Standards and Technology (NIST) AI Risk Management Framework to eliminate references to misinformation. That is not a tension the administration failed to notice—it is a choice about what kind of AI literacy it wants to produce.
Asking parents to manage their children's digital environments assumes parents themselves are equipped to navigate an information ecosystem that is increasingly difficult to parse. Protecting children online cannot be accomplished by shifting to parental controls alone. It requires equipping both children and adults with the tools to evaluate what they're seeing, to understand what the systems shaping their lives are actually doing, and to decide which of those systems they want more control over. The framework's vision of child protection is narrower than the problem it's trying to solve.
The consequential harms they don’t want seen
The harms that don't appear here are different in character. AI systems trained on historically biased data will perpetuate those same biases in hiring, healthcare, housing, and credit decisions. They can do so in ways no individual human decision-maker ever could and in ways that are nearly impossible to detect or challenge. The use of AI in the criminal legal system, in decisions about who gets public assistance, in healthcare—these are domains where the evidence of harm is already accumulating and where the absence of a federal standard is most acutely felt. The framework contains no affirmative rights framework for the people on the receiving end of these decisions: no privacy protections beyond children, no due process requirements, and no non-discrimination standards. The administration is not unaware of these issues. Its framework simply treats them as someone else's problem.
The framework defaults to treating AI as a logistics problem: how much power it needs, how fast the infrastructure can be built, and how quickly workers can be trained to use it. That framing has a real appeal; it makes AI legible as something we can measure, build, and optimize. But it also displaces a harder set of questions about what AI actually does to people—in the decisions it makes about their lives and in the assumptions baked into how it's built. The contradictions that follow are not isolated inconsistencies. They reflect the same underlying choice: to make the framework's ambitions concrete and its obligations vague.
Purposeful contradictions
The clearest way to see what a framework is actually doing—as opposed to what it claims to be doing—is to look at where it contradicts itself. Throughout, there are commitments that cannot both be true, and the framework simply declines to notice.
Privacy for children but federal data for industry. The framework affirms that child privacy protections apply to AI systems and calls for limits on data collection for model training. Two sections later, it recommends making federal datasets available to industry and academia for AI model training. Those datasets include information about individual Americans: their health, their finances, and their interactions with the government. The framework does not explain how both commitments hold simultaneously. It simply holds them and moves on, against a backdrop where federal data has already been made available in ways that would have been unthinkable two years ago.
A free speech argument that cuts both ways. The framework proposes preventing the government from pressuring AI companies to change how they handle content based on political agendas. That's a legitimate concern. But earlier this year, the administration issued an executive order directing AI systems to eliminate what it characterized as ideological bias, which is precisely the kind of government pressure on AI content the framework claims to prohibit. You cannot simultaneously argue that the government shouldn't tell AI companies what to say and issue directives about what they should say.
AI literacy that can't name what it's preparing you for beyond innovation. The framework calls for incorporating AI training into apprenticeships and education programs, studying how jobs are shifting, and building capacity at land-grant institutions. None of that is wrong. But meaningful AI literacy isn't just knowing how to use these tools; it also means understanding how they fail, what assumptions are baked into them, and when handing a decision over to an algorithm is a moral failure. A framework that has defined systematic AI harms out of scope can't coherently ask schools and workforce programs to prepare people to recognize and question them. It is preparing workers to generate value from AI, not equipping the people most affected by it to question it.
Though these contradictions might seem like the product of hasty drafting or competing stakeholder pressures, a pattern this consistent reflects a framework written by drafters who know precisely which tensions they are willing to resolve and which they prefer to leave unexamined.
The people who wrote this framework are not unaware that AI is being used to make consequential decisions about people's lives. They appear to have simply decided that those people are an acceptable cost of someone else’s race to the top.
That is a choice, a choice about whose reality gets treated as a policy problem and whose gets treated as an externality. The people who will bear the cost of that choice—screened out of jobs, denied housing, flagged by systems they cannot see and cannot challenge—are the same people who are largely absent from this document. Their absence is the document's most honest statement of intent.