Following Trump Executive Order on AI, Congress Must Act
Kevin Frazier / Dec 12, 2025
The executive order on AI signed Thursday by President Donald Trump aims to ensure that the federal government takes the lead on AI policy questions that implicate the nation’s economic and national security. Whether the order’s implementation and the newly created AI Litigation Task Force succeed in that effort remains to be determined. In any event, Congress must now take the lead on regulating AI.
The United States constitutional order was devised with the expectation that Congress would address matters of national significance. Each day of inaction is tantamount to delegating AI governance to Sacramento and subjecting 300 million Americans to laws enacted without their consent. What's more, the longer AI labs are forced to comply with state laws that constrain their ability to train, test, and deploy their models (laws often grounded in concerns about existential risk), the less freely they will innovate and experiment. That freedom, paradoxically, is likely the best approach to uncovering how to maximize the benefits of AI and minimize its risks.
When Congress acts, it should adhere to three principles: experimentation, adoption, and information sharing. In practice, this means legislation that permits labs to deploy new models, rewards individuals and entities for integrating AI, and facilitates information sharing by both developers and deployers. These principles will set in motion a virtuous cycle of technological development that keeps the US at the forefront of AI innovation without imposing undue risk on the public.
Experimentation
Experimentation by the labs is the key to discovering new technical strategies that mitigate some of the known flaws of AI. It is uncontested that AI is not perfect. But to foreclose deployment of AI because of some risks is shortsighted. Proper experimentation rests on two conditions: toleration of errors that are "small and diverse" and a commitment to learning from those errors. Regulatory sandboxes, pilot programs, and legal safe harbors are the sorts of policies that satisfy those conditions and should be at the top of Congress’s agenda. Each of these frameworks clears legal thickets that might otherwise cause labs to delay deployment or to forgo experiments in novel fields. Critically, Congress must foreclose states from enforcing laws premised on the idea that the smallest possibility of existential disaster justifies extreme precaution, while encouraging and even rewarding states that develop their own experiment-friendly regulatory paradigms.
Adoption
Advances made possible by AI will mean little to the nation if the people fear AI and refuse to make use of it. Adoption of AI by the general public, and especially by small and medium-sized businesses, is essential to avoid a nation of AI haves and have-nots, as well as to ensure we do not lag behind adversaries that have taken a far more aggressive posture toward adoption. Technological progress generally leads to societal progress, but that relationship is contingent on mass adoption of the new technology. Decades of disparate access to electrification between more urban and more rural states resulted in wide gulfs in economic opportunity and longevity. A similar dynamic played out in the wake of globalization. Current regulatory trends suggest that we will again end up with a national bifurcation driven by the uneven impact of an emerging technology. An under-appreciated fact is that the very states rushing to regulate AI are those with the highest rates of AI adoption. They seem content to close the door to technological progress behind them, passing laws that may limit the availability of AI to Americans who have yet to use it, or their willingness to do so.
This is precisely why Congress must take an active role in facilitating AI adoption. The federal government has always had a duty to vigorously pursue a better future on behalf of the American people. The framers of the Constitution abandoned the Articles of Confederation because they desired a central government capable of advancing the well-being of the entire country.
As I have documented elsewhere, colonial governors and high-ranking officials repeatedly acknowledged throughout the eighteenth century that the government has an obligation to act in the interest of the general welfare, meaning the welfare of society as a whole, not just the welfare of individuals in economically dominant states. In short, the government must “meet the needs of society by appropriately managing public resources and public affairs” if it is to live up to its lofty duties. In the case of emerging technologies such as AI, this duty includes spreading awareness and adoption of the technology’s beneficial uses, something private actors may not be sufficiently incentivized to do.
Several specific policies align with this adoption mandate. As I have called for on several occasions, Congress should form an Office of AI Adoption with a charge similar to that of the Rural Electrification Administration. This Office would create a path for tech-savvy Americans to help their neighbors and local small business owners learn about AI and incorporate it into their professional and personal lives. An Adoption Corps would emulate the effectiveness of forward-deployed engineers, the lab employees sent to help customers integrate AI into their operations. Like riding a bike, using AI is best learned with training wheels; America will fall short of its potential with AI so long as it fails to heed this lesson.
Continuation of the status quo will produce AI abysses: regions in which AI talent and resources are scarce or absent altogether. Again, as prior waves of technology have shown, these sorts of regional disparities can have long-term negative economic and cultural consequences. Hence the need for Congress, not the states, to take primary regulatory authority over the direction, pace, and diffusion of AI innovation.
Information sharing
As experimentation and adoption take hold, Congress should devise mechanisms for AI developers and deployers to share information about the political and community consequences of AI diffusion.
Evidence-based policy requires evidence. As obvious as that may sound, it’s a principle being ignored by many states rushing to regulate AI. Despite acknowledgments from leading AI experts and policymakers that they are unsure of both the magnitude of AI risks and what qualifies as reasonable AI development practices, states have enacted or considered laws, such as SB 53 in California and the RAISE Act in New York, with rigid definitions and thresholds that are unlikely to hold up as AI advances in unpredictable ways.
Congress can demonstrate what evidence-driven policy looks like in practice by tasking the Center for AI Standards and Innovation, or another appropriate entity, with commissioning and compiling reports on different AI initiatives. This information can then shape future regulatory efforts.
Conclusion
If Congress embraces experimentation, adoption, and information sharing, it will not merely fill a governance vacuum—it will chart the only path capable of aligning AI’s trajectory with the nation’s long-term interests. The choice before lawmakers is not between regulation and laissez-faire. It is between a fragmented system driven by fear and a coherent national framework that treats AI as a public opportunity rather than a public threat. The former locks innovation behind state borders and entrenches inequities; the latter enables the country to learn, adapt, and benefit together.
A forward-looking Congress can ensure that AI becomes a broad-based engine of economic mobility, not another catalyst of regional decline. It can ensure that risk mitigation is grounded in evidence rather than speculation. And it can reaffirm a simple but foundational constitutional promise: that laws with nationwide consequences are made by representatives of the entire nation. The work ahead is undeniably complex, but the stakes are too high—and the potential gains too great—for further delay. Congress need not predict the future of AI, but it must create the conditions for all Americans to shape it.