AI Export Controls Need A Touch-Up, Not An Overhaul
Joseph Hoefer / Apr 24, 2025
Then-President Joe Biden meets with President-elect Donald Trump, Wednesday, November 13, 2024, in the Oval Office. (Official White House Photo by Cameron Smith)
The future of America’s leadership in artificial intelligence may hinge on how it navigates one of the most complex policy tools in its arsenal: export controls.
In recent months, some policymakers aligned with President Donald Trump have signaled an aggressive shift on the issue — one that could dramatically expand the Commerce Department’s Entity List and override the Biden administration’s more measured approach to governing the international diffusion of advanced AI models.
A new executive order or presidential memo is reportedly in the works to revise the Biden-era AI diffusion interim final rule (IFR) with a more sweeping, top-down regime. This comes amid a broader wave of trade upheaval, including fresh tariffs and new licensing requirements on chips, which risk further straining already fragile tech supply chains and shifting the global AI calculus.
Proponents of this shift argue that AI capabilities are advancing too quickly for existing frameworks to keep up — that model weights could leak, that adversaries could fine-tune US models for malicious ends, and that voluntary disclosure alone cannot prevent misuse.
National security should always be a top priority. But a heavy-handed, unilateral strategy risks turning a necessary shield into a blunt instrument.
If the US overcorrects, it could undermine its own innovation ecosystem — and drive away the allies it needs most in shaping responsible AI norms globally. Instead, the moment calls for a more strategic framework: one that protects sensitive capabilities while keeping the US connected to the global networks that make innovation possible.
Export controls are a tool, not a strategy
The Biden administration’s AI diffusion IFR attempted to strike a careful balance. It introduced compute thresholds and reporting requirements for frontier AI models, laying the groundwork for a modern export control regime grounded in transparency and cooperation. While imperfect, it reflected an acknowledgment that innovation is global and that rigid controls could do more harm than good.
But with geopolitical friction rising and a new round of tech-related trade restrictions in motion, the emerging Trump-era posture appears to abandon that balance. Expanding the Entity List to sweep in dozens of AI-linked firms — many without clear military ties — and potentially rewriting the IFR through executive action would bypass critical stakeholder input. Such moves risk creating a chilling effect on research, business collaboration, and even academic partnerships, particularly in cases where the national security rationale is vague or undefined.
This approach assumes that broader restrictions will reduce adversary access. In practice, however, they may instead invite jurisdictional arbitrage, weaken compliance incentives, and reduce visibility into where models are actually going.
The innovation ecosystem is global
American AI leadership is built not just on compute and capital, but on openness. US universities, startups, and research labs thrive in part because they attract top-tier global talent and collaborate across borders. Many of today’s most advanced models are the product of international research teams. Open-source frameworks, in particular, are often stewarded by contributors from multiple continents.
Sweeping, ambiguous export controls could fracture this ecosystem. If trusted allies and neutral countries perceive US policy as protectionist or unpredictable, they may begin building alternative AI development pathways insulated from US influence. For multinational firms and academic institutions alike, the signal from Washington matters. Layered atop recent tariff volatility and rising uncertainty over tech exports, an overly aggressive approach could push them to relocate research and development abroad, weakening US competitiveness in the process.
Allies, not isolation
The Biden administration made clear progress in aligning democratic nations around AI standards, including through the US-EU Trade and Technology Council, the Bletchley Declaration, and the G7 Hiroshima Code of Conduct. These efforts show that international cooperation on AI governance is possible, even amid rising geopolitical tension.
Rather than discard this progress, the next administration should build on it. A more collaborative export control strategy would focus on specific high-risk use cases, such as military applications and surveillance technologies, while creating streamlined mechanisms for working with trusted partners. That might include shared assurance labs, pre-vetted licensing agreements, and coordinated thresholds for model risk classification.
A modern framework for a connected world
Export control tools must evolve alongside the technologies they aim to regulate. The Entity List, designed for an earlier era of physical goods and semiconductors, is ill-equipped to handle the nuances of large-scale, distributed AI development. It paints with too broad a brush, often failing to distinguish between dangerous actors and legitimate players.
A smarter approach would start with a tiered system that classifies AI models and hardware by risk level and intended use. Frontier models built for general-purpose military or surveillance use might face strict limitations. But benign or beneficial tools — like those used in medicine, disaster response, or climate science — should move more freely. The Bureau of Industry and Security (BIS) should also create clear carveouts for non-commercial research, academic projects, and open-source development under defined safeguards.
Most importantly, any such framework must be developed in consultation with the broader AI community. That means working not just with defense and intelligence stakeholders, but with universities, startups, civil society organizations, and allies abroad. Only through open dialogue can we ensure export controls do their job without throttling innovation.
Leading by example, not exclusion
The US faces a pivotal moment in shaping its global AI leadership. It could move toward a more insular approach, relying heavily on restrictive measures in the name of national security. Or it could choose a path rooted in strategic foresight, one that safeguards American interests while maintaining vital connections to the broader innovation ecosystem.
We don’t win the AI race by locking the lab door. We win by being the best place in the world to build, test, and responsibly deploy transformative technologies. That means building coalitions, not barriers. It means crafting policy that reflects how science actually works today. And it means understanding that trust, not fear, is what will keep the US ahead.
Done right, AI policy can serve both security and progress. But if Washington mistakes hard decoupling for leadership, it risks falling behind in the very race it hopes to win. A smarter strategy acknowledges the global nature of discovery, adjusts to real-time trade disruptions, leans into alliances, and commits to rules that build trust — not just walls.