The US Is Fighting for Control of AI. It Would Be Better Off Building Standards.
T.J. Pyzyk / Apr 29, 2026
In this June 13, 2012, file photo, Rod Beckstrom, president of the Internet Corporation for Assigned Names and Numbers (ICANN), points from behind a podium during a speech in London on expanding the number of domain name suffixes. (AP Photo/Tim Hales, File)
The drama playing out between the Pentagon and the AI firm Anthropic reflects a genuinely novel scenario: the US government is on the outside looking in on the most transformational technology of the decade.
Amid ongoing legal battles over the Trump administration’s designation of Anthropic as a national security supply chain risk (a response to the company's refusal to grant the Pentagon unrestricted access to its models), President Trump appears to be softening his stance. For now, however, the technology is in the hands of extremely well-funded, mature private organizations that can live without US government contracts. Rather than trying to strong-arm its way in, the Trump administration should be building influence in AI the same way its predecessors established lasting influence over the internet.
The US cannot afford to miss the opportunity to shape the standards and governance architecture of AI’s future. The economic stakes for investors and for the country’s economic health are not abstract. The largest sources of value creation in the US economy over the last 30 years—enterprise software, social media, cloud infrastructure—were built on top of US-led internet interoperability standards. The leader in AI standards will be best positioned to capture the value of AI over the next 30 years.
The internet’s center of gravity
Unlike with the internet, the US government played no meaningful role in building the commercial AI systems that now define the frontier. The internet emerged from foundational protocols developed through research funded by the Pentagon’s Defense Advanced Research Projects Agency (DARPA) in the 1960s and 70s. The network infrastructure that became the commercial internet was built on that government-funded backbone. While broadly distributed, the internet has always had a structural center of gravity: domain names and Internet Protocol (IP) addresses. Whenever your computer encounters a domain name, it must look the name up in a directory to find the corresponding IP address. Someone needs to maintain that master list and allocate the numbers, and for the internet's first three decades, that someone was the US government.
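That lookup is visible from any machine today. As a minimal sketch using only Python's standard library resolver (which ultimately consults the same global directory of names and numbers described above):

```python
# Resolve a domain name to IP addresses via the system's DNS resolver,
# which queries the global directory of names and numbers the article
# describes. Standard library only.
import socket

def resolve(hostname: str) -> list:
    """Return the IP addresses the DNS directory maps this name to."""
    infos = socket.getaddrinfo(hostname, None, proto=socket.IPPROTO_TCP)
    # Each entry is (family, type, proto, canonname, sockaddr);
    # the address itself is the first element of sockaddr.
    return sorted({info[4][0] for info in infos})

if __name__ == "__main__":
    for addr in resolve("example.com"):
        print(addr)
```

Every such query depends on someone maintaining the authoritative mapping at the root, which is the control point the US government held until 2016.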
The US rarely misused this power, with one notable exception—in the early 2000s, when the George W. Bush administration blocked the creation of .xxx top-level domains. While blocking the creation of pornographic internet addresses may seem pedestrian today, in the Bush years, it was enough to forever cast doubt on the independence of internet governance bodies, giving fuel to the efforts of adversarial countries such as Russia and China to shift internet governance away from US influence.
The .xxx episode tarnished the US government’s authority, but the broader architecture held. The government eventually gave up control to the global multi-stakeholder community in 2016, but only after the governance architecture had matured to the point where the transition preserved the norms the US had established.
AI has no center of gravity
AI has no such structural center—at least not yet.
It’s become popular to argue that compute, or raw processing power, is the natural lever for AI governance. While restrictions on compute, such as chip export controls, help rein in access to the frontier of AI development, they have limited effectiveness in containing diffusion below it. As open-weight models proliferate and techniques develop to run models more efficiently, compute concentration weakens as a foundation for governance. Measuring influence through compute leads to an arms race rather than the judicious building of governance architecture.
Model weights are controlled entirely by frontier labs, which, unlike so many revolutionary computing technologies before them, are not the products of DARPA innovation. What we are witnessing is the US government clumsily trying to assert control using the crudest lever it has: government contracts. Even if you believe the government should have greater control over a frontier technology, watching it use the full weight of its authority to bully a private company into submission is uncomfortable. It reeks of overreach. In the end, however, the tech companies are the ones with the power. Anthropic and OpenAI can leave the US. It would be costly and onerous, but they can.
Standards, not strong-arming
So, what does the US government do?
We are in the early days, and more consequential models will continue to emerge. Enterprise AI is in its infancy. Agents will need to communicate across organizational borders, authenticate on behalf of users, and transact in a common language. The leading candidate for a coordination chokepoint—the closest thing to a directory of names and numbers that AI might produce—is agent interoperability: the protocols governing how AI systems interact across boundaries.
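No such interoperability standard exists yet, so any concrete shape is speculative. Purely as a hypothetical sketch of what a cross-boundary agent message might need to carry (every field name here is invented for illustration), the three requirements above map onto something like:

```python
# Hypothetical sketch of a cross-organization agent message. No such
# standard exists; all field names are invented to illustrate the three
# needs named above: acting on a user's behalf (identity), a shared
# vocabulary of actions (common language), and transaction semantics.
import json
from dataclasses import dataclass, asdict

@dataclass
class AgentMessage:
    sender: str      # which principal the agent acts for
    recipient: str   # the agent across the organizational boundary
    intent: str      # drawn from a shared vocabulary of actions
    payload: dict    # the substance of the transaction

    def to_wire(self) -> str:
        """Serialize to a format any conforming party can parse."""
        return json.dumps(asdict(self), sort_keys=True)

msg = AgentMessage(
    sender="agent:procurement@buyer.example",
    recipient="agent:sales@supplier.example",
    intent="request_quote",
    payload={"sku": "WIDGET-9", "quantity": 100},
)
print(msg.to_wire())
```

Whoever defines the real equivalents of these fields, and the registries behind them, holds the closest thing AI may have to the directory of names and numbers.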
The internet existed on the periphery of most people’s awareness until consumer email came along, launching internet usage into the mainstream. Besides the nerdiest guy you know running OpenClaw on a Mac Mini, AI has yet to have its You’ve Got Mail moment. The question is whether similar dynamics will emerge and whether, as was the case with the Internet Protocol, the US will lead in establishing the standards that enable them.
The US is reasonably well-positioned. The Trump administration’s rebranding of the Biden-era National Institute of Standards and Technology (NIST) AI Safety Institute as the Center for AI Standards and Innovation (CAISI) could prove prescient, despite being contentious at the time. CAISI has already launched an AI Agent Standards Initiative, which will hopefully serve as the primordial birthplace of AI interoperability. The primary US standards organization serves as the secretariat for the principal international AI standards committee. The pieces are there.
But standards need buy-in, not just domestically but internationally. The US can build on its natural position as home to the leading frontier labs and on its history of leading internet standards development. But the attacks on Anthropic—and with them, the specter of future assaults on other US firms—risk the same kind of credibility damage that the Bush administration’s .xxx intervention inflicted on internet governance.
Standards cannot be infected by an administration’s performative, my-way-or-the-highway intimidation, and countries evaluating whether to adopt US-led AI infrastructure are watching the Anthropic episode unfold. Undue industry influence risks contaminating the reputation of institutions like NIST and the commercial viability of US AI labs abroad.
If President Trump wants to lead the world in AI, including in support of our military capabilities, he should double down on constructive measures: increase government investment in AI research, bolster CAISI, and work with frontier labs to build toward AI’s moment of interoperability. The window for this kind of architecture-shaping is finite and remarkably brief. The institutions, protocols, and norms taking shape today will determine how AI is governed and who is best situated to capture its value creation for decades.
The US is still in a favorable position, but the question is whether the Trump administration will choose to set standards rather than strong-arm AI companies before the architecture is built by someone else.