Fragmented AI Laws Will Slow Federal IT Modernization in the US
Daniel Castro / May 30, 2025
Daniel Castro is vice president at the Information Technology and Innovation Foundation (ITIF) and director of ITIF’s Center for Data Innovation. ITIF is a nonprofit, nonpartisan research and educational institute whose supporters include corporations, charitable foundations, and individual contributors.

Congressman Jodey Arrington (R-TX) is pictured with Speaker of the House Mike Johnson (R-LA) at a press conference discussing the House passage of the "One Big Beautiful Bill Act." (March 29, 2025, X)
The United States Congress is considering a 10-year moratorium on state and local AI laws as part of the recently passed House budget reconciliation package. This comes at a critical time: A growing patchwork of conflicting state regulations is threatening to stall innovation, drive up compliance costs, and undermine federal efforts to modernize IT systems. A temporary pause on state enforcement would give policymakers time to create a national framework that supports innovation and ensures federal agencies can access the best commercial AI tools available.
States are advancing hundreds of new proposals to regulate AI that differ widely in scope and substance. This fragmented landscape not only burdens startups and tech firms but also poses an emerging obstacle to federal government IT modernization. The proliferation of inconsistent state rules risks undermining US leadership in the global AI economy. The federal government relies heavily on the private sector for the technology it uses to deliver services and execute critical missions. Indeed, the Federal Acquisition Streamlining Act (FASA) directs federal agencies to prioritize commercial off-the-shelf (COTS) technology over custom-built software when it can be adapted for public sector needs.
Moreover, President Donald Trump signed an executive order in April reaffirming his administration’s commitment to procuring commercially available products and services “to the maximum extent practicable” in an effort to reduce unnecessary government spending. A reliance on COTS technology, however, depends on a strong, scalable, and competitive private market, especially for emerging technologies like AI. When individual states create conflicting regulatory requirements, they distort that market, limit what developers can build at scale, and ultimately reduce the quality and availability of tools federal agencies can adopt.
The Department of Veterans Affairs illustrates this challenge. As one of the largest users of AI in the federal government, the VA uses commercial AI tools to improve cancer detection, manage chronic diseases, and streamline patient care. Many of these tools come from companies operating in California and Colorado, states that have taken aggressive approaches to regulating AI. As state rules diverge and multiply, companies must devote more time to compliance and less to innovation. In addition, they often face the choice of either tailoring their products to comply with the most restrictive policies or retreating from certain markets altogether. Either option makes it harder and more expensive for federal agencies to procure and deploy the technologies they need to modernize.
The effect is particularly acute for AI applications that must work across jurisdictions. Facial recognition technology provides a useful example. The Department of Homeland Security relies on facial recognition to match traveler identities at airports nationwide. Developers have improved these systems to ensure low error rates across demographics and enable accurate matches with older images, such as those from expiring passports. These same technologies are also used in commercial settings, including hospitals, apartment buildings, and financial institutions. If developers had faced a fragmented legal landscape from the start—with different standards in every state—they would have struggled to improve performance, secure investment, or reach the scale necessary to meet federal needs. Instead, federal use cases benefited from a unified market that allowed technology to advance quickly and responsibly.
The stakes extend beyond better service delivery. Agencies like the Department of Defense and Department of Energy rely on cutting-edge AI to support national security, scientific leadership, and global competitiveness. When states introduce sweeping or inconsistent AI mandates, they do not just complicate compliance for vendors—they disrupt the very supply chain the federal government counts on to scale innovation. These rules can tilt the playing field in favor of companies with large compliance teams, not necessarily those building the most secure, accurate, or mission-ready systems. The result is a weaker foundation for public-sector AI deployment and a loss of momentum in areas where the US cannot afford to fall behind.
Congress has an opportunity to address this growing challenge. The budget reconciliation package just passed by the House includes a provision imposing a 10-year moratorium on state and local enforcement of AI laws. While not a permanent preemption, the moratorium would offer a critical pause, giving federal policymakers time to study the technology, develop appropriate guardrails, and ensure a consistent regulatory environment that supports innovation. The same bill would also invest $500 million through the Department of Commerce to modernize federal IT systems using commercial AI and automation technologies. These efforts are tightly linked. Without a stable national framework for AI, federal agencies will be forced to procure AI tools from a much less robust marketplace.
Some lawmakers have signaled concerns about whether the moratorium meets the requirements of the Byrd Rule, which governs what can be included in a budget reconciliation bill. But this provision directly supports federal IT modernization. When policies restrict the geographic market for technologies with high fixed costs and low marginal costs—like AI systems—they raise the average cost per user by limiting scale. This can lead to higher prices, reduced access, and less incentive for innovation. The moratorium is not an extraneous provision—it is an essential element to ensure state and local laws and regulations do not derail the effectiveness and efficiency of the very AI tools Congress seeks to fund.
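The scale argument can be made concrete with a stylized average-cost model. The formula and dollar figures below are illustrative assumptions for the sake of the example, not estimates drawn from the bill or from any specific vendor:

\[
\mathrm{AC}(N) = \frac{F}{N} + c
\]

Here F is the fixed cost of developing a system, c is the marginal cost of serving one additional user, and N is the number of users reachable under a given regulatory regime. With hypothetical values of F = $100 million and c = $1, a national market of 10 million users yields an average cost of about $11 per user, while a single-state market of 1 million users yields about $101 per user, roughly nine times higher. Shrinking the addressable market does not change what the technology costs to build; it only spreads that cost across fewer users.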
To unlock the potential of AI in government, policymakers should ensure that federal agencies have access to the best tools the private sector can offer. That requires a unified national approach to AI regulation. A fragmented market shaped by dozens of state-level rules will not deliver the innovation, security, or efficiency federal agencies need. Including the moratorium would prevent that outcome and secure the foundation for long-term US leadership in AI.