What AI Policy Can Learn From Cyber: Design for Threats, Not in Spite of Them

Camille Stewart Gloster, Afua Bruce / Jul 3, 2025

Yasmin Dwiputri & Data Hazards Project / Better Images of AI / Safety Precautions / CC-BY 4.0

If you want to understand why regulatory guardrails can supercharge, not stifle, technological innovation, don’t look to theory. Look to cybersecurity. The field is, by definition, mission-critical: cybersecurity keeps our technical infrastructure resilient, protects financial institutions, and allows both individuals and businesses to leverage the internet safely. Cybersecurity methods must evolve quickly, or our critical infrastructure could be at risk.

For decades, cybersecurity has been a proving ground for innovation despite many constraints. It has faced decentralized architectures, hostile threat actors, a fragmented policy landscape, and sprawling systems beyond any one entity’s control. As with many technical fields before it, these challenges didn’t paralyze progress; rather, they drove people to invent new technologies and methods. And the innovations that emerged, like zero trust architecture, weren’t built in spite of policy pressure and hard constraints. They were built because of them.

While many are hailing the recent decision to strike the 10-year moratorium on state AI laws from the Senate’s budget bill as a step in the right direction, it’s far from the end of the debate. The instinct to preempt state action remains strong in Republican-controlled Washington, often cloaked as a desire to avoid a “patchwork” of regulation. But that patchwork, messy as it may be, is often where the real progress begins. Good policy doesn’t just keep bad tech in check. It makes better tech possible. And the antidote to a counterproductive patchwork is a federal baseline that sets a clear and consistent standard.

As we navigate a perception in Washington that guardrails stifle innovation, we should ask: What did we learn from cybersecurity? The answer should be obvious. Innovation didn’t die because of oversight. It flourished under it.

Cybersecurity is a case study in how innovation thrives not in regulatory vacuums but in thoughtfully constrained, collaborative ecosystems. Cybersecurity policies, such as the California Consumer Privacy Act, were written because policymakers, practitioners, community advocates, consumers, and businesses all acknowledged what was at stake. They recognized that as society became increasingly reliant on technology, guardrails were needed to direct how tools were developed.

Consider the golden child of modern cybersecurity: zero trust architecture. In the old model of network security, anyone who got inside a computer system’s digital boundary was assumed to be trustworthy. That model crumbled under the weight of cloud computing, remote work, and global supply chains, which gave attackers new ways in: stolen passwords, misconfigured cloud services, and malware hidden in software updates. Engineers could no longer control the perimeter, because the perimeter didn’t exist; the line between “inside” and “outside” the organization was gone. They could no longer control or fully see the systems they were building.

Zero trust, an architecture that treats every access request as untrusted until it is authenticated and authorized, didn’t happen in spite of this constraint; it happened because of it. Zero trust wasn’t just a workaround: it became a superior design model for a decentralized world, showing that well-designed, collaboratively crafted, technology- and context-informed policy constraints are prompts rather than barriers. They force engineers to think differently, more creatively, and more rigorously.
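For readers who want to see the shape of that idea, the short sketch below (in Python) shows a deny-by-default access check. The names in it (AccessRequest, verify_token, the POLICY table) are hypothetical illustrations for this essay, not any particular product's or standard's API.

```python
# A minimal sketch of the zero trust idea described above: every request is
# denied by default and must be authenticated and explicitly authorized,
# regardless of where on the network it originates. All names here
# (AccessRequest, verify_token, POLICY) are hypothetical illustrations,
# not any particular vendor's or standard's API.
from dataclasses import dataclass
from typing import Optional


@dataclass
class AccessRequest:
    token: str     # credential presented with this specific request
    resource: str  # what the caller wants to reach
    action: str    # e.g. "read" or "write"


# Hypothetical policy table: which identity may take which action on which resource.
POLICY = {
    ("alice", "payroll-db", "read"),
    ("build-bot", "artifact-store", "write"),
}


def verify_token(token: str) -> Optional[str]:
    """Stand-in for real credential verification (e.g. checking a signed token).
    Returns the authenticated identity, or None if the credential doesn't verify."""
    return {"tok-alice": "alice", "tok-build": "build-bot"}.get(token)


def authorize(request: AccessRequest) -> bool:
    """Deny by default: succeed only if the credential verifies AND policy
    explicitly grants this identity this action on this resource."""
    identity = verify_token(request.token)
    if identity is None:
        return False  # unauthenticated: denied, no matter where the request came from
    return (identity, request.resource, request.action) in POLICY


print(authorize(AccessRequest("tok-alice", "payroll-db", "read")))   # True
print(authorize(AccessRequest("tok-alice", "payroll-db", "write")))  # False: no grant
print(authorize(AccessRequest("bad-token", "payroll-db", "read")))   # False: not authenticated
```

Nothing in the check consults the caller's network location; being "inside" buys nothing, which is the inversion of the old perimeter model.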

Most importantly, zero trust didn’t ignore or evade constraints. It embraced them. It turned limitations into design principles. It was shaped by guidance from the National Institute of Standards and Technology (NIST), spurred by government procurement policies, and supported by industry standards. It thrived in a policy ecosystem that demanded better answers and gave engineers a new blueprint to build against.

This is exactly the kind of policy-technology feedback loop we need in AI.

The narrative that constraints kill innovation is both lazy and false. In cybersecurity, we’ve seen the opposite. Federal mandates like the Federal Information Security Modernization Act (FISMA), which required agencies to map their systems, rate data risks, and monitor security continuously, combined with state-level laws like California’s data breach notification statute, created the pressure and incentives that moved security from afterthought to design priority. The private sector didn’t flee from these requirements. It evolved to meet them and, in many cases, to exceed them.

Yes, federal leadership matters. But the idea that states must sit on the sidelines for a decade while Washington catches up is both strategically naïve and historically unsupported. States have long been the laboratories of democratic governance, including in cyber. Think of the California Consumer Privacy Act (CCPA), which forced companies nationwide to reckon with data rights. Or New York’s Department of Financial Services (DFS) cybersecurity regulations, which set a new bar for financial sector accountability.

These state-led efforts haven’t derailed innovation. They clarified expectations, set policy floors (not ceilings), and showed that governance can be iterative, flexible, and innovation-enhancing.

The same is true for AI. We’re already seeing AI systems shape hiring, housing, healthcare, and more. Too often they do so with opaque logic, little accountability, and disproportionate harm to the most vulnerable. Waiting a decade for federal consensus wouldn’t preserve innovation. It would preserve incumbency and inequity.

The irony is that the people who build AI, like their cybersecurity peers, are more than capable of innovating within meaningful boundaries. We’ve both worked alongside engineers and product leaders in government and industry who rise to meet constraints as creative challenges. They want clear rules, not endless ambiguity. They want the chance to build secure, equitable, high-performing systems — not just fast ones.

The real risk isn’t that smart policy will stifle the next breakthrough. The real risk is that our failure to govern in real time will lock in systems that are flawed by design and unfit for purpose.

Cybersecurity found its footing by designing for uncertainty and codifying best practices into adaptable standards. AI can do the same if we stop pretending that the absence of rules is a virtue.

We don’t need ten years of silence. We need active, iterative, multilevel governance that gives engineers something worthy to build toward. The future of AI won’t be defined by what it can do in a vacuum. It will be defined by what we choose to ask of it.

