Perspective

The Age of AI Anxiety — and the Hope of Democratic Resistance

Lam Tran / Oct 17, 2025

A digital illustration titled "A Rising Tide Lifts All Bots." (Rose Willis & Kathryn Conrad / Better Images of AI / CC 4.0)

In 2023, leading artificial intelligence researchers and executives called for a six-month pause on advanced model training, warning of “profound risks to society and humanity.” Governments around the world responded with summits, statements and frameworks — but little in the way of enforceable action. Since then, frontier labs have released ever-larger models, chipmakers like Nvidia have broken valuation records and AI has permeated nearly every aspect of daily life, from customer service chatbots to tools students use to cheat in school to mental-health apps. The White House’s AI Action Plan, while comprehensive in scope and encouraging in affirming United States leadership in AI development and deployment, reinforced the accelerationist trajectory: speed first, safeguards later. Amid the frenzy, one sentiment has crystallized across the public sphere: anxiety.

Unlike past waves of technological revolution, AI anxiety fuses economic insecurity, technical opacity and political disillusionment. AI-enabled automation is threatening jobs for white-collar workers and recent graduates, with layoffs and slow job growth happening at the same time as record corporate profits, eroding faith in the promise of shared prosperity. The technical “black box” problem — developers’ inability to explain or control model behavior — deepens that unease, especially as AI systems get integrated into decision-making, creative production, and interpersonal intimacy.

The storm is intensified by a broader societal collapse of trust. Institutions long tasked with producing and disseminating knowledge — government, science, media — now face historic polarization and skepticism. Social platforms that once promised empowerment have devolved into cycles of “enshittification”: declining quality, rising exploitation and algorithmic opacity. AI integration into business processes risks worsening this dynamic, outsourcing judgment without accountability. From the seemingly banal frustration with algorithmic error to corporate evasion that compounds workplace and consumer grievances, the public is forced to reckon with a transformational and disruptive technology — often without its consent or consultation.

Regulation is one mechanism governments deploy to address collective anxieties — changing the behaviors of economically and politically influential actors. However, the law only works when it can be enforced and felt by those it governs. In the case of AI, enforceability is precarious. The ongoing investment rush to build data centers and expand the energy grid to power AI is approaching the scale of capital spent during the 19th-century railroad boom — turning AI development into a financial bubble with strong vested interests against oversight. The pace and scale at which AI diffuses further outstrip bureaucratic capacity. These challenges raise serious doubts that government intervention alone can meaningfully address public concerns.

Divides among experts form an AI triad

The AI governance debate is fractured into three primary ideological camps, also called the AI triad, a framing coined by Harvard Law Professor Jonathan Zittrain: accelerationists, safetyists and skeptics. Each has a distinct worldview, shaped by different assumptions about AI’s transformative potential, how much power to entrust to machines and what should be done now.

Accelerationists view AI as an inevitable and potentially utopian force. In The Techno-Optimist Manifesto, Marc Andreessen, one of the most influential venture capitalists in Silicon Valley, frames AI development as both a moral imperative and a competitive necessity to assert economic, cultural and geopolitical leadership. This camp’s policy prescription favors pro-market measures with minimal oversight and voluntary industry standards. Their worldview was reflected in the proposed federal AI moratorium in US President Donald Trump’s “Big Beautiful Bill,” which would have prevented states from regulating AI for a decade.

Safetyists, by contrast, see existential catastrophe on the horizon. For them, the core issue is how to halt or slow AI development until it can be made interpretable and controllable. Figures such as Daniel Kokotajlo and Eliezer Yudkowsky argue that the rapid pursuit of artificial general intelligence (AGI) could result in catastrophe for humankind. Their regulatory vision demands urgent restraint: licensing schemes for frontier models, mandatory pre-deployment audits and an international regime to coordinate safety standards. For example, California’s SB 1047 would have compelled developers of large AI models to submit safety plans, undergo independent audits and be held liable for harms — but it was vetoed by Governor Gavin Newsom (D).

Skeptics, meanwhile, reject the premise of either revolutionary potential or apocalypse. Scholars like Arvind Narayanan and Sayash Kapoor argue that AI is not exceptional but “normal” — like the internet or electricity, transformative but still subject to societal and governmental pacing. They believe AI’s short-term impacts — bias, fraud, corporate concentration, environmental costs — are more pressing than hypothetical superintelligence.

This group’s policy agenda calls for pragmatic adaptation: expanding the Federal Trade Commission’s consumer protection mandate to AI, implementing bias audits in hiring (as in New York City’s Local Law 144) and applying antitrust laws to AI-industry consolidation.

Accelerationists are outrunning regulators, creating a governance gap

The European Union’s AI Act, conceived as the world’s first comprehensive AI framework, has become a case study in how accelerationist pressure can undermine well-intentioned governance efforts. Aspiring to achieve the “Brussels Effect,” as the General Data Protection Regulation (GDPR) did in setting global standards for data privacy, the AI Act aimed to influence international companies and governments to adopt a similarly stringent regulatory approach. Yet as it nears implementation, lobbying from Big Tech, pushback from member states and US trade pressure have turned the Act into a diluted compromise. What was once billed as a landmark in global AI governance now risks codifying corporate priorities, as political leaders are increasingly swayed by accelerationist arguments.

In contrast to Brussels’s more heavy-handed regulatory approach, Washington has historically embraced a pro-market stance. This approach has given the US tech sector unprecedented financial and political power, allowing a handful of firms to evolve into quasi-sovereigns with influence that rivals nation-states. Historian Yuval Noah Harari’s warning — that AI could concentrate power and erode democracy — is now materializing in real time in the US as Big Tech grows into Big AI with little constraint from the state.

Since Trump’s return to the presidency, the relationship between Silicon Valley and Washington has become overtly transactional and personality-driven. The US AI Action Plan treats AI as a race to be won rather than a technology to be governed. Beyond the White House, industry lobbying has intensified across federal and state levels, ensuring that the contours of AI policy remain shaped by those poised to profit most from its acceleration.

The result is a major disconnect: while the public and civil society broadly support stronger AI oversight, both state and corporate leaders are sprinting toward AGI with little accountability.

Toward an unlikely coalition

Divisions between safetyists and skeptics have often stalled concrete regulatory action, leaving a vacuum for accelerationist agendas to dominate. The near passage of the federal AI moratorium was alarming. Yet its last-minute defeat also revealed the potential of an unlikely coalition to safeguard the public interest in AI.

Historian Thomas J. Sugrue defines unlikely coalitions as alliances built not on shared ideology, but on shared stakes in times of crisis. The 1940s United Packinghouse Workers of America offers an instructive precedent: Black and white workers, divided by racial and physical barriers that employers exploited, forged a cross-racial alliance that fought for wage increases and improved working conditions.

A similar “unlikely coalition” can and should emerge in AI governance among researchers, movement lawyers and movement technologists who recognize that the accelerating AI trajectory — opaque, profit-driven, and weakly governed — poses a systemic risk that transcends ideology.

The outlines of this convergence on particular issue areas are emerging in California. After Newsom vetoed SB 1047, a sweeping bill focused on frontier models and aligned with the safetyist vision of strict pre-deployment control, the governor convened the Joint California AI Policy Working Group to chart policy alternatives. Their recommendations shaped SB 53, which represents a pragmatic compromise among the three AI governance camps.

The bill acknowledges existential risk (a safetyist concern), strengthens transparency and whistleblower protections (a skeptic and safetyist demand), eases compliance burdens compared to SB 1047 and invests in a fund for public computing (an accelerationist accommodation).

The politics surrounding SB 53 further underscore this fragile but promising coalition bridging the safetyist-skeptic divide. Major industry players opposed the bill. Anthropic, however, along with academic and public-interest groups, endorsed it as a balanced framework for responsible innovation. Even opponents of SB 1047, like former White House AI policy adviser Dean Ball, have praised SB 53 as a more balanced and technically realistic approach to governing frontier AI.

The next frontier for this pro-AI governance coalition lies in addressing the over-concentration of power — Big AI itself.

Amazon, Microsoft and Google control roughly two-thirds of global cloud computing — the infrastructure on which all advanced AI models depend. These tech giants are undertaking vertical integration as an AI-integrated economy begins to take shape: hyperscalers own the chips, the clouds and stakes in model developers like OpenAI and Anthropic. These so-called “partnerships” replicate Big Tech’s historical monopolization tactics, from Facebook’s purchase of Instagram to Google’s dominance of digital ads.

History offers a roadmap: Congress once forced railroads to divest from coal, required telecoms to interconnect networks and separated investment from commercial banks. The same principles should govern AI. Chips must be independent from clouds, and clouds from models. Regulators should reject cross-ownerships that entrench control over the digital economy.

The coalition forming around these goals of promoting competition and protecting workers’ and consumers’ rights cuts across partisan lines. Safetyists and skeptics are finding unexpected allies, from progressive Democrats focused on labor equity to right-wing populists suspicious of corporate power. Former White House strategist Steve Bannon helped mobilize opposition to the federal AI moratorium, framing it as a giveaway to Silicon Valley. Sen. Josh Hawley (R-MO) has proposed algorithmic-audit mandates and copyright protections for artists, while Sen. Bernie Sanders (I-VT) has proposed a “robot tax” to redistribute productivity gains from automation to displaced workers. Their convergence suggests a nascent bipartisan agenda, with politicians and influential voices on both sides starting to treat AI governance not as a culture war, but as a struggle over economic equity and institutional control.

Anxiety about AI, if left unchecked, leads to paralysis. But when channeled into strategic coordination, it can become a force for change. The governance of AI should not be dictated solely by Silicon Valley or an administration intent on winning the AI race at all costs. It should be shaped by civil society, recognizing AI as a technology with transformative potential — both good and bad — that demands democratic mobilization and negotiation, not technological surrender. History shows that unlikely coalitions can redirect runaway systems. In the case of AI, they may not just be possible — they may be our best, and only, hope.

Authors

Lam Tran
Lam Tran is a Washington-based analyst focusing on tech policy and U.S.-Asia relations. She was a Summer 2025 AISST Fellow at the Berkman Klein Center for Internet & Society at Harvard University and the Cambridge Boston Alignment Initiative, where she focused on AI governance.
