Weaponizing AGI: How Speculative Futures Undermine Worker Protections

Natalia Luka / Jun 10, 2025

This piece is part of “Ideologies of Control: A Series on Tech Power and Democratic Crisis,” in collaboration with Data & Society. Read more about the series here.

Alina Constantin / Handmade A.I / CC BY 4.0

Last week, Anthropic CEO Dario Amodei went on the record saying AI could wipe out up to half of all entry-level white-collar jobs in the next one to five years. Launching a thousand newspaper headlines with a few choice words, Amodei warned that “the broader public and politicians, legislators” were not “fully aware” of the sweeping changes to come in fields such as technology, finance, and law.

To Amodei’s point, federal and state governments have been slow to recognize this potential harm or to pursue legislation specifically aimed at re-skilling and supporting workers in the professions most vulnerable to disruption from AI tools. Jobs such as copy editor, translator, and call center worker, for example, are already being cut as some AI systems offer an automated replacement.

However, the rhetoric of an impending, hyper-capable artificial general intelligence (AGI) has itself been used to justify unfounded and potentially harmful policies within both corporations and governments. Specifically, it has become a tool for two worrying trends: (1) politically and economically motivated layoffs framed as inevitable AI progress, and (2) legislative and regulatory inaction on current harms to workers in favor of sweeping deregulation to speed along innovation.

Layoffs disguised as progress

A March 2025 article from The Atlantic highlights this dynamic. Thomas Shedd, a former Tesla engineer and Elon Musk’s pick to lead the Technology Transformation Services, the IT arm of the General Services Administration, offered the following: “As we decrease [the] overall size of the federal government, as you all know, there’s still a ton of programs that need to exist, which is a huge opportunity for technology and automation to come in full force.” Shedd’s framing reflects a broader narrative among AI supporters that embraces automation as a social good.

The promise of AGI has been used to justify job cuts in the service of technological progress, including the nearly 60,000 federal employees who have been laid off, and another 76,000 offered buyouts. The phenomenon is also taking place in the private sector. Salesforce CEO Marc Benioff, for instance, said in February that his company stopped hiring engineers because AI agents could partner with the existing workforce to get the job done.

Under the hood, however, these decisions often serve other agendas. The Trump White House, for example, has pursued an open plan of defunding independent federal agencies that fail to align with its policy objectives and cutting staffing at others. “We want to put them in trauma,” said Russell Vought, former Director of the Office of Management and Budget under Trump, in a speech last year. “We want their funding to be shut down so that the EPA can't do all of the rules against our energy industry because they have no bandwidth financially to do so.” Reporting on the work of the Department of Government Efficiency (DOGE) shows the group has deployed AI primarily to analyze sensitive data and surveil federal employees.

In Benioff’s case, the decision to hold engineering headcount flat follows years of public pressure from activist investors to increase profit margins. Meanwhile, research on the use of generative AI tools by software developers is mixed on their overall benefit. The largest experimental study to date found a 26% average increase in weekly completed tasks among developers at three different companies, but no relationship between the use of these tools and project completion. “A less optimistic interpretation…is that developers may engage in more trial-and-error coding,” the study’s authors wrote. “Such a change in coding style could lead to lower-quality code in the long run and undermine efficiency gains in the quantity of code.”

Policy avoidance in the name of innovation

Just as the promise of AGI has been used to justify shrinking workforces, it has also served as an argument for delaying or avoiding near- and medium-term regulatory protections that could tangibly benefit workers. If enacted, President Trump’s “Big, Beautiful Bill,” passed by the House in May, would jeopardize most of the existing protections workers have against AI by imposing a 10-year moratorium on state-level regulation of AI. It is not yet clear what might take its place, though comments by Republican lawmakers suggest that any federal policy would prioritize innovation over strict regulation.

To date, the vast majority of legislation protecting workers against present-day harms of AI technologies, such as workplace surveillance, algorithmic discrimination, and automated firing, has taken place at the state level. A 10-year moratorium on state AI laws would be dangerous not only because it would revoke existing worker protections, but also because state-level legislation provides a critical testing ground for future federal policies. Without the ability to experiment at the state level, eventual federal policy would be made on less evidence, with far greater ramifications and higher risk for the American economy as a whole.

Indeed, one of the central challenges in crafting policy around AI and the workforce is that the evidence of AI’s actual impact remains limited and uneven. While some studies report productivity gains between 10% and 30% on average, they also reveal what researchers call a “jagged technological frontier.” AI performs well in structured, repetitive tasks, but still lags behind humans on tasks requiring more nuanced reasoning and communication. Productivity gains are also not evenly distributed across professions or among individuals, with the greatest gains tending to accrue to younger, less experienced workers.

Ultimately, the recent wave of job cuts and deregulation appears to be driven less by actual advances in AI than by the belief systems surrounding it.

While it is critical to prepare for the effects of AI on employment and invest in reskilling programs, this effort should be grounded in present-day evidence rather than speculative futures. Policy-making around the future of work must come with a real focus on where the productivity gains are happening, who is being impacted, and how. Absent specificity on these points, and set against a backdrop of mass panic, we risk either inaction or misdirected action on some of the most important issues of our time.

Authors

Natalia Luka
Natalia Luka is a Ph.D. candidate in Sociology at the University of California, Berkeley studying worker voice and AI and a researcher at the Berkeley AI Research Lab, Responsible AI Initiative. Previously, she was a Dissertation Scholar at the Washington Center for Equitable Growth, a Digital Ethic...
