Evaluating Trends and Challenges in State Regulation of Workplace Technologies
Mishal Khan, Annette Bernhardt / Nov 3, 2025
Mishal Khan is a senior researcher and Annette Bernhardt is director of the Technology and Work Program at the UC Berkeley Labor Center.
Until recently, policy discourse in the United States has been contradictory when it comes to regulating digital technologies in the workplace. Concerns about job automation and invasive surveillance dominate the public discussion about AI, and yet policymakers have rarely made worker impacts a central concern, more typically focusing on consumer privacy and the existential risks posed by frontier models. But this has started to change over the past several years — and especially this year — with the introduction of a slew of bills that address a range of harms to workers from data-driven workplace technologies.
In fact, workers are arguably one of the largest groups impacted by emerging technologies. With the advent of big data and artificial intelligence, employers in a wide range of industries are increasingly capturing, buying, and analyzing worker data, electronically monitoring workers, using algorithmic management, and automating tasks and jobs. The use of data-driven technologies has been documented across multiple sectors including grocery stores, hospitals, call centers, trucking, fast food, hotels, entertainment, video gaming, warehouses, and the public sector. While these technologies have the potential to be beneficial, workers and researchers are increasingly reporting negative impacts such as work intensification, automated firing and discipline, race and gender discrimination, invasive surveillance, profiling of union organizers, and job automation and deskilling.
In response, unions and other worker advocates have made significant progress in developing a portfolio of policy concepts to regulate employers’ growing use of AI and other digital technologies in the workplace. In this article, we analyze key legislative trends, give examples of new policy concepts, and end with a discussion of challenges and opportunities for worker advocates going forward.
Trends and updates
The 2025 state legislative session has been something of a watershed moment for tech and work policy, with the introduction of several hundred new bills and the passage of a number of key laws. We recently completed a major overhaul of the UC Berkeley Labor Center’s comprehensive policy guide, which now covers over 350 technology-focused bills and laws that either were introduced by unions or directly impact workers. We encourage the reader to visit this guide for in-depth coverage and analysis.
Here we highlight three significant trends, mainly discussing bills because this is such a new policy area. Still, given the absence of federal legislation and an overall anti-regulatory mood in state legislatures, the progress we have seen this year is remarkable.
1. Electronic monitoring and algorithmic management
This year, unions in California, Massachusetts, and several other states put forth ambitious bills establishing a broad regulatory framework to rein in some of the most pernicious harms that digital technologies are causing workers. These bills contain both electronic monitoring and algorithmic management provisions covering all industries, firms of all sizes, and all workers – both employees and independent contractors. They also include expansive definitions of “automated decision-making systems” covering technologies that either replace or assist human decision-making, ensuring that employers cannot evade liability by claiming their use is not covered.
These bills target harmful practices such as robo-firing of workers, long documented by Amazon and Uber workers; surveillance of workers for protected activity like organizing; and making predictions and decisions about workers based on protected statuses such as race and pregnancy. The California state legislature passed the No Robo Bosses Act, or SB 7, in September this year. Although it was vetoed, it was one of the first bills of its kind to make it to the governor’s desk in any state.
Taken together, these bills establish a number of important regulations on employers’ use of algorithmic management. First, many ensure that workers receive detailed notice when an employer uses an algorithm to make an employment decision about them, with the right to access information about the data and models used. Second, many of these bills also require final employment decisions to be made by a human. Third, they often give workers the right to appeal decisions made about them. And finally, some of these bills prohibit certain practices such as predictive worker profiling and individualized wage setting.
Similarly, many of these bills would require employers to give workers detailed notice of electronic monitoring, including information on how they intend to use the collected data. Another common provision prohibits employers from making employment decisions that rely solely on electronic monitoring data and places limits on the permissible uses of electronic monitoring — including prohibiting the monitoring of workers when they are off duty.
None of these bills passed this year, but their introduction signals an ambitious agenda on the part of unions and sets an important precedent for tech and work policy moving forward. A related set of AI bills focuses on discrimination in a broad range of sectors and requires transparency and impact assessments; the 2024 Colorado AI Act is one example.
2. Automation
While fears about automation and job losses are not new, today’s digital technologies and their expanded capabilities have instigated a new wave of concern. In response, this year we saw the acceleration of a trend that started in 2024: the introduction of bills prohibiting digital technologies from replacing specific occupations, including teachers, media workers, mental health professionals, community college faculty, nurses, healthcare professionals, court reporters, retail workers, translators, and drivers.
These proposals differ in their approach. A bill in Texas, for example, outright prohibits the use of AI to replace or supplement the role of teachers in classrooms. The New York FAIR News Act requires employers to obtain consent and provide the opportunity to bargain before using workers’ creative output to train a generative AI system – protecting workers from being forced to train their replacement, which is one of the threats that generative AI poses to a wide range of workers. In yet another example, unions have sponsored bills that would prohibit companies from operating autonomous commercial trucks without a human operator in the vehicle.
Several bills of this type were signed into law. Oregon and California enacted laws prohibiting companies from advertising chatbots as licensed nurses or healthcare professionals. Illinois now prohibits the replacement of both mental health workers and community college faculty by AI. A law in Nevada prohibits the replacement of mental health counselors in schools with AI. At the city level, Long Beach, California passed an ordinance limiting the number of automated checkout counters in retail stores.
Importantly, this year we also saw a significant number of industry-regulation bills that were not focused on worker impacts, but rather on facilitating the adoption of AI. For instance, dozens of bills created regulatory structures that would allow companies to deploy driverless autonomous vehicles in the state. Other variants include bills authorizing the use of AI in translation services or in classrooms. Another set of bills creates weak, disclosure-only requirements when generative AI is used in specific industries, such as publishing, legal settings, and healthcare.
3. Human in the loop
Even when digital technologies do not automate entire jobs, they can still replace specific tasks, potentially de-skilling workers, leading to work intensification, eroding worker autonomy, and creating harmful impacts for the public. An important new policy approach this year was the introduction of scores of bills that establish rules for how workers interact with technology, ensuring that a human is always in the loop and has ultimate decision-making authority.
However, human-in-the-loop requirements vary significantly in their scope. They range from simple human review mandates to more robust provisions that ensure workers are not simply signing off on algorithmic recommendations, but are rather fully in command of the digital technologies they are working with.
Some of these bills (such as those covering media workers and nurses) contain language specifying that workers must be able to reject, modify, or override any outputs or recommendations generated by digital technologies without fear of retaliation or discipline — thus protecting workers’ professional scope of practice.
Similar concepts are also showing up in healthcare, critical infrastructure, criminal justice, and the public sector. In response to growing concern around the use of AI to deny health insurance benefits, dozens of states introduced versions of a 2024 California law stating that only a licensed physician, not an algorithm, can issue final adverse health benefit decisions. In the public sector, a number of states introduced and passed bills requiring human review when an algorithm is used to make decisions about benefits eligibility, for example.
Beyond specifying varying levels of human agency and oversight when working with AI systems, these bills often contain a host of additional requirements, such as reporting by employers of AI use to government agencies; maintaining a documentation trail to ensure accountability; mandating the provision of sufficient training and resources to workers; and ensuring that decision-making remains in the hands of licensed professionals to protect against deskilling. There are also important questions about how liability for harms should be attributed when workers work alongside digital technologies, an area that continues to evolve.
Challenges for the future of tech and work policy
As important as this year has been for moving worker impacts into the tech policy arena, there are clearly challenges ahead for unions and other advocates. Analysts have documented an unprecedented increase in lobbying by tech companies against bills regulating AI, including those focused on protecting workers. As a prime example, this year the California Privacy Protection Agency (CPPA) passed significantly weakened regulations around automated decision-making in response to opposition from the tech sector. Similar battles have been playing out over the regulation of chatbots. And of course, tech companies continue to advocate for federal pre-emption of state regulation of AI, which is especially worrisome to unions, given that states (and cities) have long been leaders in setting strong labor standards.
Worker advocates also face specific headwinds. As noted above, there is growing legislative activity to set industry-specific standards for the adoption of AI, often without input from labor or attention to work impacts. The danger is that deployment reverts to default scenarios of exploitation or automation by employers, even when doing so is not in the interest of the public, workers, or even the company’s productivity. With continued federal inaction, we expect an escalation in this type of AI-enabling regulation that codifies weak protections and undermines the work done by advocates.
There is also the challenge that much of the current discussion about AI takes automation as inevitable – that the goals and trajectory of AI model development are inviolate, with job loss as the only possible outcome. Even the language we use (“AI is going to automate all our jobs”) is telling, because there is no mention of the actors themselves — the companies that design the tech, the venture capitalists that fund it, the vendors that develop applications for the commercial market, and the employers that implement it. For worker advocates (and indeed for anyone who is focused on regulating AI), this profoundly defeatist and disempowering narrative represents one of the biggest challenges to winning good tech policy.
But 2025 also showed that there are significant opportunities to build common cause between unions and civil society. Many of the dozens of human-in-the-loop bills that we reviewed were sponsored not by unions but by civil society groups, signaling an emerging societal consensus across labor, professional associations, and consumers that key functions such as education, health care, and government are vital for the public good and must be performed by humans. And in California, a robust coalition of labor, privacy, and civil society groups has joined forces over several years, for example to fight weak data privacy regulations.
Ultimately, workers need more than good laws. They deserve a seat at the AI table, participating in decisions over which technologies are developed, how they are used in the workplace, and how the resulting productivity gains are shared. This will require a range of strategies, including the regulation of AI developers, investing in pro-worker R&D, and a significant increase in workers’ right to organize into unions and bargain collectively over new technologies. But establishing strong legal standards regulating employers’ use of workplace technologies is a critical bedrock strategy to ensure that workers are able to thrive in the 21st century economy.