
New Technologies Require Thinking More Expansively About Protecting Workers

Alexandra Mateescu / Mar 6, 2024

Image: Clarote & AI4Media / Better Images of AI / Labour/Resources / CC-BY 4.0

Last June, the US National Labor Relations Board issued a ruling that broadened the factors for determining whether a worker is an independent contractor or an employee. These changes, which take effect this month, may open the door for many workers to gain legal recognition as covered employees. Many labor advocates hope that fixing the problem of misclassification in the gig economy can help shield workers from the brutal instabilities imposed by algorithmic management. Platform companies have fought hard to maintain a business model premised on the assertion that their tightly controlled workers are in fact self-employed businesses.

As critics have warned for years, the gig platform model brings together a host of abusive labor practices: subminimum wages and rampant tip theft; scant benefits and protections; the looming threat of arbitrary, automated account deactivations; and the use of algorithmic wage discrimination to pay each individual worker as little as they’re willing to tolerate.

However, these practices are far from unique to the gig economy. The kinds of data-intensive manipulations that companies like Uber have pioneered are in fact ubiquitous and have become the norm across a wide range of industries regardless of worker classification. Existing labor protections—including those available only to W-2 employees—are wholly inadequate in the face of challenges posed by data-hungry surveillance and AI technologies.

These issues are receiving increased attention. In a memo addressing the White House’s Blueprint for an AI Bill of Rights, the Department of Labor (DOL) rightly draws connections between the rise of surveillance and worsening workplace standards. Following the Biden administration’s Executive Order on AI, the DOL has noted the risks that artificial intelligence tools pose to job quality, as well as their potential to displace workers. The White House Office of Science and Technology Policy has also begun to investigate how employers surveil, monitor, evaluate, and manage workers.

Effectively furthering these goals, however, requires acknowledging the unprecedented scale and pace at which worker data extraction and tech-driven power asymmetries have become central to many business models. It also requires a more expansive understanding of the full range of harms workers experience, which extend beyond the narrower, quantifiable issues that have been the focus of recent policymaking. Regulators will have to foreground both of these realities.

In most workplaces, continuous, mass data collection has become both pervasive and mundane. Worker data is a speculative commodity, both in its sale and in its use to build AI systems. At the same time, AI technologies are often implemented from the top down, with little or no input from workers on issues that significantly affect workplace conditions.

Efforts to establish worker data rights grant workers some degree of control, but do little to challenge the broader systems that drive decision-making. Similarly, passing basic labor standards may not stop employers from continuing to leverage data as a cudgel against workers. For example, in New York, recent legislation that was supposed to secure a minimum hourly wage for delivery workers was quickly undermined by platform companies that simply found another way to pay workers less: by imposing more surveillance and punishing workers for being too slow.

As with technology in other areas, workplace AI and surveillance tools are often first introduced experimentally into industries with precarious, low-wage workforces, disproportionately harming workers of color and immigrants, who often have the fewest legal protections and avenues for representation. Today, we see this with call center workers whose data is being used to train their own chatbot replacements, with nurses picking up hospital shifts on gig apps to supplement low wages, with the vast subcontracted workforces that companies rely on to build artificial intelligence, and more.

Moreover, the harms workers are experiencing cannot always be articulated through the frame of concrete labor standards, like wages or workplace discrimination. Algorithmic harms can affect workers’ whole lives, and intersect with other structures that already target and exploit marginalized people. The growing use of networked doorbell cameras like Amazon Ring has intensified policing and racial profiling of delivery workers. Surveillance infrastructures in Amazon warehouses organize racist hierarchies between diverse floor workers and mostly white managers. There are also emerging questions of dignity and bodily autonomy, as actors and fashion models lose control of their likenesses, exposing them to everything from post-mortem AI recreations to fake, nonconsensual pornography to digital whitewashing.

In any given industry, technologies are amplifying and institutionalizing already-existing inequities, forms of exclusion, and social precarity—and the workers most harmed by AI systems often have the least power to challenge or shape the role of technology. These technologies must be understood and addressed as the labor issues they actually are, and workers must have a voice in decisions about how such systems are used.

In the face of the gig economy, attaining employee status has rightly been a goal for many app-based workers. But without addressing the root causes, which lie in rampant data extraction, employers will continue to entrench new mechanisms of exploitation and control. Regulatory efforts, whether federal or state, should envision more expansive worker rights and protections beyond the bare minimums that traditional employment confers.

Authors

Alexandra Mateescu
Alexandra Mateescu is a researcher in the Labor Futures program at Data & Society.
