Why Gig Platform Wage Theft Is a Governance Crisis
David Sathuluri / Jan 13, 2026
The Uber and Lyft logos displayed on a phone and backdrop. (Stock Catalog)
In May, the nonprofit Human Rights Watch released a damning investigation that should alarm policymakers worldwide.
The group argued that the seven largest gig platforms in the United States — Amazon Flex, DoorDash, Favor, Instacart, Lyft, Shipt and Uber — are using algorithmic systems not simply to manage workers but to systematically extract labor while evading fundamental legal obligations. The report, “The Gig Trap: Algorithmic, Wage and Labor Exploitation in Platform Work in the US,” exposes what amounts to a crisis of AI governance: algorithms have become the primary instrument through which corporations deprive workers of minimum wage protections, disable collective bargaining, and execute instant terminations without due process.
Yet the response in Washington and state legislatures has been fragmented, timid and ultimately inadequate.
The problem is not that algorithms manage workers — it is that they have become employers without accountability, and our legal system has not caught up.
How opacity disguises inequality as efficiency
The platforms frame algorithmic management as a neutral, efficient technology. They claim their algorithms optimize matching, reduce transaction costs and enable the flexibility workers desire. The HRW report reveals this framing to be false. Consider dynamic wage-setting. A DoorDash or Uber driver does not know why their pay for the same delivery route fluctuates week to week. The platforms justify this through what they describe as “dynamic pricing” based on “real-time demand,” “market factors” and other undisclosed variables. As some legal reviews have argued, this opacity allows for “algorithmic wage discrimination,” where platforms leverage granular data — acceptance rates, response times, location history and braking patterns — to calculate the minimum pay each individual will accept. A desperate worker with few outside options may be systematically offered lower wages for identical work. This is not pricing efficiency; it is algorithmic wage theft, tailored to each worker's vulnerability.
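To make the mechanism concrete, here is a minimal Python sketch of how pay personalization of this kind could operate. Everything in it is an illustrative assumption: the features, thresholds and base pay are invented, not taken from the HRW report or from any platform's actual model.

```python
# Hypothetical sketch of personalized wage-setting. The features,
# thresholds and weights below are invented for illustration and are
# not drawn from the HRW report or any platform's actual model.
from dataclasses import dataclass


@dataclass
class WorkerProfile:
    acceptance_rate: float       # share of offers the worker accepts, 0-1
    median_response_secs: float  # how fast the worker claims offers
    weeks_active: int            # proxy for dependence on platform income


BASE_PAY = 12.00  # illustrative pay for a given delivery route


def personalized_offer(worker: WorkerProfile) -> float:
    """Estimate the lowest pay this worker is predicted to accept."""
    discount = 0.0
    if worker.acceptance_rate > 0.9:     # rarely declines: likely few options
        discount += 0.15
    if worker.median_response_secs < 5:  # snaps up offers instantly
        discount += 0.05
    if worker.weeks_active > 104:        # long tenure, platform-dependent
        discount += 0.05
    return round(BASE_PAY * (1 - discount), 2)


# Two workers, identical route, different pay for identical work:
print(personalized_offer(WorkerProfile(0.95, 3.0, 150)))  # 9.0
print(personalized_offer(WorkerProfile(0.60, 30.0, 10)))  # 12.0
```

The point is structural: the same task yields different pay, keyed to a worker's predicted desperation rather than to supply and demand.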
The International Labour Organization debate on algorithmic management, held at the 113th International Labour Conference in June, crystallized this contradiction. Employers and some governments argued that regulating algorithmic systems amounted to interference in commercial matters. Workers' representatives responded that this framing evades a fundamental truth: algorithms are management systems, and management systems that set pay, assign tasks and discipline workers are inherently labor matters. The European Union, the Workers' group, and governments such as Chile insisted that algorithmic governance is inseparable from labor regulation, as it directly determines pay, hours and conditions of work. The Trump administration, represented by U.S. Department of Labor officials, and China sided with employers, arguing that algorithmic governance is beyond the ILO's mandate.
This position is untenable. An algorithm that unilaterally sets a worker's hourly wage is not a commercial feature but an employment practice.
Surveillance as disciplinary infrastructure
The HRW findings on worker surveillance should disturb anyone committed to human dignity. Platforms track location, speed, braking habits and even phone usage, often extending surveillance to off-duty time. This data feeds into behavior-scoring systems that determine wages, offer frequency and, ultimately, whether a worker remains on the platform. Gamified incentives — such as Uber's “Quest” bonuses for completing consecutive rides or DoorDash's “Delivery Streaks” for accepting back-to-back orders — are psychologically coercive, compelling longer hours and scheduling patterns that mimic direct employment while evading employment protections.
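A toy version of such a scoring pipeline, sketched below, makes the coupling explicit. The signals, weights and throttling rule are assumptions invented for illustration, since HRW documents what data is collected rather than how any platform scores it.

```python
# Toy behavior-scoring pipeline. The signals, weights and throttling
# rule are illustrative assumptions; HRW documents the categories of
# data collected, not any platform's scoring formula.
def behavior_score(hard_brakes_per_100mi: float,
                   speeding_mph_over_limit: float,
                   phone_pickups_per_hour: float) -> float:
    """Return a 0-100 score; lower scores mean fewer, worse offers."""
    score = 100.0
    score -= 2.0 * hard_brakes_per_100mi    # driving telemetry
    score -= 3.0 * speeding_mph_over_limit  # speed monitoring
    score -= 1.5 * phone_pickups_per_hour   # phone-usage tracking
    return max(score, 0.0)


# The same score then gates offer frequency and access to bonuses:
score = behavior_score(hard_brakes_per_100mi=6,
                       speeding_mph_over_limit=2,
                       phone_pickups_per_hour=4)
offers_per_hour = 8 if score >= 80 else 4  # illustrative throttling rule
print(score, offers_per_hour)              # 76.0 4
```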
And then there is algorithmic deactivation — instant termination by algorithm, with no human review and limited recourse. A customer gives a poor rating, or the algorithm detects suspicious behavior, and a worker's income vanishes. The appeals process, when it exists, is often fully automated. These are not disciplinary systems designed for fairness; they are systems designed for total corporate control without corporate accountability.
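A minimal sketch of this failure mode, under invented thresholds, looks like the following. The telling detail is that the “appeal” simply re-runs the rule that deactivated the worker.

```python
# Hypothetical sketch of deactivation with a fully automated "appeal."
# The thresholds and signals are illustrative assumptions.
RATING_FLOOR = 4.6     # illustrative customer-rating cutoff
ANOMALY_CEILING = 0.8  # illustrative fraud-model score cutoff


def should_deactivate(avg_rating: float, anomaly_score: float) -> bool:
    # A dip in ratings or a single model flag ends the account instantly.
    return avg_rating < RATING_FLOOR or anomaly_score > ANOMALY_CEILING


def automated_appeal(avg_rating: float, anomaly_score: float) -> str:
    # The "appeal" re-runs the same rule that deactivated the worker;
    # no human reviews the evidence or hears the worker's side.
    if should_deactivate(avg_rating, anomaly_score):
        return "Appeal denied."
    return "Account reinstated."


if should_deactivate(avg_rating=4.5, anomaly_score=0.2):
    print(automated_appeal(avg_rating=4.5, anomaly_score=0.2))  # Appeal denied.
```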
Misclassification as a legal shield
The deeper question is why these practices persist. The answer is misclassification.
By classifying de facto employees as independent contractors, platforms sidestep minimum wage laws, overtime protections, workers' compensation and — critically — the duty to bargain with unions. A conventional employer who surveilled workers with this intensity, set wages this opaquely and terminated workers this arbitrarily would face immediate legal exposure. But because gig workers are classified as contractors, the legal system treats these practices as transactions between a platform and an autonomous service provider, not an employer and an employee.
This classification is not a legal accident; it is the underlying architecture, the foundation upon which wage suppression is built. And recent policy developments suggest the structure is hardening.
In December, President Donald Trump signed an executive order explicitly directing federal agencies to challenge state artificial intelligence regulations—specifically naming Colorado's algorithmic discrimination law and targeting California's AI safety disclosure requirements—as unconstitutional interference with commerce. The stated rationale is to ensure that American AI companies are free to innovate without cumbersome regulation. This order signals that the federal government will actively defend platforms' right to use opaque, discriminatory algorithms against workers. At the same moment, the European Parliament voted to ban automated hiring, firing and pay decisions, requiring human review and worker contestability for all algorithmic employment decisions.
The transatlantic divergence is stark, but it favors platforms. The EU Platform Work Directive imposes enforceable obligations on platforms—including human oversight of algorithmic decisions and bans on automated dismissals—while the Trump administration's executive order uses litigation to eliminate state-level worker protections rather than establish new ones.
Why current approaches fall short
Existing labor law is insufficient.
The National Labor Relations Act, enacted in 1935, assumes a visible employer who can be held accountable for supervisory decisions. It does not contemplate algorithmic employers. Reclassifying gig workers as employees is necessary but insufficient — it does not automatically make algorithms more transparent, fairer or more contestable. A worker classified as an employee but still managed by an opaque algorithm has only marginally more protection than a contractor managed by the same algorithm.
What is required is algorithmic governance: specific, enforceable rules about how algorithms can use worker data, set wages, make disciplinary decisions and terminate work. This must include algorithmic impact assessments, mandatory transparency about wage-setting factors, human review before termination, and worker and union participation in the design and auditing of management algorithms.
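To show how small the required change is, here is a sketch of a human-review gate, in contrast with the automated deactivation above. The names and structure are illustrative, not language from any actual statute or directive.

```python
# Sketch of the human-review safeguard described above: an algorithm
# may flag an account, but only a recorded human decision can end it.
# Field names and structure are illustrative, not from any statute.
from dataclasses import dataclass
from typing import Optional


@dataclass
class DeactivationCase:
    worker_id: str
    algorithmic_flag_reason: str          # must be disclosed to the worker
    human_reviewer: Optional[str] = None  # a named, accountable person
    human_decision: Optional[str] = None  # "uphold" or "overturn"


def can_terminate(case: DeactivationCase) -> bool:
    # Termination is valid only after a named human reviews the flag
    # and records a decision the worker can contest.
    return case.human_reviewer is not None and case.human_decision == "uphold"


case = DeactivationCase("w-123", "anomaly score above threshold")
assert not can_terminate(case)  # an algorithmic flag alone cannot end income
case.human_reviewer, case.human_decision = "reviewer-7", "uphold"
assert can_terminate(case)
```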
The European Union is moving in this direction.
The proposed EU Directive on Algorithmic Management bans final algorithmic decisions on hiring, firing and pay, requires human oversight and prohibits surveillance of off-duty conduct and emotional states. It is not perfect — enforcement remains uncertain and implementation will be complex. But it establishes a principle: algorithms are not neutral; they require governance; and workers are not powerless against them.
The US, by contrast, is moving backward. Rather than regulate algorithmic management, the Trump administration's executive order actively seeks to preempt state efforts to do so.
This would leave millions of platform workers exposed to algorithmic wage theft, algorithmic surveillance and algorithmic termination — all of which the HRW report documented in meticulous detail.
A democracy problem, not just a labor problem
This matters beyond labor markets. Algorithmic governance, when left unregulated and concentrated in corporate hands, is a tool for eroding democratic accountability. When platforms use algorithms to control wages and hours, suppress organizing and prevent collective contestation, they are building unaccountable power structures that operate outside democratic institutions. This is a form of private authoritarianism enabled by algorithmic opacity.
Moreover, algorithmic wage discrimination, if allowed to proliferate unchecked, will deepen economic inequality.
Platforms can dynamically adjust compensation based on real-time data about worker desperation, family structure, geographic location and borrowing capacity. They can target vulnerable populations — immigrants, workers of color, single parents — with systematically lower wages for identical work. The HRW report finds that this is not theoretical; it is happening now, in our own backyard.
What policymakers should do
Four policy moves are urgent.
First, lawmakers should pass comprehensive algorithmic governance legislation that applies specifically to algorithmic management. This should include mandatory algorithmic impact assessments focused on wages, hours, health and freedom of association; transparency requirements that workers have the right to understand how algorithms determine their pay, task assignment and performance evaluation; mandatory human review before termination or deactivation; and worker and union participation in the design and auditing of management algorithms.
Second, reclassify gig workers as employees (or create a third category with equivalent protections), combined with algorithmic governance safeguards. Reclassification alone is insufficient if the algorithmic management infrastructure remains unregulated.
Third, establish the ILO algorithmic management standard as the baseline, rejecting the argument that algorithms are commercial rather than labor matters. The US should stop opposing ILO efforts to regulate algorithmic governance and instead lead in setting global standards that protect workers from algorithmic exploitation.
Finally, reverse recent executive orders that seek to preempt state AI regulation. States like California and Colorado are pioneering algorithmic safeguards that the federal government should learn from, not litigate against.
The HRW report documents the infrastructure of algorithmic exploitation now operating in plain sight. The question facing policymakers is whether they will allow this infrastructure to calcify into the default architecture of digital work, or whether they will assert democratic authority over the algorithmic systems that govern millions of workers' lives and livelihoods.
The algorithm is the employer. It is time to regulate it as such.