Between the Algorithm and a Hard Place: The Worker's Dilemma

Diana Enríquez / Dec 11, 2023
For many, the economy is rough these days. Let’s say you decide to earn a little extra cash by picking up another job – one with flexible hours and a somewhat predictable pay schedule. You decide to pick up a few hours here and there with Uber, and whatever shifts you can grab from AmazonFlex.
The onboarding for both of these jobs is pretty straightforward – you log in for your shift on an app. You accept a ride from a passenger, and the Uber app maps a route to pick them up and drop them off wherever they want to go. You show up to an Amazon warehouse and scan your assigned packages into your phone so the AmazonFlex app can plan your route. Usually the maps work pretty well and your jobs go smoothly.
But today there is a glitch in the app. You’ve heard from the friends who recommended you join Uber/AmazonFlex that this happens, but you haven’t had to navigate it before.
Today you’re supposed to drop off a package at a location outside the route provided by the AmazonFlex app’s map. Or your passenger needs to go somewhere, and the Uber app wants you to drive down a street that you know has very hazardous road conditions. You also know that the app is always tracking your location and how closely you stick to the “optimized route.” You’ve heard from other drivers that you might get a warning and a strike against you if you go too far off route. Too many strikes means you’ll lose your flexible job, and the supplemental income that is helping you pay your bills.
You have two options:
- Break the rules but complete the goal – you leave the tracked route and reach your destination anyway. Or you avoid the hazardous road, because you are responsible for maintaining your car, and you reach the end destination without any damage. You wait a few days to see what happens… and you get an automated email warning you that your passenger marked down your driver score for taking a “longer route,” or one saying the company needed to verify whether you actually delivered the final package because it saw you leave the optimized route.
- Follow the rules but at a heavy cost – you’ve heard too many stories about people being deactivated for not obeying the app’s guidance, so you stick to the route and try to reach your final goal anyway. You take the hazardous route and damage your car; the repair bill costs more than you made on the trip. Or you stay on the route and drive in circles around the house where you need to drop off the package – burning gas, hoping that with enough refreshing the app might reroute and finally let you reach your end goal.
These are the types of decisions you have to make when your manager is automated. While AI and the other algorithms that manage our routines at work can do a lot of things well, these systems do not work well in gray areas where conditions change quickly.
In the lose-lose situations described above, the driver has to decide 1) whether and how to override their automated manager and 2) what kind of cost they want to absorb based on that decision. What makes these decisions trickier is that there often isn’t a human manager above the automated management tools for drivers to appeal to. Some wiggle room for workers to improvise is necessary to handle changing conditions, but in an effort to be “hyper efficient,” these platforms decided against allowing it. Rather than the platforms identifying and addressing changing working conditions, my research found that most workers in these automated workplaces lose their jobs for failing to obey “optimized” instructions.
While algorithms can do many impressive things, they cannot handle the gray areas we all frequently face in the working world, which require human problem solving. Uber and AmazonFlex have been able to automate a layer of management on their platforms because there is a lot of routine in the work – but the workers I’ve interviewed over the last few years also demonstrate that a significant part of their day is spent negotiating with the automated management tools and deciding how to get their jobs done. The glitches many of us have learned to ignore – a confused customer service chatbot, a weird Netflix recommendation – won’t derail our day. But imagine if those glitches and poor matches determined how you had to do your job.
At present, automated management is happening primarily in lower-paying jobs, but these experiences are also a preview of what we might expect as companies turn to ChatGPT and other AI tools in the hope of reducing their workforce costs. The workers in Amazon’s full-time driving fleet and its warehouses know how tough it can be to work in these conditions – that’s why more of them have been going on strike recently. AI and similar automation tools will not stay isolated in low-wage jobs for long – companies like BCG are already experimenting with ChatGPT as an extension of their analysts.
AI only becomes a true threat to people and their labor when we treat it with reverence instead of scrutinizing its applications with a realistic and critical eye. Technology is a tool, and we must see and work with it accordingly. AI is good at some functions but not others. AI and other automated technologies are not guided by some divine truth – anything automated is full of design choices. The best tools are often developed in conversation with end users and their broader communities. Rather than replacement, we should build work systems and AI tools designed to support workers. These design decisions require deliberate conversations about 1) technology design and 2) organizational design.
As we think about the next iterations of AI and its implementation in the workforce, we have to think carefully about who the designers are and what it means when we allow a few winners to dictate how everything is designed and implemented. Amazon and Uber are often held up as examples of the power of automation at scale. They can also serve as a warning of what happens when an automated layer of management becomes too rigid. Given their scale, their design decisions also start to harm broader industry networks – just look at how full-time US Postal Service workers are now forced into similarly grim conditions as Amazon’s gig workers.
What researchers consistently find is that our best work environments provide some rules for structure and quality control alongside flexibility for workers to innovate and adapt to changing conditions. Automation is designed for the “average case” and performs poorly in “edge cases” or changing conditions; adaptation and innovation are what allow humans to thrive. There is a clear route here to a better partnership – so long as we don’t ignore the respective strengths and weaknesses of humans and machines. But it requires leadership to make design choices that reward BOTH rule following and adaptation, instead of strictly prioritizing process at the expense of outcomes.
At this crucial moment, in which companies are deciding how and where to automate and introduce AI, we have an opportunity to think more critically as designers. Organizations can ask themselves 1) what kind of automated tools would be useful to their teams and 2) how those teams can be involved in the design and implementation process. They must also ask: how do we create space for our teams to adapt to changing conditions? Do we create space for these teams to weigh in on necessary process changes when the existing process no longer serves our intended outcomes? Organizations that do not work through these kinds of co-design discussions and productive sources of conflict are more likely to suffer when automation is implemented.
These design choices will impact how people work, learn, and grow in their careers. This is our chance to steer the conversation around AI toward human-computer partnership and away from labor replacement. Many innovative ideas have come from people combining a few existing tools into a new use case that benefits others. But we need to create and protect this kind of space for improvised work – and trust in the individual worker – in every sector and at every level of the economy.