
Should We Leave It to AI to Make Our Hard Choices?

Atay Kozlovski / Aug 3, 2023

Atay Kozlovski is a Postdoctoral Researcher at the University of Zurich’s Center for Ethics.

Alina Constantin / Better Images of AI / Handmade A.I / CC-BY 4.0

In this most recent wave of AI hype, AI systems are being touted not only as optimizers of tasks, but also as better decision-makers - more just, efficient, and objective than humans. How could one deny this? We all know that our work capacity is limited by our need for sleep, food, entertainment, and the occasional bubble bath. Moreover, people often allow their prejudices, inconsistencies, and emotions to affect their decisions. AI systems seem to offer a natural solution to this all-too-human condition. In various fields, we already see this transition taking place - algorithms help judges assess the risk of recidivism, help diagnose cancer, identify which job applicant is best suited to fill a vacant position, and determine which advertisement best matches an individual’s consumer profile.

But as the use of AI systems has increased, AI ethicists have sounded the alarm over the many ethical questions that need to be addressed if we want to ensure that AI systems do not cause unintended harm. AI developers have responded by calling for regulation and voluntarily committing to take steps to ensure the safety of their products. They also claim that many of these issues, such as ‘hallucinations’, AI value alignment, bias, and discrimination, can be mitigated or even eliminated over time.

But even if this is true in the long run, should our goal be to embrace this emerging technology in order to optimize our own life choices? If an algorithm can identify the perfect candidate for a job, can’t it also identify the perfect job for me? If AI can help treat my mental health problems, can it not also tell me what type of lifestyle I should live, which partner I should marry, and whether I am suited to be a parent?

Many people might find something unappealing or downright strange about the prospect of allowing AI systems to make such important life decisions for them. But is there a way for the public to determine where AI should be used and where it should not? Contemporary philosophical work on the intersection between axiology (the study of values) and rational decision-making can offer us insight into this question.

A first pass at this question may lead us to claim that the stakes of the decision determine whether an AI system should or should not decide for us. That is, we might argue that mundane decisions such as which toothpaste to buy or what to have for lunch can be delegated, while important decisions like where to live and who to marry should remain in our hands. The problem with this line of thought is that we are inclined to believe that, if done properly, some high-stakes decisions can and should be delegated to AI systems. We already allow AI systems to trade stocks, determine whether someone is eligible for a bank loan, and drive cars, so the stakes of the decision cannot be what makes the difference.

A second pass may claim that the distinction is between personal (subjective) and impersonal (objective) decisions. On this line of thought, my individual and personal decisions should not be delegated, but those decisions that need to be impartial or objective can perhaps be handed over. This line of thought fails on both counts - if the underlying assumption is that an AI system can make better decisions than I can, then why should it matter if the decision is personal or not? If there is a correct answer as to which job is ideal for me, it would be absurd to simply ignore this fact because it is a decision that affects me personally. Similarly, at the impersonal level, we may doubt whether there is always an objective or correct answer to every case - is there an objective answer as to whether we ought to allocate scarce health resources to those who are worst off or to those who would benefit most from them?

These first two failed passes lead us to a more fruitful third attempt to answer the question of when we ought to delegate a decision from a human to an AI system. What we have learned is that the answer is neither a function of the stakes involved nor whether the decision is personal or impersonal. Rather, the answer concerns the structure of the choice-making situation and the values at stake.

Over the past 20 years, the philosopher Ruth Chang has stressed that there is a significant difference between normative decisions such as ‘What is the best career path for me?’ or ‘Is it better to get married or stay single?’ and non-normative questions such as ‘Which stock will provide me the best return on my investment?’ or ‘How tall is the Eiffel Tower?’ The most important distinction between the normative and the non-normative is that the latter can be precisely and mathematically measured while the former cannot. Normative questions bring with them a type of imprecision that relates to the incommensurability of the values involved - the inability to measure those values along a single scale.

Those who argue that AI does or will exceed human decision-making capabilities presuppose that all decision-making situations can be treated as if they were non-normative. This presupposition is directly linked to what has come to be known as the ‘Trichotomy Thesis.’ This thesis holds that when comparing two options or courses of action, one will necessarily be better than, worse than, or equal to the other. Why necessarily so? Because the thesis treats all decisions as if they were non-normative. If we were asked to determine which of two items is heavier, we could simply place them on a scale and see whether one is heavier than, lighter than, or exactly as heavy as the other.

Philosophers as far back as Aristotle have understood that normative questions do not function in the same way. When faced with a normative decision, we will often deliberate on the situation and ultimately conclude that although we have several relevant options, no single option is best. The primary reason for this is that multiple values may be at play, and we lack the means to trade these different values off against one another.

For instance, consider facing a choice between pursuing your passion for music or taking up a job as an accountant. For simplicity, let’s assume that the only values at play in this choice are financial security and artistic pleasure. While working as an accountant will be better in terms of financial security, working as a musician will be better in terms of artistic pleasure. Since rational choices are choices that consider all relevant factors, we want to know which option is best in terms of both financial security and artistic pleasure. If this were a non-normative question, we could simply identify how much ‘weight’ each value carries and then calculate which option is better. But as this is a normative question, we may come to the somewhat perplexing conclusion that both options are good, but in different ways. While there may be clear-cut cases in which one job will be preferred to the other (a highly paid musical position, or an accounting job that requires only 10 hours of work per week), there will be a wide range of cases in which all we can say is that neither is better nor worse than the other.

In such cases, ought we then to conclude that the two jobs are equally good? Is a financially secure yet unsatisfying job just as good as a satisfying yet financially insecure one? A short thought experiment will help show that this cannot be the case. Imagine being presented with a third alternative - a slightly improved accounting job in which you earn another $500 annually. While this job is better than the original accounting job, is it better than working as a musician? Not necessarily. The initial accounting job was already better than the musician job in terms of financial security, so a slightly higher salary would not necessarily change the balance. But if this is true, then it cannot be that the two original jobs were equally good, since when two things are equal, any slight change towards one side or the other should tip the scales in that direction. And so, when confronted with normative choices, we will often conclude that neither option is better than, worse than, nor equally as good as the other.
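To lay bare the structure of this argument, here is a minimal sketch in Python (the verdicts, labels, and function name are mine, invented purely for illustration) of why ‘equally good’ cannot capture the relation between the two jobs:

```python
# A toy encoding of the small-improvement argument from the jobs example.
# The verdicts below record the intuitive judgments described in the text,
# not the output of any real evaluation.

verdicts = {
    ("accounting+", "accounting"): "better",    # an extra $500 a year is a strict improvement
    ("accounting+", "musician"): "not better",  # yet it still does not settle the original choice
}

def equality_is_consistent(verdicts):
    """If accounting and musician were *equally* good, any strict improvement
    to accounting would have to be strictly better than musician."""
    improved = verdicts[("accounting+", "accounting")] == "better"
    still_unsettled = verdicts[("accounting+", "musician")] == "not better"
    return not (improved and still_unsettled)

print(equality_is_consistent(verdicts))  # False: the two original jobs were not equally good
```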

Choice situations like the one in the example above are known as ‘Hard Choices.’ These decisions are hard not because it is difficult to know the right thing to do, but rather because, in such cases, there is no answer to the question ‘What is the right decision?’ So what are we to do in such cases? How should we make a decision when faced with a Hard Choice? Should we just randomly pick one of the options or flip a coin?

In rational decision theory, an isomorphism exists between values, reasons, and actions. Simply put, the balance of reasons favoring each course of action determines what one rationally ought to do. When A is better than B, one ought to choose A; if A and B are equally good, then we can randomly pick between them. Hard Choices are cases in which A is neither better, worse, nor equal to B, but, as Chang puts it, the options are on-a-par.

When two options are on-a-par, we can rationally neither choose nor pick one of them; instead, in such cases, we need to make a personal commitment to one of the alternatives. When forced to select between working as an accountant or as a musician, if I conclude that these options are normatively on-a-par, I need to make an active commitment as to what type of life I am going to live - am I a person who will prioritize financial security over artistic pleasure, or the other way around? Once I have made this commitment, I can rationally choose one of the options. In this way, rational decision-making is not simply a process of evaluating the reasons that favor each option, but also a process of creating new reasons through our rational commitments.
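To make the contrast with the Trichotomy Thesis concrete, here is a small illustrative sketch in Python (the names and structure are hypothetical, not drawn from Chang’s work or from any existing decision-support system) of a comparison scheme that admits parity as a fourth relation:

```python
from enum import Enum

class Relation(Enum):
    BETTER = "better"
    WORSE = "worse"
    EQUAL = "equally good"
    ON_A_PAR = "on a par"  # the fourth relation the Trichotomy Thesis leaves out

def rational_response(relation: Relation) -> str:
    """What the balance of reasons licenses when option A stands in the given relation to option B."""
    if relation is Relation.BETTER:
        return "choose A"
    if relation is Relation.WORSE:
        return "choose B"
    if relation is Relation.EQUAL:
        return "pick either at random"
    # On a par: the existing reasons run out; the agent must commit to one
    # option and thereby create a new reason for choosing it.
    return "no verdict from the existing reasons alone"

for r in Relation:
    print(f"A is {r.value} relative to B -> {rational_response(r)}")
```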

Turning back to AI, if we are correct in claiming that human life is filled with Hard Choices, then we would be fundamentally harming our agency by delegating such choices to an AI system. There are many benefits to employing AI systems and to developing collaborative decision-making mechanisms, but AI is limited in that it cannot decide for us when we face Hard Choices. To be clear, it is not that we cannot develop an algorithm that would tell us what to do in such cases - that would be easy. It is rather that by doing so, we would be giving up the power to actively shape our own personal rational identity. Regardless of the amount of data the AI system is trained on, the sophistication of the algorithm it is built on, and the computing power it has access to, the fundamental issue at hand is that Hard Choices are not a matter of analyzing a situation and finding the right answer. There is no single right answer in such cases!
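To see why building such an algorithm would be easy yet beside the point, consider a minimal sketch (the scores and weights below are invented purely for illustration): a weighted-sum ranker will always produce a ‘best’ option, but only because its weights silently encode the very commitment the chooser never made.

```python
# A naive "hard choice solver": score each option against each value,
# weight the values, and pick the highest total. All numbers are made up.

options = {
    "accountant": {"financial_security": 0.9, "artistic_pleasure": 0.2},
    "musician":   {"financial_security": 0.3, "artistic_pleasure": 0.9},
}

# These weights do all the work: fixing them just *is* deciding whether
# to prioritize financial security or artistic pleasure.
weights = {"financial_security": 0.5, "artistic_pleasure": 0.5}

def best_option(options, weights):
    return max(options, key=lambda o: sum(weights[v] * s for v, s in options[o].items()))

print(best_option(options, weights))  # an answer always comes out, but the hard choice
                                      # was made by whoever set the weights
```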

In regular rational decision-making, we function as detectives and try to find the best answer, but in Hard Choices, we are not detectives but authors. Hard Choices are our chance to write the next line in our life story - am I a person who favors artistic pleasure or financial security? Are we, as a company, committed to teamwork or to individual efficiency? Do we, as a society, prioritize the worst off or those who would benefit the most? By delegating our Hard Choices to an AI, we would not improve our decision-making capabilities; rather, we would turn ourselves into the marionettes of an artificial master.
