Algorithmic Systems Designed to Reduce Polarization Could Hurt Democracy, Not Help It

Austin Clyde / Feb 17, 2022

Austin Clyde is a visiting research fellow at the Harvard Kennedy School’s Program on Science, Technology, and Society and a Ph.D. candidate in computer science at the University of Chicago.

In a recent working paper titled "Can Algorithmic Recommendation Systems Be Good for Democracy?", Aviv Ovadya makes a case for what he calls "bridging-based" content recommendation systems as an alternative to the engagement-based systems commonly employed on social media platforms. His idea is that recommendation systems should prioritize content that reduces algorithmically inferred societal divisions, favoring content that "bridges" division rather than perpetuates it.

Ovadya explains that the chief technical challenge is an algorithm's ability to understand "local language" and non-textual forms of media "such as audio and video," and he calls for more public and private investment to overcome these hurdles. This approach moves beyond the current debate over the merits of chronological versus engagement-based recommendation systems, and Ovadya gives an excellent account of how both prioritize values that are antithetical to a healthy democracy.

Like Ovadya, I am a proponent of the idea that we should increase investment in, and technical attention to, remedying the harms of technology. More scholarship is needed on how we can use advances in technology to build better mechanisms for accountability, inclusion, and responsible innovation at large (which I have written about in the context of scientific investment).

However, Ovadya’s account of a "bridging-based" ranking system fails to meet the basic tenet of his own democratic desideratum: facilitating "understanding, trust, and wise decision-making." Algorithms undermine what it means to understand each other at every level, from diluting the nature of understanding in intersubjective communication to black-boxing systematic discrimination and human rights abuses.

The real question is whether forcing any quantitative, disembodied metric over the real, messy, and "thick" sense of being a citizen in a democracy is ever justified. We know that the choices of metrics we use to govern the algorithms on very large online platforms have systematic and real "nudge"-like effects on society. When evaluating their application, we should turn back to the basic principles of democracy, which rest on the division and constraint of power (pluralism) supported by the equal standing of rights-bearing citizens before a democratically determined legal system.

Could “bridging-based systems” promote understanding?

I interpret Ovadya’s view of "bridging-based" content recommendation systems as revolving around a notion of deliberation in which it is preferable that "opposing sides understand each other." The "bridging-based" metric that would produce such an understanding, then, is an algorithm that gauges the level of understanding, or the potential for mutual justification, that a post or other unit of content offers a diverse collection of social groups. The idea is that by incentivizing posts that attempt to bridge different viewpoints, a recommendation system contributes to people understanding each other rather than just talking past each other.
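To make the contrast concrete, consider a toy sketch of how such a metric might differ from an engagement-based one. This is purely illustrative and not Ovadya's actual specification; the group labels and approval numbers are invented. The intuition is that engagement rewards total reaction regardless of which side it comes from, while a bridging score rewards only content every group can endorse.

```python
# Toy sketch of engagement-based vs. "bridging-based" ranking.
# NOT Ovadya's actual algorithm; group labels and predicted approval
# ratings below are hypothetical, invented for illustration.

posts = {
    "partisan_rallying_cry":   {"group_a": 0.95, "group_b": 0.35},
    "cross_cutting_explainer": {"group_a": 0.60, "group_b": 0.60},
    "bland_platitude":         {"group_a": 0.50, "group_b": 0.50},
}

def engagement_score(ratings):
    # Engagement-style proxy: total predicted reaction, no matter
    # which group it comes from, so one-sided content can win.
    return sum(ratings.values())

def bridging_score(ratings):
    # Bridging-style proxy: worst-case approval across groups, so a
    # post scores well only if every group can endorse it.
    return min(ratings.values())

by_engagement = max(posts, key=lambda p: engagement_score(posts[p]))
by_bridging = max(posts, key=lambda p: bridging_score(posts[p]))

print(by_engagement)  # the divisive post wins on raw engagement
print(by_bridging)    # the cross-cutting post wins on bridging
```

Under these invented numbers, the two objectives surface different content: the one-sided post tops the engagement ranking, while the cross-cutting post tops the bridging ranking.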

Fostering productive democratic deliberation is no small area of political science research. State-of-the-art techniques for fostering healthy democratic deliberation are mini-publics and their practical counterpart, deliberative polls. These deliberative mechanisms gather a statistically diverse group of citizens for a few days of deliberation on a political issue. Empirical evidence shows that these forums foster less polarization while often changing the minds of the citizens in the room. One could imagine interpreting the success of these systems as a motivating factor for engineering a "bridging-based" recommendation system such as that imagined in Ovadya's proposal.

The technical infrastructure of mini-publics produces, in part, the conditions for participants to undertake intersubjective commitments to mutual understanding by removing them from their social contexts in the presence of a trained moderator, experts, and support staff. One could see how the "bridging-based" metric could be an attempt to reproduce such technical infrastructure as a reward function for social media posts. After all, a behavioral economist could read mini-publics as merely establishing a set of initial conditions that construct specific incentives for actors to reach opinions that they could endorse.

However, this view has two significant flaws when considered against basic democratic tenets. First, a behavioral reduction of what happens inside deliberations fails to recognize what understanding and bridging mean for participants. It is not simply the quality of deliberation, or the fact that citizens in mini-publics do bridge views, that makes this approach democratic. What makes mini-publics unique is that citizens leave with a well-formed political opinion that they can reflectively endorse. The fact that certain technical circumstances or means lead to this result, which may or may not include bridging views, is not an end in and of itself.

Second, algorithms that nudge citizens towards consensus may in fact short-circuit or otherwise interfere with processes that may be necessary for a democracy to determine the best path forward.

The technical solution of utilizing algorithms to gauge content and predict whether it may "bridge" competing views treats the citizenry as those outside the deliberation room. Citizens are not privy to the algorithm’s calculations, and therefore may not benefit from the bridging effect. Worse, if the algorithm cannot explain itself, it runs the risk of violating human rights protections afforded to minority groups in political discourse.

Metrics cannot quantify healthy democratic conditions

While I endorse Ovadya's view that current recommendation systems do not advance democratic values, I question whether any closed, quantifiable metric can genuinely support democracy. I worry that over-optimizing for bridging may decrease polarization, but do so in a way that is detrimental to pluralism. Democracy goes deeper than consequentialist values.

In their recent book System Error, Stanford professors Rob Reich, Mehran Sahami, and Jeremy Weinstein argue that optimization becomes a problem when it treats a social issue as an abstract mathematical metric, which invites over-optimization. Over-optimization should be truly scary for society as it becomes increasingly codified into digital infrastructure. Optimization routines, as shown by the Facebook Files, can ultimately affect a wide range of social circumstances, from teenagers' mental health to geopolitical stability. Moreover, many fundamental democratic instruments, such as basic human and associative rights, are deontological and thus difficult or impossible to pose as optimization problems.

The most significant risk of a bridging-based recommendation system, or any metric, is over-optimization that reduces pluralism or treats pluralism itself as a problem. Rather than merely "bridging understanding," the system could converge on whatever content most completely bridges, smoothing over opposing viewpoints, softening the discourse, and silencing certain viewpoints altogether. Should climate-change advocates soften the news of the deeply troubling changes coming our way, for instance? This reduction in polarization becomes a reduction of pluralism by treating views as mere points on a single axis of extremity. But political questions are not morally closed such that one could pick a metric to normalize society against. Pluralism is the freedom to form dissenting opinions, which is precisely what freedom and liberty demand. Put more simply: sometimes extremes are good and necessary to move society forward.
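The over-optimization worry can be made concrete with a toy ranking. If a feed optimizes purely for worst-case cross-group approval (one plausible reading of a bridging score; the numbers and post labels below are invented for illustration), then an urgent but polarizing message sinks below anodyne consensus content, regardless of its importance.

```python
# Toy illustration (invented numbers): pure bridging optimization
# ranks a dissenting, urgent warning below harmless consensus content.

posts = {
    "urgent_climate_warning": {"group_a": 0.90, "group_b": 0.20},
    "feel_good_consensus":    {"group_a": 0.70, "group_b": 0.70},
    "weather_smalltalk":      {"group_a": 0.60, "group_b": 0.60},
}

# Bridging-style score: worst-case approval across groups.
ranked = sorted(posts, key=lambda p: min(posts[p].values()), reverse=True)

print(ranked)  # the dissenting warning lands at the bottom of the feed
```

The point is not that any real system would use exactly this formula, but that any metric rewarding cross-group agreement alone will, at the optimum, bury exactly the dissenting views pluralism exists to protect.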

We need solutions to pernicious and affective forms of polarization, but solutions cannot, under any circumstances, deny the moral and political right to pluralism. Pluralism is a feature of democracy, not a deficit. Algorithms cannot capture what a healthy pluralism can produce, nor can algorithms drag us through the process of deliberation and democratic will-formation at the push of a button. Despite the allure of a golden civil society which harmoniously agrees, democracy relies on moving forward despite our disagreements, even the ones that are never truly resolved.
