One Year After the Storming of the US Capitol, What Have We Learned About Content Moderation Through Internet Infrastructure?

Corinne Cath, Jenna Ruddock / Jan 6, 2022
Corinne Cath is an anthropologist of technology who studies the politics of Internet infrastructure, and Jenna Ruddock is a Senior Researcher with the Tech, Law and Security Program at the American University Washington College of Law.
2021 was a landmark year at the intersection of content moderation and internet infrastructure. Just two weeks into the year, Amazon, Google, and Apple all made headlines for cutting the social media platform Parler off from cloud and mobile app store hosting services in the wake of the January 6 attack on the U.S. Capitol. In June 2021, an “unexplained configuration error” at Fastly, a lesser-known but dominant Content Delivery Network (CDN) provider, disrupted access to a range of news, government, and other high-traffic websites for users in multiple countries. Governments, too, turned to infrastructure providers in their efforts to exert control over content: in September, Russia successfully pressured both Google and Apple to delist a mobile app created by supporters of opposition leader Alexei Navalny, while internet shutdowns remained pervasive across the world throughout the year.
One year after the storming of the U.S. Capitol and Parler’s subsequent struggle to remain online, similar events continue to highlight the power infrastructure actors have to enable and disrupt users’ access both to the internet as a whole and to specific websites and mobile content. These events call for a retrospective on what we learned in 2021 about the challenges of internet infrastructure providers acting as content moderators and online power brokers. There is a particular need to understand how the individuals and companies leading the development of internet infrastructure take up the call for increased accountability – or don’t. Major social media platforms are facing mounting pressure over their content moderation practices and their failure to listen to those most impacted – Black women especially, as well as people based outside the U.S. and in regions where English isn’t a dominant language. But until recently, infrastructure providers have largely evaded such scrutiny – even as major infrastructure players like Google and Amazon have faced fierce criticism over their non-infrastructure services and operations.
While many warn, justifiably, against the risks of infrastructure providers engaging in content-based decision-making, these often invisible players have nonetheless engaged in such decision-making while simultaneously embracing one narrative in particular: neutrality. This claim to neutrality stems, in part, from the roots of early internet culture in U.S. counterculture. Many early adopters defined the internet as the ultimate frontier of freedom – despite the obvious infrastructural dominance of governments, elite universities, and private corporations at its birth. This cultural mythology positioned computers – and the internet – as neutral tools through which it was possible to achieve a utopia untethered from existing social, political, and economic power dynamics. These particular notions of freedom and neutrality are deeply ingrained in modern-day tech culture – and often underpin corporate decisions regarding content moderation.
Many infrastructure companies describe themselves as neutral service providers rather than as online power brokers. For example, in August 2019 the CDN company Cloudflare initially said it would not rescind its contract with a platform that had, once again, been connected to a mass shooting. Nothing in Cloudflare’s Terms of Service (ToS) required it to react. Yet within a day, the company cut ties with the platform nonetheless, citing its “lawlessness” and Cloudflare’s own discomfort with “playing the role of content arbiter” – a role the company stated it does not plan to exercise often.
Yet the discomfort expressed by Cloudflare underscores how prevalent the narrative of neutrality is in the infrastructure industry, even as key actors are starting to explicitly recognize their inherently political role. Rather than approaching every decision they make – whether to maintain or rescind their services – as a political one, many infrastructure providers act only in the most extreme cases, and then without clear policies in place. As a result, company remedies are applied haphazardly and without accountability, driven by fear of a public relations scandal rather than guided by a transparent, human rights-informed framework.
These guiding principles of neutrality and PR also mean that it is most often U.S.-based cases that draw public scrutiny and provoke company responses. Cases outside of the U.S., like the Apple and Google delistings in Russia mentioned above, are less likely to lead to corporate mea culpas or regulatory follow-up. Furthermore, the heavily U.S.-centric tone of the debate means that the values companies weigh are slanted towards the U.S. Constitution, in particular a First Amendment-informed interpretation of free expression.
What would happen if we turned this culture on its head and encouraged infrastructure providers to think of all the work they do as political? Would it mean they refrain from intervening at all, even when their services form part of a network that facilitates incitement to violence online or other forms of harm? Or would it, as some argue, lead these infrastructure providers to over-censor for fear of regulatory and public scrutiny? To avoid both extremes, infrastructure providers need a more nuanced approach to these questions.
Moving forward, infrastructure companies will need to develop new policies regarding content, with clearer terms of service. They could also consider, like social media companies, developing or growing their trust and safety teams. Doing so would allow them to maintain some of their cherished neutrality – avoiding overt acknowledgement of their role as reluctant content moderators and power brokers – while still responding to internal and external pressure to act.
Some companies, like Cloudflare, are getting out ahead of the debate by developing human rights policies and other grounding principles to guide decisions regarding controversial content. Such initiatives should be encouraged. These policies should also be subject to outside scrutiny from civil society, academics, and other groups mandated to hold the tech sector to account, to ensure that public interest values are prioritized. We may not see another storming of the Capitol in 2022, but we will undoubtedly be faced with many questions regarding the role of internet infrastructure companies in mediating politically contentious public debate and access to information. When – not if – those situations arise, companies should be prepared to answer the hard questions around accountability proactively, with reasons rooted in concrete policy rather than statements that address PR concerns and claims to neutrality.