A Multi-Stakeholder Approach to Content Moderation Policy

Justin Hendrix / Sep 15, 2021

One of the content moderation incidents that still hangs in the polluted air of American politics like a naughty apparition concerns moves taken by Facebook and Twitter to limit the spread of a New York Post story published in October 2020 about emails found on a laptop that belonged to Democratic presidential nominee Joe Biden’s son, Hunter Biden.

Putting aside the merits and the facts of the story itself, how the decision to limit its distribution was handled by the platforms in those final weeks before the election prompted concern not only from partisans who hoped what the Post called a “Smoking Gun” would sink then-candidate Joe Biden’s campaign, but also from observers in the journalism and fact-checking community. For instance, Cristina Tardáguila, the International Fact-Checking Network’s Associate Director, published a column on Poynter arguing that the “decision to reduce or prevent the distribution of the New York Post’s article based on some mysterious, non-transparent criteria and an unknown methodology is a serious mistake.”

Twitter handled the situation so badly, walking back its decision after acknowledging it had erred in contradicting its own policies, that the Republican National Committee filed a complaint with the Federal Election Commission that, as the New York Times reports, accused the social media firm of “using its corporate resources” to advantage the Biden campaign. While that complaint was dismissed this week, with the Commission finding that Twitter made the decision on commercial rather than political grounds, the scenario still haunts the fever dreams of the former President’s supporters, who cling to all manner of “evidence” that the 2020 election was “stolen” from him.

Indeed, it did not matter so much what the merits of the decisions by the platforms were, or on what basis they were taken. The methods by which they arrived at and implemented their decisions were enough to produce an enduring conspiracy theory that posits Jack Dorsey is a secret Democrat, and that if only the New York Post story had circulated unhindered the outcome of the election might have been different. (Never mind the fact that the controversy following the moves by the platforms resulted in the story occupying more headlines, cable news segments, social media chatter and indeed ultimately government hearings than the Murdoch tabloid could ever have dreamed possible.)

This problematic parable illustrates both the urgency of arriving at content moderation policies and practices, managed by the major platforms, that contribute to the legitimacy of public discourse, and what happens when decisions are made clumsily and without transparency in our highly divisive political environment.

Enter the R Street Institute, a D.C. think tank that advocates for free markets, which, with funding from the Knight Foundation, recently set out to explore what a properly designed “multi-stakeholder” process for addressing “problems at the intersection of harm arising from online content moderation, free expression, and online management policies and practices” would look like. In a new report issued today, Applying Multi-Stakeholder Internet Governance to Online Content Management, authors Chris Riley and David Morar report on the process they arrived at and its prospects for driving “sustainable progress in online trust” that is the result of “constructive discussions in the open.”


After a nod to Eric Goldman’s excellent Michigan Technology Law Review paper, Content Moderation Remedies, Riley and Morar conclude that neither of the two recently established trade associations, the Trust & Safety Professional Association (TSPA) and the Digital Trust and Safety Partnership (DTSP), both only around for a matter of months, appears to meet the standard to be considered “multi-stakeholder” in the context of internet governance, which “requires the inclusion of perspectives from industry, civil society and government voices in governance discussions.”

So, they set out to design their own model of what a true multi-stakeholder process would look like, in a methodical effort to facilitate participation from civil society, academia and the tech industry:

The goal was to create a set or framework of voluntary industry standards or actions through spirited but collegial debate. The objective at the outset was not to “solve” the issue of online content management, which is an unfeasible objective, but to generate a space for discussion and forward-thinking solutions.

Once underway, the group arrived at some key points of consensus, which include reasonable ideas such as that “content management must not be the perfect and total prevention of online harm, as that is impossible,” that “content management does not resolve deeper challenges of hatred and harm, and at best works to reduce the use of internet-connected services as vectors,” and that “automation has a positive role to play in content moderation, but is not a complete solution.”

Among the points of agreement the report suggests the process led to is a shared view of what is, and is not, possible to address with regard to content moderation in this mode of engagement.

Of particular importance to the stakeholders whose input shaped this process is the recognition that this work, like the space of content management more broadly, is not meant to address the full depth of harm in human connection and communication over the internet. Too often, content moderation is seen as the entire problem and solution for disinformation and hate speech, when it is not. We must all explore potential improvements to the day-to-day of online platform practices, while at the same time invest in greater diversity, localism, trust, agency, safety and many other elements. Likewise, content moderation is not a substitute solution to address harms arising in the contexts of privacy or competition.

Having set some bounds for itself, the group then went on to develop a set of propositions accompanied by “positives, challenges and ambiguities” that would need to be more fully examined. These include ideas such as “down-ranking and other alternatives to content removal,” more granular or individualized notices to users of policy violations, “clarity and specificity in content policies to improve predictability at the cost of flexibility,” the introduction of “friction in the process of communication” to potentially reduce the spread of misinformation and other harms, and experimentation with more transparency in how recommendation engines work.

All of the ideas described are supple and nuanced enough to point to the potential of such a multi-stakeholder process. But the group gathered for this prototype effort was small, and it is hard to say how the model would scale. The R Street Institute authors take inspiration from the National Institute of Standards and Technology (NIST), which recently ran what looks like a successful process to arrive at “standards, guidelines and best practices” in cybersecurity, and the National Telecommunications and Information Administration (NTIA), which recently ran a process to arrive at some consensus on technology policy topics such as facial recognition. Indeed, they posit the NTIA may be the obvious vehicle to further pursue the problem of online content management.

But what of Hunter Biden’s laptop? How would this expertly facilitated process help us avoid such a scenario? I put that question to Chris Riley.

“My hypothesis is that these are inherently hard problems and platforms are - in most but not all cases! - trying to do the best they can in good faith, but stuck within silos,” he told me. “The idea of multi-stakeholder engagement is to help them have better information and factors for consideration for such decisions up front, seeing around inherent myopia / blinders (which we all have), as well as some better trust with civil society to draw on in times of crisis to expand the bubble a little bit. Practically speaking that means they'd be more likely to make a decision and stick to it - which doesn't mean it's necessarily correct (on some level there's subjectivity here), but at least that it's a clearer articulation of the company's values in such a complex decision environment.”

That leaves the question of the politicians, though, and whether they are willing to come to the table in good faith. Senators such as Ted Cruz (R-TX) and Lindsey Graham (R-SC) jumped on the Hunter Biden laptop imbroglio last October, demanding hearings and invoking the First Amendment, even though they both know full well that the Constitution places no limit on social media firms’ decisions about what content to host on their platforms.

Riley is optimistic that a robust multi-stakeholder policy process may ultimately settle the score.

“A lot of politicians want to carve a pound of flesh from big tech right now, but for different motivations, and if any agreement is found there it will come from policy,” he said.

Perhaps the NTIA will take on this difficult task. The Biden administration has yet to choose a permanent leader for it; when someone is in the chair, the R Street Institute process may offer a ready playbook. Meanwhile, bills such as the Online Consumer Protection Act, which would require social media companies to provide more specific terms on content moderation practices, including what actions prompt moderation and on what grounds, as well as clarity on how users are notified about and might appeal such decisions, creep along in Congress while the next election cycle looms.
