
UNC Researchers: Grounds for Banning Trump from Facebook are Clear

Justin Hendrix / Feb 11, 2021

The statement below was released by the UNC Center for Information, Technology, and Public Life (CITAP) in response to the Facebook Oversight Board's call for public comment on Facebook's suspension of Donald Trump from the platform.

To the Oversight Board –

It is the belief of the University of North Carolina at Chapel Hill’s Center for Information, Technology, and Public Life (CITAP) that Facebook’s decision to suspend President Trump’s accounts for an indefinite period both properly applied the company’s pre-existing policies and fulfilled its stated responsibilities to respect and protect the freedom of expression and human rights of its users. While the transparency and consistency of Facebook’s enforcement actions are flawed, the suspension itself was justified, and the appeals process in this particular case is clear. The grounds for removing the former president from the platform permanently under Facebook’s Community Standards are also clear. Facebook’s actions follow the company’s history of suspending users who repeatedly violate its policies.

Under international human rights law, “the exercise of the right to freedom of expression carries with it special duties and responsibilities.” Over the months leading up to the January 6, 2021 attack on the U.S. Capitol, President Trump used his singularly powerful expression on Facebook to deny the expressive rights of the American public at the ballot box. He did so by repeatedly and deliberately making false claims about the processes and procedures of voting and the security of the vote. In doing so, the president actively and deliberately created a context in which his supporters perceived the election as fraudulent.

In the context of the specific posts at issue in this case, the president’s false claims, made after voting had ended and states had certified their outcomes, that the “election was stolen from us” and that it was a “fraudulent election” were deliberate attempts to deny the expressive voting rights of the American public and to overturn the election. The president’s long and consistent record of public comments since his initial presidential candidacy in 2015, especially in the context of social movements for racial justice and the 2020 presidential election, implicitly and explicitly sanctioned extrajudicial violence by his supporters against detractors. This includes his remarks at the January 6th rally that “We fight like Hell and if you don’t fight like Hell, you’re not going to have a country anymore.” While there are debates over whether this met the legal definition of incitement, Facebook was correct to recognize that President Trump spoke these words in the context of this history and of his false claims that the election was stolen.

Facebook’s immediate ban on President Trump must be understood in this context. We also believe a permanent ban is more than justified given the former president’s repeated violations of Facebook’s Community Standards, the ongoing threat to U.S. democratic institutions, including the public’s exercise of voice at the ballot box, and the ongoing potential for violence. President Trump’s repeated use of Facebook to deny the voice of others, to circulate disinformation about election security, and to degrade the dignity of political opponents justifies these actions, particularly given that the former president has lent recognition to hate groups, failed to condemn extrajudicial violence, and worked to have federal agencies downplay the threats of armed paramilitary groups.

There is a long and well-established body of research that finds media and public discourse are part of large ecosystems. What happens on President Trump’s Facebook and other social media accounts shapes public discourse in other media, and vice versa. As such, Facebook’s policies must be interpreted to take into account the holistic context within which users post, especially in relation to election interference, voter suppression, anti-democratic actions, and the incitement of violence. Indeed, Facebook’s own policies account for off-Facebook facts in stating that “we do not allow any organizations or individuals that proclaim a violent mission or are engaged in violence to have a presence on Facebook.” Facebook also should not limit the interpretation of its policies to discrete posts, but must evaluate accounts and content over longer time frames that shape the context within which expression is interpreted. In addition to holding people accountable for specific statements, Facebook users must be responsible and accountable for their actions on the platform over time.

To return to the specific case at issue, President Trump’s posts on Facebook and other platforms, together with his words at rallies, in interviews, and in press statements, told a remarkably consistent story designed to undermine the legitimacy of the election, subvert the president’s accountability at the ballot box, and create a context within which his supporters were encouraged to take extrajudicial actions designed to intimidate political opponents and duly elected officials. In the specific context of incitement, examining only discrete posts at discrete moments in time is dangerous. Such a standard would enable Facebook only to react to harms, not to prevent them; prevention, in our view, is the company’s ethical and moral responsibility and is in keeping with its Community Standards. This is especially true for world leaders and other political elites who are uniquely capable of harm, especially those with very large audiences on Facebook or other platforms. While we know this is complicated and are not advocating for automated predictive solutions, more robust forms of human review of elite accounts should be implemented to protect against harm.

As Facebook’s own Civil Rights Audit has shown, the company has often failed to fulfill its obligations to protect the voices of those most vulnerable. These failures are not confined to the U.S.; President Trump is not the only world leader who has used Facebook to undermine electoral accountability at the ballot box, delegitimize political opposition, and subvert the democratic institutions that act as a check on their power. Facebook should draw a bright red line at any attempt by a political leader, or by those vying to become one, to undermine democratic checks, including the workings of elected representative bodies, judicial systems, and those tasked with the non-partisan administration of state functions, including elections.

As decades of political science research shows, what elites in particular say and do matters most for peaceful democratic transitions and the preservation of democratic institutions. While the public has a stake in hearing from those representing them (or seeking to), democratic publics also face clear and present dangers in the possibility that disinformation and divisive or inciting speech will undermine democratic accountability. Preserving this accountability should be the standard by which Facebook judges the expression of the political actors that use its platform. While there will be difficult cases, Facebook should have the flexibility to interpret them against this standard, especially in consultation with other institutions that operate in the public’s interest, including journalists, non-governmental organizations, and scholars.

The Facebook Oversight Board will accept public comment on the case of the former President until February 12th at 15:00 UTC.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & ...
