Addressing tradeoffs in content moderation: questions for the tech CEOs
Evan Greer / Mar 25, 2021
Last week, Tech Policy Press published a list of questions jointly with Just Security for Congress to ask the tech CEOs at a hearing on Thursday, March 25th. Evan Greer, Deputy Director at Fight for the Future, encouraged the inclusion of questions that address international perspectives and concern for global human rights and freedom of expression. Here are the questions Evan suggested.
In March of last year, multiple media reports emerged about Facebook removing large numbers of posts containing legitimate public health information about COVID-19 posted by medical professionals. The company blamed it on "a bug."
When YouTube announced it would remove white nationalist content from its platform, the company also took down videos by anti-racist groups like the Southern Poverty Law Center.
Facebook has also incorrectly labeled posts about the U.S. government's mass surveillance programs as "misleading," based on a fact-check that cited a former top NSA lawyer as a source.
Over the last several years, researchers found that Big Tech platforms' automated "anti-terrorism" filters regularly removed content from human rights organizations and activists, disproportionately impacting those from marginalized groups outside the U.S.
Attempts to remove or reduce the reach of harmful or misleading content, whether automated or carried out by human moderators, always come with tradeoffs: they can silence legitimate speech, erase documentation of human rights abuses, and undermine social movements that are confronting repressive governments and corporate exploitation.
- Does your company have a way to measure and report on the number of legitimate posts that are inadvertently deleted, labeled, or algorithmically suppressed as part of efforts to remove disinformation?
- Does your company maintain demographic data to assess whether the "collateral damage" of efforts to remove or suppress disinformation has a disproportionate impact on the speech of marginalized groups?
- For Facebook: does your company believe that activist groups opposing racism are the same as white nationalist groups? Why did you ban multiple Black liberation activists and organizations during a purge of accounts ahead of Joe Biden's inauguration? What steps have you taken since to prevent efforts that you claim are intended to address hate and disinformation from silencing anti-racist activists?
- Has your company studied the potential long-term impacts on online freedom of expression and human rights of the collateral damage caused by haphazard attempts to address online disinformation?
- Will your company commit to moderation transparency by providing researchers and advocates with a complete data set of all posts that are removed or algorithmically suppressed as part of efforts to stem the spread of disinformation, so that the potential harm of these efforts can be studied and addressed?