Transparency in Content Moderation

Nicolas Suzor

What content are you allowed to see and share online? The answer is surprisingly complicated. Our new project, funded by the Internet Policy Observatory, brings together researchers from OnlineCensorship.org, Queensland University of Technology, and the Annenberg School for Communication and Journalism. It works to engage civil society organizations and academic researchers to create a consensus-based priority list of the information users and researchers need to better understand content moderation and to improve advocacy efforts around user rights.

The secret rules of content moderation

Search engines, content hosts, social media platforms, and other tech firms often make decisions to delete content, block links, and suspend accounts. The Terms of Service of these providers give them a great deal of power over how we communicate, but they have few responsibilities to be consistent, fair, or transparent.

Content moderation is a difficult task, and the decisions that platforms make are always going to upset someone. It’s little surprise that platforms prefer to do this work in secret. But as high-profile leaks and investigative journalism, such as the recently published Guardian ‘Facebook Files’, start to expose the contradictions and value judgments built into these systems, content moderation is becoming more controversial all the time. As Tarleton Gillespie puts it, the secrecy makes this entire process more difficult and more contentious:

The already unwieldy apparatus of content moderation just keeps getting more built out and intricate, laden down with ad hoc distinctions and odd exceptions that somehow must stand in for a coherent, public value system. The glimpse of this apparatus that these documents reveal suggests that it is time for a more substantive, more difficult reconsideration of the entire project — and a reconsideration that is not conducted in secret.

The need for transparency

As the United Nations’ cultural organization UNESCO has pointed out, there are real threats to freedom of expression when private companies are responsible for moderating content.

When governments make decisions about what content is allowed in the public domain, there are often court processes and avenues of appeal. When a social media platform makes such decisions, users are often left in the dark about why their content has been removed (or why their complaint has been ignored).

It turns out that we know very little about the rules that govern what content is permitted on different social media platforms. Organizations like Ranking Digital Rights (RDR) evaluate how well telecommunications providers and internet companies perform against measures of freedom of expression and privacy. In its 2017 report, RDR found that ‘Company disclosure is inadequate across the board’:

Companies tell us almost nothing about when they remove content or restrict users’ accounts for violating their rules. Through their terms of service and user agreements, companies set their own rules for what types of content or activities are prohibited on their services and platforms, and have their own internal systems and processes for enforcing these rules. Companies need to disclose more information about their enforcement processes and the volume and nature of content being removed.

What does ‘transparency’ mean?

While there have been many calls for greater transparency in content moderation decisions, there is little guidance available for internet intermediaries about the types of information they are expected to produce.

This project sets out to build consensus on a practical set of guidelines for transparency best practices in content moderation.

We begin with a review of the most common demands from users themselves. Now in its second year, OnlineCensorship.org has been collecting reports on users’ experiences when their accounts are suspended or content is deleted. From these reports, we identify specific measures that intermediaries could take to improve the experiences of users who have either had content removed or requested the removal of another user’s content.

Because demands for greater transparency have so far been made in general and sometimes conflicting terms, there is little specific guidance about which measures are likely to be most useful. To address this, we will organize a series of workshops at academic conferences and civil society meetings over the next year to produce a prioritized list of specific recommendations for telecommunications providers and internet intermediaries.

We’ll be posting more updates here as the project progresses. If you’d like to get involved in this work, please contact Nicolas Suzor at QUT School of Law: n.suzor@qut.edu.au.