Transparency in Content Moderation

Nicolas Suzor

What content are you allowed to see and share online? The answer is surprisingly complicated. Our new project, funded by the Internet Policy Observatory and including researchers from Queensland University of Technology and the Annenberg School for Communication and Journalism, engages civil society organizations and academic researchers to create a consensus-based priority list of the information users and researchers need to better understand content moderation and to improve advocacy around user rights.

The secret rules of content moderation

Search engines, content hosts, social media platforms, and other tech firms often make decisions to delete content, block links, and suspend accounts. The Terms of Service of these providers give them a great deal of power over how we communicate, but they have few responsibilities to be consistent, fair, or transparent.

Content moderation is a difficult task, and the decisions that platforms make are always going to upset someone. It’s little surprise that platforms prefer to do this work in secret. But as high-profile leaks and investigative journalism, such as the recently published Guardian ‘Facebook Files’, start to expose the contradictions and value judgments built into these systems, they’re becoming more controversial all the time. As Tarleton Gillespie puts it, the secrecy makes this entire process more difficult and more contentious:

The already unwieldy apparatus of content moderation just keeps getting more built out and intricate, laden down with ad hoc distinctions and odd exceptions that somehow must stand in for a coherent, public value system. The glimpse of this apparatus that these documents reveal suggests that it is time for a more substantive, more difficult reconsideration of the entire project — and a reconsideration that is not conducted in secret.

The need for transparency

As the United Nations’ cultural organization UNESCO has pointed out, there are real threats to freedom of expression when private companies are responsible for moderating content.

When governments make decisions about what content is allowed in the public domain, there are often court processes and avenues of appeal. When a social media platform makes such decisions, users are often left in the dark about why their content has been removed (or why their complaint has been ignored).

It turns out that we know very little about the rules that govern what content is permitted on different social media platforms. Organizations like Ranking Digital Rights evaluate how well telecommunications providers and internet companies perform against measures of freedom of expression and privacy. In its 2017 report, RDR found that ‘Company disclosure is inadequate across the board’:

Companies tell us almost nothing about when they remove content or restrict users’ accounts for violating their rules. Through their terms of service and user agreements, companies set their own rules for what types of content or activities are prohibited on their services and platforms, and have their own internal systems and processes for enforcing these rules. Companies need to disclose more information about their enforcement processes and the volume and nature of content being removed.

What does ‘transparency’ mean?

While there have been many calls for greater transparency in content moderation decisions, there is little guidance available for internet intermediaries about the types of information they are expected to produce.

This project sets out to build consensus on a practical set of guidelines for best practices in transparency for content moderation practices.

We do this first by undertaking a review of the most common demands from users themselves, drawing on reports, now collected for a second year, of users’ experiences when their accounts are suspended or their content is deleted. From these complaints, we identify specific measures that intermediaries might be able to take to improve the experiences of users who have either had content removed or requested the removal of another user’s content.

We will then organize a series of workshops at academic conferences and civil society meetings over the next year to produce a prioritized list of specific recommendations for telecommunications providers and internet intermediaries. Because demands for greater transparency have so far been made in general and sometimes conflicting terms, there is little specific guidance about what measures are likely to be most useful.

We’ll be posting more updates here as the project progresses. If you’d like to get involved in this work, please contact Nicolas Suzor at the QUT School of Law.

Respective Roles: Towards an International Treaty for Internet Freedom?

While the idea of a “Magna Carta” for the Internet, protecting online freedoms such as freedom of expression, online assembly, and privacy, isn’t new, the question remains whether the UN Internet Governance Forum (IGF) could adopt binding documents, and whether it should at all. This article offers food for thought on how all IGF stakeholders could collaborate to develop an international legal framework without expanding the scope of the IGF’s mandate. Instead, this nascent idea makes use of existing structures involving a range of stakeholders, including the Dynamic Coalitions, the Freedom Online Coalition, and the Council of Europe.

Internet Governance & International Treaties

At the Opening Session of the last IGF Meeting in November 2015 in João Pessoa, UN Special Rapporteur on freedom of expression David Kaye argued for an international treaty on human rights on the Internet. He said he saw a lack of legal certainty (substantive, jurisdictional, and procedural) that allows many around the world to perceive gaps in the application of human rights law online. He stressed that Article 19 of the Universal Declaration of Human Rights (UDHR) guarantees the right to freedom of expression regardless of frontiers, making it a transboundary right. Kaye stated, “It is a challenge to traditional notions of Government control of territorial space, but it is a provision to be celebrated and put at the very center of Internet Governance.”

Joe Cannataci, UN Special Rapporteur on the right to privacy, said that there was a need to improve existing legal instruments: “In international law, justiciable agreements are those that are included in conventions, legally binding international treaties. Thus, if Internet Governance is to be obtained, it must be treaty based.”

“Ultimately, nothing can substitute international agreement between governments acting on the advice and in the spirit of multistakeholder agreements”, Cannataci added.

Other participants, however, especially among civil society, voiced reservations that an international treaty would endanger a free Internet rather than provide for its protection, especially if such a treaty is ratified by governments that engage in mass surveillance, implement overreaching copyright laws, have poor privacy protection, limit access to an open Internet, or violate other human rights in their jurisdiction.

The multi-stakeholder model of internet governance at “worst may be a front for corporate self-regulation or government policy whitewashing”, warns Jeremy Malcolm of the Electronic Frontier Foundation, for example.

And indeed, countries such as China and Russia, and many from the Middle East, are openly in favor of more government control in Internet governance, lobbying for multilateral or intergovernmental arrangements, administered by the ITU, in which states are the primary actors. In a Joint Communiqué from April 2016, the Foreign Ministers of the Russian Federation, the Republic of India and the People’s Republic of China emphasized “the need to internationalize Internet governance and to enhance in this regard the role of International Telecommunication Union”.

So, with these debates as a backdrop, how could a human rights-centered and multistakeholder-based international treaty on basic human rights on the Internet be formed, and what would it look like?

Click here to read more.



Ethiopia has the second largest population of all African countries, yet its internet penetration rate is only 12 percent. Still, the country has arguably one of the most sophisticated internet regulatory regimes in the region. 2016 Annenberg-Oxford Media Policy Summer Institute participant Halefom Hailu Abraha is a cyber law and policy researcher and deputy director of legal and policy affairs at the Information Network Security Agency (INSA), Ethiopia. In an interview with fellow participant and 2016 CGCS visiting scholar Till Waescher, Halefom discusses the thin line between regulating online content and protecting freedom of expression in a transitional country, the effects of old anti-blasphemy laws on the online realm, and the role of national Internet Service Provider Ethio Telecom.


With over 80 ethnic groups and more than 90 languages, Ethiopia is the most diverse country on the African continent. What are the biggest challenges when it comes to internet content regulation in your country?

The internet is a powerful tool for advancing the causes of democracy and civil liberties. However, it is not without challenges and problems. When it comes to content, the internet provides unlimited access to useful resources, while at the same time it also serves as a platform for harmful or illegal content such as hate speech, sexually explicit content (especially child pornography), defamatory statements, terrorist propaganda, and extremist, radicalizing, and racist materials. While recognizing that the benefits of the internet far outweigh its negative…

Click here to read more.

Myanmar Connected? Internet Governance Capacity Building in Post-Authoritarian Contexts

IPO Affiliate Andrea Calderaro explains the implications of Myanmar’s massive Internet expansion by looking at both infrastructure and legislation.

Almost three years have passed since the government of Myanmar initiated its connectivity building plan, in the context of an unprecedented period of political reform. As detailed in the recently published paper, Digitalizing Myanmar: Connectivity Developments in Political Transitions, Myanmar is currently witnessing an extremely rapid process of constructing connectivity, from both an infrastructural and a policy perspective. Just before the launch of this ambitious process, only 0.98% of the population was connected to the Internet, and 2.3% had a mobile phone, usable only via weak mobile infrastructure limited to the main urban areas (2011 figures).

Moreover, in a country that has until recently demonstrated continued lack of respect for the freedom of expression, the construction of connectivity infrastructure has raised concerns about the respect for human rights, notably the freedom of expression and right to privacy. In this context, it is of particular interest to scrutinize the development of the regulatory and policy framework aimed at securing basic digital rights in the connectivity sector.

Today, a network of mobile towers is widely spread over the country, and newly established international operators have launched new services, with more than 20 million mobile subscribers and mobile internet connectivity for 30% of the population. This tremendous growth within such a limited time frame suggests that Myanmar has undertaken one of the fastest connectivity building processes ever seen worldwide. However, a lot of work has yet to be done from a regulatory and policy perspective.

Click here to read more.
