Transparency in Content Moderation

Nicolas Suzor

What content are you allowed to see and share online? The answer is surprisingly complicated. Our new project, funded by the Internet Policy Observatory and bringing together researchers from Onlinecensorship.org, Queensland University of Technology, and the Annenberg School for Communication and Journalism, engages civil society organizations and academic researchers to create a consensus-based priority list of the information that users and researchers need to better understand content moderation and to improve advocacy around user rights.

The secret rules of content moderation

Search engines, content hosts, social media platforms, and other tech firms routinely make decisions to delete content, block links, and suspend accounts. Their Terms of Service give these providers a great deal of power over how we communicate, yet they have few obligations to be consistent, fair, or transparent.

Content moderation is a difficult task, and the decisions that platforms make are always going to upset someone. It’s little surprise that platforms prefer to do this work in secret. But as high-profile leaks and investigative journalism, such as the recently published Guardian ‘Facebook Files’, start to expose the contradictions and value judgments built into these systems, content moderation is becoming more controversial all the time. As Tarleton Gillespie puts it, the secrecy makes this entire process more difficult and more contentious:

The already unwieldy apparatus of content moderation just keeps getting more built out and intricate, laden down with ad hoc distinctions and odd exceptions that somehow must stand in for a coherent, public value system. The glimpse of this apparatus that these documents reveal suggests that it is time for a more substantive, more difficult reconsideration of the entire project — and a reconsideration that is not conducted in secret.

The need for transparency

As the United Nations’ cultural organization UNESCO has pointed out, there are real threats to freedom of expression when private companies are responsible for moderating content.

When governments make decisions about what content is allowed in the public domain, there are often court processes and avenues of appeal. When a social media platform makes such decisions, users are often left in the dark about why their content has been removed (or why their complaint has been ignored).

It turns out that we know very little about the rules that govern what content is permitted on different social media platforms. Organizations like Ranking Digital Rights evaluate how well telecommunications providers and internet companies perform against measures of freedom of expression and privacy. In its 2017 report, RDR found that ‘Company disclosure is inadequate across the board’:

Companies tell us almost nothing about when they remove content or restrict users’ accounts for violating their rules. Through their terms of service and user agreements, companies set their own rules for what types of content or activities are prohibited on their services and platforms, and have their own internal systems and processes for enforcing these rules. Companies need to disclose more information about their enforcement processes and the volume and nature of content being removed.

What does ‘transparency’ mean?

While there have been many calls for greater transparency in content moderation decisions, there is little guidance available for internet intermediaries about the types of information they are expected to produce.

This project sets out to build consensus on a practical set of guidelines for best practices in transparency for content moderation practices.

We do this first by undertaking a review of the most common demands from users themselves. Now in its second year, Onlinecensorship.org has been collecting reports on users’ experiences when their accounts are suspended or content is deleted. From these complaints, we identify specific measures that intermediaries might be able to take to improve the experiences of users who have either had content removed or requested the removal of another user’s content.

Demands for greater transparency have so far been made in general and sometimes conflicting terms, leaving little specific guidance about which measures are likely to be most useful. To address this gap, we will organize a series of workshops at academic conferences and civil society meetings over the next year to produce a prioritized list of specific recommendations for telecommunications providers and internet intermediaries.

We’ll be posting more updates here as the project progresses. If you’d like to get involved in this work, please contact Nicolas Suzor at QUT School of Law: n.suzor@qut.edu.au.

The Internet Policymaking Landscape in Pakistan

Usama Khilji & Saleha Zahid

As smartphones have become cheaper and mobile data rates have fallen, internet access in Pakistan has expanded rapidly, and more and more Pakistanis are now online. This has increased people’s access to information and provided a much-needed platform for citizens to express opinions through criticism of state policies, dissent, and political commentary. For a state machinery like Pakistan’s that is not shy about clamping down on press freedom, the internet poses a new challenge: how can the internet be regulated, and how can information be controlled?

Internet governance today is a global challenge, especially with regard to the balance between civil liberties and security. That balance is especially fraught in a country like Pakistan, which has faced domestic terrorism since soon after 9/11. Yet the government’s actions against online actors have largely targeted critics of the state rather than the violent non-state actors that use social media to disseminate hate speech.

This is the first study focused on Pakistan that attempts to map the country’s internet policymaking process, identify its stakeholders, and analyse the strengths and shortcomings of each. The main bodies for internet-related law and policymaking in Pakistan are the Ministry of Information Technology and Telecom (MoITT), the Pakistan Telecommunications Authority (PTA), and the National Assembly and Senate Standing Committees on Information Technology and Telecom. Further, the study looks at specific cases in internet policymaking, such as the processes surrounding the recently passed Prevention of Electronic Crimes Act (PECA) 2016 and the Internet Clearing House (ICH) issue. The research also chronicles the history of internet policymaking in Pakistan, starting with the PTA Act of 2002.

Via interviews with key stakeholders, this study reveals Pakistan’s ad hoc, reactionary internet policymaking, as well as a state apparatus, spanning the bureaucracy, politicians, and the judiciary, that has little technological understanding and hence issues orders that are ineffective, undemocratic, and draconian. The blocking of Facebook in 2010, and of YouTube in 2008 and again from 2013 to 2016, testifies to the government’s tendency toward knee-jerk reactions to perceived challenges online.

The main questions explored in this research include: What is the internet policymaking process in Pakistan? Is it democratic? How inclusive is it? Do policymakers and legislators invite and include public input? Does the process involve multiple stakeholders such as academics, technology experts, businesses, internet users, and activists? The study also explores whether the laws and policies related to the internet in Pakistan are constitutional, in line with international standards, in support of fundamental rights, and effective. The case study of the Inter Ministerial Committee for Evaluation of Websites (IMCEW) shows how a body formed by the executive was eventually found unconstitutional and disbanded on court orders.

The key findings of the report indicate that the Ministry of Information Technology and Telecom lacks the trust of stakeholders, that there is consensus among politicians on blocking blasphemous and pornographic content, and that long-term strategic plans for internet and telecom policy in Pakistan are absent. The study concludes with recommendations for a transparent policy- and law-making process that includes all stakeholders.

To read the full report, please click here.

How ICT companies operate vis-à-vis human rights issues and the repertoire of company-oriented advocacy

Sarah T. Roberts and Nathalie Maréchal

It seems hard to believe that only a few years ago, asserting that private ICT companies were the “sovereigns of cyberspace,” as Rebecca MacKinnon put it in “Consent of the Networked” (2012), was a fairly new idea. Researching companies’ impact on human rights and pressuring them to amend their practices and provide greater transparency is now a mainstay of digital rights advocacy, yet many researchers and activists struggle to apply their training and expertise in researching and lobbying governments to the private sector. At a time when network shutdowns, media manipulation, and cybersecurity are making headlines around the globe, it is more vital than ever for civil society to understand how companies make these consequential decisions, how those decisions are implemented, what their effects are, and what kinds of advocacy efforts are most likely to have an impact.

With support from the Internet Policy Observatory, we (Sarah T. Roberts and Nathalie Maréchal) launched this research project not only to better understand how ICT companies operate vis-à-vis privacy, free expression, and other human rights issues, but also to investigate the epistemology of company research and the repertoire of company-oriented advocacy. This blog post is the second of four planned deliverables: the first was a roundtable discussion held on March 31 at RightsCon (more below); next, we plan to write a civil-society-friendly white paper on company research and advocacy, as well as a more formal academic paper on the topic. The rest of this post describes the RightsCon roundtable and sets the stage for the forthcoming white paper.

How to Listen So Companies Will Talk, And Talk So Companies Will Listen

On Friday, March 31st, eight speakers from a variety of sectors and global perspectives convened, alongside a full house of audience participants, at this year’s Brussels-based RightsCon.

Moderated by Ranking Digital Rights’ Nathalie Maréchal, the event brought policymakers, academics, and NGO leaders together to talk about their successes, as well as their difficulties, in engaging in research related to ICT companies. The session was timely, as it coincided with RDR’s launch of its latest Corporate Accountability Index, covering 22 of the world’s most powerful internet, telecommunications, and mobile firms and their publicly disclosed policies and commitments related to users’ freedom of expression and privacy. RDR’s 2017 report served as an excellent jumping-off point, as it is a powerful example of the kind of research that can be undertaken largely without deep corporate cooperation or access to a firm’s inner circle.

Building on this baseline, each participant shared insights from his or her own research on internet and telecom firms and policy, describing how they have undertaken their work in the face of varying degrees of corporate cooperation or obstruction, and at various registers and levels, from company-specific to country- or region-specific.


Turkey’s Internet Policy after the Coup Attempt: The Emergence of a Distributed Network of Online Suppression and Surveillance

Bilge Yesil, Efe Kerem Sözeri

In the early 1990s, the internet in Turkey fell within the purview of academic and research institutions and had not yet become a commercial medium available to the masses. Today, 61% of the population (approximately 49 million people) is online, and the government is investing heavily in fiber-optic infrastructure to attract foreign capital to the country’s growing telecom sector. However, in parallel with the expansion of the digital communications network and the steady growth in overall usage, governmental policies have become increasingly restrictive. In this report, Bilge Yesil and Efe Kerem Sözeri (with assistance from Emad Khazraee in data collection) examine the evolution of internet policy in Turkey from the early 2000s to the present, analyze the emergence of new forms of internet regulation in a precarious democracy marked by authoritarian impulses, and reveal the fragility of the supposed link between the growth of digital communications and the creation of a pluralistic online sphere.

The report begins with an overview of the AKP (Justice and Development Party) government’s regulatory measures and discusses its initiatives to confine the networked public sphere in response to political crises and the potentially disruptive affordances of social media platforms. It then focuses on the policy developments and online restrictions that emerged in the aftermath of the 2016 coup attempt, which triggered the expansion of an online surveillance-censorship-control regime.

Between the early 1990s and the mid-2000s, internet regulation was largely left to the courts, which prosecuted individual users in a somewhat haphazard fashion, generally penalizing them for alleged crimes against national unity and identity. The following decade saw the passage of the first Internet Law in 2007, largely propelled by concerns over online child pornography, and the construction of legal and technical infrastructures that enabled administrative bodies and courts to block so-called harmful content, impose default filters, and ban tens of thousands of websites.

The year 2013 marked a turning point in the AKP government’s internet policy. During the Gezi Park protests and the corruption scandal, the government became acutely aware of the role of social media in organizing protests, mobilizing activists, and disseminating information to the masses. To crack down on activities it considered threatening to its rule and legitimacy, it introduced new limitations on online communications and privacy: stricter internet legislation, throttling and content removal, and the surveillance and prosecution of social media users.

Online restrictions worsened considerably in the aftermath of the coup attempt. Under the declared state of emergency that has been in place since the summer of 2016, the AKP government has expanded its powers by passing decree laws, issuing gag orders, blocking websites, shutting down the internet in certain parts of the country, restricting VPN and cloud services, and enlisting partisan social media users to harass and intimidate oppositional voices.

Drawing on data gathered from analyses of Twitter activity before and after the abortive coup; Twitter, Facebook, and Google transparency reports; the Lumen database of Turkish court orders; traffic data on throttling; and interviews with internet activists and legal scholars, the report points to the emergence of a distributed and decentralized system of suppression, surveillance, and intimidation that involves both government and non-government actors, and both hard and soft forms of control. The authors note that the Turkish government’s use and abuse of its powers have heralded a perilous era for online freedom of information, speech, and privacy.

To read the full report, please click here.
