Andrew Quodling

As part of our project with Onlinecensorship.org, our research team has been analysing the user reports submitted through the Onlinecensorship.org website. In these reports, users detail their experiences of content moderation practices and policies on a variety of social media platforms and discuss the impacts that the moderation decisions of internet companies have on their lives.  We’ve just completed our first pass of analysis for this data — familiarising ourselves with the issues described by users and exploring themes that are apparent amongst user responses.

As Nic mentioned in our previous blog post about this project, we know very little about the rules that govern what content is permitted on different social media platforms. One of the issues we’ve found particularly interesting in the reports submitted to Onlinecensorship.org is the way in which users seem to create their own folk knowledge around bans, censorship and other moderation practices on internet platforms. In doing so, users effectively create their own narrative frameworks for understanding how platforms operate, in the absence of transparent decision-making processes and compelling explanations from platform operators.

One user even reported that although their posted content had been censored, they felt that the act of moderation had validated their own conclusions about the platform:

“I’m actually really glad that they suspended me because it validates the work I’m doing. The cover-up proves conspiracy. So I wear my Twitter suspensions like badges of honor.”

This knowledge-creation behaviour seems to manifest most often when a user attempts to explain how or why an internet platform seems to be overtly policing some views in relation to domestic politics or broader sociopolitical, geopolitical, and commercial issues. This search for alternative rationales for moderation may point towards a broader lack of trust in internet platforms. We suspect that a lack of transparency from the moderators and operators of internet platforms, combined with user frustrations with seemingly ad-hoc moderation processes, has served to exacerbate problems like this.

Alongside this creation of folk knowledge around moderation practices, we’ve also observed users expressing concerns and frustrations regarding their freedom of expression, and the ethical and legal legitimacy of content moderation decisions. Many of these users’ questions about content moderation practices and their rights focus on the inconsistencies that users have observed and experienced during these processes.

We’ve noted with particular interest the way users discuss the effects of moderation decisions on their behaviour online and on their daily social, political, and economic lives. Some users report receiving temporary or permanent bans from platforms in conjunction with content moderation takedowns. Given that many internet platforms like Facebook, Twitter and YouTube hold special importance in the lives of their users, users see bans as particularly significant penalties. Many responses in the surveys collected by Onlinecensorship.org highlight the frustrations that people express when faced with a temporary ban — as banned users can no longer participate in civic and social discourse, communicate with friends or family, or even communicate with customers and prospective business contacts.

Systematically analysing these survey responses will help us to understand and identify the needs and desires of users in relation to content moderation and issues of transparency. In the coming weeks we’ll work through the survey data for this project comprehensively: coding each response, and identifying emergent themes in the data. This will help inform the identification of specific measures for internet platforms to improve the experiences of their users. We’ll be posting more updates here as the project progresses.


Andrew Quodling is a Postdoctoral researcher associated with the Faculty of Law Research and the Digital Media Research Centre at Queensland University of Technology (QUT). During his PhD research, he investigated governance and political conflict in online spaces, with particular interests in the strategies used by platforms governing these spaces, and the tactics deployed by dissident users in conflict with social media platforms.