Social media platforms are increasingly accused of shaping public debate and engineering users’ behavior in ways that may undermine the democratic process. To invigorate a much-needed multistakeholder dialogue on corrective measures against the spread of false information, this project undertook a truncated multistakeholder consultation, engaging experts from academia, civil society, government, and industry to assess diverging perspectives on the institutional proposals, legislative responses, and self-regulation initiatives that have sprung up around the world. It also asks what new challenges platform moderation and related “fake news” issues pose to what might be called the “procedural fitness” of the current multistakeholder internet governance system. Finally, it offers recommendations for architectural changes that could promote constructive and inclusive debate on the topic.
As the culmination of one in the IPO’s series of “research for impact” projects, this declaration builds on research and extensive consultation on best practices for social media platforms to provide transparency and accountability in their content moderation practices. The project engaged civil society organizations, industry representatives, policymakers, and academic researchers to produce a priority list of the information that users and researchers need to better understand commercial content moderation on social media platforms.