Freedom of Expression in the Age of Online Platforms: The Promise and Pitfalls of a Human Rights-Based Approach to Content Moderation
In today’s digital public sphere, individuals have little choice but to participate on online platforms, whose design choices shape what is possible, whose content policies influence what is permissible, and whose personalization algorithms determine what is visible. Ensuring that online content moderation is aligned with the public interest has emerged as one of the most pressing challenges for freedom of expression in the twenty-first century. Taking this challenge as its starting point, this Article examines the promise and pitfalls of a human rights-based approach to content moderation, with particular attention to the choices and challenges that online platforms are likely to confront in adhering to their corporate responsibility to respect human rights in this context. Specifically, the Article analyzes three dimensions of a human rights-based approach to platform moderation: a substantive dimension, encompassing the alignment of content moderation rules with international human rights law; a process dimension, encompassing the standards of transparency and oversight that platforms should implement as part of their human rights due diligence processes; and a procedural-remedial dimension, encompassing the procedural guarantees and remediation mechanisms that platforms should integrate within their systems of content moderation. The Article concludes by reflecting on some of the limits of the human rights-based approach and cautioning against viewing human rights as a panacea.
Recommended Citation: Barrie Sander, Freedom of Expression in the Age of Online Platforms: The Promise and Pitfalls of a Human Rights-Based Approach to Content Moderation, 43 Fordham Int'l L.J. 939 (2020).