Freedom of Thought on Social Media: Supporting User Navigation of False Content

CIGI Paper No. 336

October 27, 2025

Human interaction mediated by social media introduced a different component to the supply and consumption of information: the tailoring of content to individuals via algorithmic curation. This personalization, made possible by data profiling, shapes the distribution and effects of false content. A related risk concerns the potential impact of misbelief on users, specifically with respect to their freedom of thought. The purpose of this paper is to explain this risk and to outline discussion points on how it may be mitigated by technical interventions designed with reference to the human right to freedom of thought, while highlighting the implementation challenges and limitations of each intervention.

As part of efforts to mitigate this risk of misbelief and its connected effects, leveraging the relationship between online expression and thought holds promise. This paper explores the potential of doing so by deploying technical measures that moderate the consumption of false content (in contrast to those focused on its supply). The measures considered here aim to protect against misbelief leading to regressive thought while respecting user agency and choice. They consist of labelling false content, watermarking content generated by computer applications, presenting alternative informational sources via interstitial pop-ups linked to false content, and collectivizing user reporting. Each is intended to encourage the real-time exercise of the human right to freedom of thought when individuals encounter and engage with false content, aiming to disrupt quick, reactive thinking and prompt slower, deliberative thinking.

About the Author

Richard Mackenzie-Gray Scott works across the areas of human rights, digital technologies, constitutional studies and international law and relations, comprising research, teaching, policy engagement and legal practice.