Can Free Thinking Survive the Onslaught of False Content?

The hidden impact of social media as false content shapes minds, narrows thinking and threatens the freedom to reason independently.

October 28, 2025
Anyone with a device connected to the internet can sign up to join in the rigmarole of scroll, click, repeat. (Hamad I Mohammed/REUTERS)

Social media use has become a staple in many people’s diet of information consumption. Generally, users connect to platforms that they have been led to favour, and with this connection begins their exposure to a torrent of content. Everything from memes to media coverage to the celebration of milestones gets conveniently packaged into a customized feed. Anyone with a device connected to the internet can sign up to join in the rigmarole of scroll, click, repeat. The appeal of such a tool lies in societies structured around the appearances of busyness, productivity and success. It promises to be a communication and information outlet for the time-poor, all while providing never-ending entertainment. Yet a key difference with this tool is that the line between it and its users has blurred.

Social media platforms are tools indeed, not just for their users but also for the companies that operate them. These companies and their business plans aim to maximize user engagement to extract and exploit related data and increase firm revenue. Arrays of digital designs are deployed on these platforms to keep individuals encaged for this purpose. Whether out of choice or necessity, saying that individuals use social media is somewhat misleading, as users and their data are also used by these platforms.

Throughout this process, content containing false information from various sources is supplied and consumed. This is not automatically a problem, because even if false content is available on a given platform, it may not necessarily be consumed. Even if it is consumed, it does not mean a user will believe it, and even if they do, their behaviour will not necessarily change to correspond with that misbelief. The type of false information also matters, since certain misbeliefs do not necessarily result in harm, and, in fact, can actually lead to positive outcomes. One example is children believing in Santa Claus, the Easter Bunny or the Tooth Fairy. Kids have fun immersing themselves in the related storytelling, and their parents enjoy the resulting behavioural changes. As such, even if the link between false online content and harm should be acknowledged, it is also important to recognize that it is not clear-cut.

When considering examples of harm, attention tends to fall on examples that are perceptible and measurable, such as storming state institutions or swallowing poison to combat a disease. Although it is difficult to establish a causal link between social media use and harm, whether individual or societal, assumptions readily arise in debates about the damage being done across societies due to false content on social media. But there is another harm that is subtler and considerably more sinister.


Understanding Regressive Thought

Misbelief does not necessarily lead to harm, and when harm does occur, the consequence of consuming and then believing false content is often limited to a single instance of individual or collective behaviour. But harm can also be invisible, occurring in the mind. Content personalization on social media takes advantage of human cognition to shape thoughts and beliefs. From this perspective, false content carries the risk that misbelief will transform into what I call “regressive thought.” This term refers to the process encompassing the onset, development and consolidation of misbelief in the user’s mind, during which the thoughts that form become regressive. An example would be someone with no previous history of vaccine hesitancy gradually becoming more anti-vaccine and ultimately refusing to accept any information or thoughts that do not align with the misbelief that vaccines should be avoided.

Individuals can become more narrow-minded about a particular topic when their thinking is anchored in a misbelief sparked by false content on social media. In such cases, mental activity is constrained by parameters shaped less by the individual user than by a complex system comprising factors such as algorithmic curation, content overload, interface design and peer influence. Should users increasingly engage with false content online, they face more instances that might trigger regressive thought.

Exposure to falsehoods can increase the inclination to believe them, a problem exacerbated by the reality that self-directed attention is under assault when people connect to social media. That means users’ ability to focus on things they deliberately choose may diminish over time as they spend more hours connected to a particular platform. The “illusory truth effect” shows that the perceived truthfulness of information can increase when it is encountered more frequently. There is also the issue of false equivalence, which in this context occurs when users are exposed to vast amounts of content appearing in rapid succession, to the point where they perceive true and false content as equally credible. More exposure to false content, therefore, risks stimulating more misbelief, which can then create a feedback loop that misleads users into a state of regressive thought on a specific topic.

This mental harm raises red flags when considering freedom of thought, understood as the right to form, hold and change or develop thoughts — all while keeping them private. A state of regressive thought inhibits these freedoms because it undermines the human capacity to reason, to make rational decisions and to remain open toward new information. Regressive thought binds thinking to a misbelief, closing off the mind to other avenues of thought and corrupting agency and autonomy in the process. Mindlessness can ensue, where users are in thrall to false content that influences them, which might engender further harm through behavioural changes.

Yet none of these harms need arise. Appropriate regulatory responses that support social media users can mitigate the risk of regressive thought, and implementing protections to freedom of thought is essential to such efforts.

Protecting Freedom of Thought

The relationship between online expression and thought holds considerable promise in protecting freedom of thought. One way to leverage this relationship is to recognize that thought is, in part, a social process that requires exposure to different views, interactions and guidance. Deploying technical measures on social media platforms can facilitate this exposure. While labelling false content is a well-known — though perhaps not the best — example of this approach, there are many others. Content generated by computer applications, for instance, can be watermarked, embedding visible information into online content to inform users that it does not originate from a human. Another is presenting alternative sources of information alongside false content, such as in-platform pop-ups that provide trusted and accurate information tailored to the specific user. A third is collectivizing user reporting, whereby users can report and comment on content (and vote on those reports) to annotate it, producing a collectively agreed-upon understanding that adds perspective to potentially false content.

A key factor in empowering social media users is shifting their passive, unconscious consumption of information toward more deliberate, active and conscious online engagement. The aim of utilizing these measures is to promote this shift and help mitigate the risks connected to misbelief, guard against regressive thought and enable freedom of thought when people connect to social media. It involves supporting users in navigating false content while respecting their agency and choice to access and assess information. Striking this balance is necessary to ensure that the moderation of false content aligns with human rights and collective interests.

Many more opportunities to protect freedom of thought also exist beyond technical approaches. Realizing them requires looking beyond concerns regarding the supply and consumption of false social media content and focusing on the structural causes and underlying conditions that limit the exercise of this human right. This approach includes educational efforts, concrete policy and community cooperation to address matters such as digital addiction, smartphone dependency, mental well-being, public distrust and malicious online actors.

Another element to think about is rest, particularly the right to it. Imagine a world where everyone has plenty of opportunities for physical and mental rest. Social media users would have the time and space to think freely, without being boxed into mindsets that limit their freedom, or distracted by engineered overload. In a world like that, freedom of thought might not need to fight so hard for survival against false content because it would be nourished by conditions that allow it to flourish. The hard question, then, is not only how to manage false content on social media, but whether we can create societies where proper rest gives everyone the capacity to resist it.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Richard Mackenzie-Gray Scott works across the areas of human rights, digital technologies, constitutional studies and international law and relations, comprising research, teaching, policy engagement and legal practice.