A Call to Address Anthropomorphic AI Threats to Freedom of Thought

Policy Brief No. 206

August 28, 2025

Bonding chatbots represent a new frontier in digital technology — one that merges affective computing with machine learning to simulate emotionally significant human relationships. While these tools may offer support to individuals grappling with loneliness or emotional distress, they also present unprecedented ethical challenges.

At the heart of this issue lies the right to freedom of thought. The immersive and affective nature of interactions with bonding chatbots can manipulate users’ mental states in subtle but powerful ways, eroding their capacity for autonomous cognitive and emotional development. Emotional attachment to artificial entities can reconfigure users’ priorities, habits and evaluative frameworks, often without their full awareness or informed consent.

Given their potential for both benefit and harm, bonding chatbots should be understood as dual-use technologies — tools that can support or undermine human flourishing depending on how they are designed, marketed and regulated. A medical model of regulation, in which these systems are treated like therapeutic interventions that must demonstrate safety and efficacy before and after reaching users, provides a feasible path forward. It acknowledges the psychological risks involved without prematurely foreclosing innovation.

About the Author

Abel Wajnerman Paz is a professor of neuroethics at the Institute of Applied Ethics of the Pontificia Universidad Católica de Chile and a researcher at the National Center for Artificial Intelligence (CENIA), Basal Center FB210017 funded by the Chilean National Agency for Research and Development (ANID).