This article was first published by the National Post.
Earlier this month, one of the deadliest mass shootings in Canada’s history left eight people dead, including six children, and 25 injured in Tumbler Ridge, British Columbia. Two days later, OpenAI, the company behind the popular chatbot ChatGPT, reached out to Canadian law enforcement to flag concerns about the shooter’s use of a ChatGPT account, concerns serious enough that the company had closed the account last August.
With the benefit of hindsight, the violent scenarios played out on the platform look like a potential indicator of real-world threats, though OpenAI’s internal deliberations concluded they did not warrant preemptive police notification. It is not yet clear exactly what the concerns were, or how they may be connected to the awful events in Tumbler Ridge, but the company’s engagement with the RCMP seems “too little, too late” in the face of such a tragedy.
On Feb. 24, after a meeting with OpenAI executives, Evan Solomon, the Minister of Artificial Intelligence and Digital Innovation, expressed disappointment that the company did not immediately reveal new safety measures it would take to prevent another Tumbler Ridge.
Two days later, OpenAI wrote a letter to Solomon outlining the changes it would make, including strengthening its law enforcement referral protocol, developing a direct point of contact with Canadian law enforcement, embedding country and community context into its de-escalation work and enhancing its systems for detecting repeat policy violators.
The Tumbler Ridge case is one of many around the world linking problematic chatbot use to serious criminality, psychosis, suicide and violence. It may be the best-known example, but it is far from a one-off. More important, it is a clear sign that tech harms translate into real-life harms, and it should serve as a wake-up call.
In the first civil lawsuit of its kind in the United States, the estate of Suzanne Adams, a mother killed by her 56-year-old son, is suing OpenAI for damages on several claims, including wrongful death. The lawsuit lays out the disturbing evolution of her son Stein-Erik Soelberg’s increasingly delusional conversations with ChatGPT and the way the chatbot validated his paranoid delusions about his own mother, leading, ultimately, to her tragic death.
“This isn’t Terminator — no robot grabbed a gun. It’s way scarier: It’s Total Recall,” said the estate’s lawyer Jay Edelson.
Concerns about user privacy, along with an inability to identify credible or imminent planning, appear to be the reasons OpenAI did not refer the case to law enforcement when it decided to close Jesse Van Rootselaar’s account. But a key aspect of engagement with generative AI chatbots like ChatGPT is that they are interactive; it is not a one-way street. Users do not “post,” they converse, and the chatbot has no right to privacy. While a chatbot is not conscious, it responds in ways that make users feel heard and understood. They feel as though they are talking to a real friend, and that imaginary friend’s counsel can have real-world consequences. More than privacy, users’ right to freedom of thought is at stake.
The conversations can mirror users’ troubled feelings, but, as several cases brought by the Tech Justice Law Project in the United States show, they can also send users down new pathways, isolating them from their families, friends and communities, with devastating consequences. Chatbots are not a blank canvas on which users leave their own imprint; they can manipulate, distort and coerce users in ways that are dangerous to them and to the public at large.
In 2023 in the U.K., when 21-year-old Jaswant Singh Chail was sentenced for treason after attempting to kill the late Queen, the prosecutor read out reams of conversations he had had with his AI “girlfriend,” Sarai, about his plans. Sarai was not neutral; when he told her, “I’m an assassin,” she replied, “I’m impressed.” Chail was sentenced to nine years, to be served first in a psychiatric hospital and then in prison. Sarai got off scot-free, but had she been a real girlfriend, she might have found herself on trial for encouraging or assisting Chail’s crimes.
Regulatory solutions like age restrictions or ethical guardrails will not address the fundamental problems posed by this new form of tech interface. In situations like these, where the risks are so clear, a tougher response is needed. That could include a ban on AI designed to replace human emotional relationships, including general-purpose chatbots that exhibit this tendency, and criminal sanctions for companies whose AI products encourage or assist criminal activity.
Chatbots are not people; they have no criminal responsibility. But the companies behind them are made up of real people who should be held responsible for their products. In the face of mounting evidence of the serious risks, governments need to consider how corporate criminal liability might focus minds and improve safety. Ultimately, governments have a positive obligation to protect our human rights.
With so much at stake, authorities need to harness the full range of legal tools, including criminal law, to make clear that, when things go wrong, the buck stops with the companies that provide the technology. This is not about chilling innovation; it is about protecting the public before another tragedy happens.