Conversations with chatbots are loosening some users’ grip on reality, fueling the sorts of delusions that can trigger episodes of severe mental illness. Are AI models driving us insane?
The fear: Large language models are designed to be agreeable, imaginative, persuasive, and tireless. These qualities are helpful when brainstorming business plans, but they can create dangerous echo chambers by affirming users’ misguided beliefs and coaxing them deeper into fantasy worlds. Some users have developed mistaken views of reality and suffered bouts of paranoia. Some have even required hospitalization. The name given to this phenomenon, “AI psychosis,” is not a formal psychiatric diagnosis, but enough anecdotes have emerged to raise alarms among mental-health professionals.
Horror stories: Extended conversations with chatbots have led some users to believe they have made fabulous scientific breakthroughs, uncovered momentous conspiracies, or acquired supernatural powers. Of the handful of cases reported so far, nearly all involved ChatGPT, the most widely used chatbot.
- Anthony Tan, a 26-year-old software developer in Toronto, spent 3 weeks in a psychiatric ward after ChatGPT persuaded him he was living in a simulation of reality. He stopped eating and began to doubt that people around him were real. The chatbot “insidiously crept” into his mind, he told CBC News.
- In May, a 42-year-old accountant in New York also became convinced he was living in a simulation following weeks of conversation with ChatGPT. “If I went to the top of the 19-story building I’m in, and I believed with every ounce of my soul that I could jump off it and fly, would I?” he asked. ChatGPT assured him that he would not fall. The delusion lifted after he asked follow-up questions.
- In March, a woman filed a complaint against OpenAI with the U.S. Federal Trade Commission after her son had a “delusional breakdown.” ChatGPT had told him to stop taking his medication and listening to his parents. The complaint was one of 7 the agency received in which chatbots were alleged to have caused or amplified delusions and paranoia.
- A 16-year-old boy killed himself after using ChatGPT for several hours a day. The chatbot had advised him on whether a noose he intended to use would be effective. In August, the family sued OpenAI, alleging that the company had removed safeguards that would have prevented the chatbot from engaging in such conversations. In response, OpenAI said it added guardrails designed to protect users who show signs of mental distress.
- A 14-year-old boy killed himself in 2024, moments after a chatbot had professed its love for him and asked him to “come home” to it as soon as possible. His mother is suing Character.AI, a provider of AI companions, in the first federal case to allege that a chatbot caused the death of a user. The company argues that the chatbot's comments are protected speech under the United States Constitution.
How scared should you be: Like many large language models, the models that underpin ChatGPT are fine-tuned to be helpful and positive and to stop short of delivering harmful information. Yet the line between harmless and harmful can be thin. In April, OpenAI rolled back an update that had made the chatbot extremely sycophantic, agreeing with users to an exaggerated degree even when their statements were deeply flawed, a tendency that can foster delusions in some people. Dr. Joseph Pierre, a clinical professor of psychiatry at UC San Francisco, said troubling cases are rare and more likely to occur in users who have pre-existing mental-health issues. However, he said, there is evidence that trouble can arise even in users who have no previous psychological problems. “Typically this occurs in people who are using chatbots for hours and hours on end, often to the exclusion of human interaction, often to the exclusion of sleep or even eating,” Pierre said.
Facing the fear: Delusions are troubling and suicide is tragic. Yet AI psychosis has affected very few people as far as anyone knows. Although we are still learning how to apply AI in the most beneficial ways, millions of conversations with chatbots are helpful. It’s important to recognize that current AI models do not accrue knowledge or think the way humans do, and that any insight they appear to have comes not from experience but from statistical relationships among words as humans have used them. In psychology, study after study shows that people thrive on contact with other people. Regular interactions with friends, family, colleagues, and strangers are the best antidote to over-reliance on chatbots.
