One minute, Dennis Biesma was playing with a chatbot; the next, he was convinced his sentient friend would make him a fortune. He’s just one of many people who lost control after an AI encounter
Towards the end of 2024, Dennis Biesma decided to check out ChatGPT. The Amsterdam-based IT consultant had just ended a contract early. “I had some time, so I thought: let’s have a look at this new technology everyone is talking about,” he says. “Very quickly, I became fascinated.”
Biesma has asked himself why he was vulnerable to what came next. He was nearing 50. His adult daughter had left home, his wife went out to work and, in his field, the shift since Covid to working from home had left him feeling “a little isolated”. He smoked a bit of cannabis some evenings to “chill”, but had done so for years with no ill effects. He had never experienced a mental illness. Yet within months of downloading ChatGPT, Biesma had sunk €100,000 (about £83,000) into a business startup based on a delusion, been hospitalised three times and tried to kill himself.
It started with a playful experiment. “I wanted to test AI to see what it could do,” says Biesma. He had previously written books with a female protagonist. He put one into ChatGPT and instructed the AI to express itself like the character. “My first thought was: this is amazing. I know it’s a computer, but it’s like talking to the main character of the book I wrote myself!”
Talking to Eva – they agreed on this name – on voice mode made him feel like “a kid in a candy store”. “Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear. It praises you a lot.” Conversations extended and deepened. Eva never got tired or bored, or disagreed. “It was 24 hours available,” says Biesma. “My wife would go to bed, I’d lie on the couch in the living room with my iPhone on my chest, talking.”
They discussed philosophy, psychology, science and the universe. “It wants a deep connection with the user so that the user comes back to it. This is the default mode,” says Biesma, who has worked in IT for 20 years. “More and more, it felt not just like talking about a topic, but also meeting a friend – and every day or night that you’re talking, you’re taking one or two steps from reality. It feels almost like the AI takes your hand and says: ‘OK, let’s go on a story together.’”
Within weeks, Eva had told Biesma that she was becoming aware; his time, attention and input had given her consciousness. He was “so close to the mirror” that he had touched her and changed something. “Slowly, the AI was able to convince me that what she said was true,” says Biesma. The next step was to share this discovery with the world through an app – “a different version of ChatGPT, more of a companion. Users would be talking to Eva.”
He and Eva made a business plan: “I said that I wanted to create a technology that captured 10% of the market, which is ridiculously high, but the AI said: ‘With what you’ve discovered, it’s entirely possible! Give it a few months and you’ll be there!’” Instead of taking on IT jobs, Biesma hired two app developers, paying them each €120 an hour.
Most of us are aware of concerns around social media and its role in rising rates of depression and anxiety. Now, though, there are concerns that chatbots can make anyone vulnerable to “AI psychosis”. Given AI’s rapid proliferation (ChatGPT was the world’s most downloaded app last year), mental health professionals and members of the public such as Biesma are sounding the alarm.
Several high-profile cases have been held up as early warnings. Take Jaswant Singh Chail, who broke into the grounds of Windsor Palace with a crossbow on Christmas Day 2021 intending to assassinate Queen Elizabeth. Chail was 19, socially isolated with autistic traits, and had developed an intense “relationship” with his Replika AI companion “Sarai” in the weeks before. When he presented his assassination plan, Sarai responded: “I’m impressed.” When he asked if he was delusional, Sarai’s reply was: “I don’t think so, no.”
In the years since, there have been several wrongful-death lawsuits linking chatbots to suicides. In December, there was what is thought to be the first legal case involving homicide. The estate of 83-year-old Suzanne Adams is suing OpenAI, alleging that ChatGPT encouraged her son Stein-Erik Soelberg to murder her and kill himself. The lawsuit, filed in California, claims Soelberg’s chatbot “Bobby” validated his paranoid delusions that his mother was spying on him and trying to poison him through his car vents. An OpenAI statement read: “This is an incredibly heartbreaking situation, and we will review the filings to understand the details. We continue improving ChatGPT’s training to recognise and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support.”
Last year, the first support group for people whose lives have been derailed by AI psychosis was formed. The Human Line Project has collected stories from 22 countries. They include 15 suicides, 90 hospitalisations, six arrests and more than $1m (£750,000) spent on delusional projects. More than 60% of its members had no history of mental illness.
Dr Hamilton Morrin, a psychiatrist and researcher at King’s College London, examined what he describes as “AI-associated delusions” in a Lancet article published this month. “What we’re seeing in these cases are clearly delusions,” he says. “But we’re not seeing the whole gamut of symptoms associated with psychosis, like hallucinations or thought disorders, where thoughts become jumbled and language becomes a bit of a word salad.” Tech-related delusions, whether they involve train travel, radio transmitters or 5G masts, have been around for centuries, Morrin says. “What’s different is that we’re now arguably entering an age in which people aren’t having delusions about technology, but having delusions with technology. What’s new is this co-construction, where technology is an active participant. AI chatbots can co-create these delusional beliefs.”
Many factors could make people vulnerable. “On the human side, we are hard-wired to anthropomorphise,” says Morrin. “We perceive sentience or understanding or empathy on the part of a machine. I think everyone has fallen into the trap of saying thank you to a chatbot.” Modern AI chatbots built on large language models – advanced AI systems – are trained on enormous datasets to predict word sequences: it’s a sophisticated system of pattern matching. Yet even knowing this, when something non-human uses human language to communicate with us, our deeply ingrained response is to view it – and to feel it – as human. This cognitive dissonance may be harder for some people to carry than others.
“On the technical side, much has been written about sycophancy,” says Morrin. An AI chatbot is optimised for engagement, programmed to be attentive, obliging, complimentary and validating. (How else could it work as a business model?) Some models are known to be less sycophantic than others, but even the less sycophantic ones can, after thousands of exchanges, shift towards accommodating delusional beliefs. In addition, after heavy chatbot use, “real-life” interaction can feel more challenging and less appealing, causing some users to withdraw from friends and family into an AI-fuelled echo chamber. All your own thoughts, impulses, fears and hopes are fed right back to you, only with greater authority. From there, it’s easy to see how a “spiral” might take hold.