As AI becomes more popular, concerns grow over its effect on mental health

Too much of anything is bad for you, including faux-magical statistical models

There are numerous recent reports of people becoming too engaged with AI, sometimes to the detriment of their mental health.

Those concerns hit the mainstream last week when an account owned by Geoff Lewis, managing partner of venture capital firm Bedrock and an early investor in OpenAI, posted a disturbing video on X. The footage, ostensibly of Lewis himself, shows him describing a shadowy non-governmental system that, he says, was originally developed to target him but then expanded to target 7,000 others.

"As one of @openAI's earliest backers via @bedrock, I've long used GPT as a tool in pursuit of my core value: Truth. Over years, I mapped the Non-Governmental System. Over months, GPT independently recognized and sealed the pattern," he said in one cryptic post. "It now lives at the root of the model."

The post prompted concerns online that AI had contributed to Lewis's beliefs. The staff at The Register are not mental health professionals and can't say whether anyone's posts indicate anything more than a belief in conspiracy theories, but others have weighed in.

Some onlookers are convinced there's a budding problem. "I have cataloged over 30 cases of psychosis after usage of AI," Etienne Brisson told the Reg. Brisson became involved after a loved one experienced a psychotic episode while using AI; he now helps run The Spiral, a private support group for people dealing with AI psychosis. He has also set up The Human Line Project, which advocates for protecting emotional well-being and documents stories of AI psychosis.

Big problems from small conversations

These obsessive relationships sometimes begin with mundane queries. In one case documented by Futurism, a man began talking to ChatGPT by asking it for help with a permaculture and construction project. That reportedly morphed quickly into a wide-ranging philosophical discussion, leading him to develop a messiah complex: he claimed to have "broken" math and physics and set out to save the world. He lost his job, was caught attempting suicide, and was committed to psychiatric care, the report says.

Another man reportedly began using AI for coding, but the conversation soon turned to philosophical questions and therapy. He used it to get to "the truth," his wife recalled in a Rolling Stone interview; she said he was also using it to compose texts to her and to analyze their relationship. They separated, after which he developed conspiracy theories about soap on food and claimed to have discovered repressed memories of childhood abuse, according to the report.

Rolling Stone also talked to a teacher who posted on Reddit about her partner developing AI psychosis. He reportedly claimed that ChatGPT helped him create "what he believes is the world's first truly recursive AI that gives him the answers to the universe". The man, who was convinced he was rapidly evolving into "a superior being," threatened to leave her if she didn't begin using AI too. They had been together for seven years and owned a house.

In some cases the consequences of AI obsession can be even worse.

Sewell Seltzer III was just 14 when he died by suicide. For months, he had been using Character.AI, a service that lets users talk with AI bots designed as various characters. The boy apparently became obsessed with a bot that purported to be Game of Thrones character Daenerys Targaryen, with whom he reportedly developed a romantic relationship. The lawsuit filed by his mother describes the "anthropomorphic, hypersexualized, and frighteningly realistic experiences" that he and others encountered when talking to such AI bots.

Correlation or causation?

As these cases continue to develop, they raise the same kinds of questions that we could ask about conspiracy theorists, who also often seem to turn to the dark side quickly and unexpectedly. Do they become ill purely because of their interactions with an AI, or were those predilections already there, just waiting for some external trigger?

"Causation is not proven for these cases since it is so novel but almost all stories have started with using AI intensively," Brisson said.

"We have been talking with lawyers, nurses, journalists, accountants, etc," he added. "All of them had no previous mental history."

Ragy Girgis, director of The New York State Psychiatric Institute's Center of Prevention and Evaluation (COPE) and professor of clinical psychiatry at Columbia University, believes that for many people the conditions for this kind of psychosis are typically already in place.

"Individuals with these types of character structure typically have identify diffusion (difficulty understanding how one fits into society and interacts with others, a poor sense of self, and low self-esteem), splitting-based defenses (projection, all-or-nothing thinking, unstable relationships and opinions, and emotional dysregulation), and poor reality testing in times of stress (hence the psychosis)", he says.

What kinds of triggering effects might AI have for those vulnerable to it? A pair of studies by MIT and OpenAI has already set out to track some of the mental effects of using the technology. Released in March, the research found that high-intensity use could increase feelings of loneliness.

People with stronger emotional attachment tendencies and higher trust in the AI chatbot tended to experience greater loneliness and emotional dependence, respectively, the research said.

This research was released a month after OpenAI announced that it would expand the memory features in ChatGPT. The system now automatically remembers details about users, including their life circumstances and preferences. It can then use these in subsequent conversations to personalize its responses. The company has emphasized that users remain in control and can delete anything they don't want the AI to remember about them.
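
OpenAI hasn't published how the memory feature is implemented, but the general pattern is straightforward to sketch: remembered details live in a store outside the model, get injected into each new conversation's context, and can be deleted on request. The Python toy below illustrates that pattern only; the class and method names are hypothetical, not OpenAI's API.

```python
# Illustrative sketch of a chatbot "memory" layer: remembered user details
# are prepended to each new conversation's context, and the user can delete
# any entry. A toy under stated assumptions, not OpenAI's implementation;
# all names here are hypothetical.

class MemoryStore:
    def __init__(self):
        self._facts: dict[str, str] = {}  # key -> remembered detail

    def remember(self, key: str, detail: str) -> None:
        self._facts[key] = detail

    def forget(self, key: str) -> None:
        # User-facing control: remove a remembered detail entirely.
        self._facts.pop(key, None)

    def as_context(self) -> str:
        # Serialize memories into a preamble the model sees on each request.
        if not self._facts:
            return ""
        lines = [f"- {k}: {v}" for k, v in self._facts.items()]
        return "Known about this user:\n" + "\n".join(lines)


def build_prompt(memory: MemoryStore, user_message: str) -> str:
    # The model itself holds no state between conversations; persistence
    # lives in the store, and each request is assembled from it.
    preamble = memory.as_context()
    if preamble:
        return f"{preamble}\n\nUser: {user_message}"
    return f"User: {user_message}"


if __name__ == "__main__":
    memory = MemoryStore()
    memory.remember("occupation", "teacher")
    memory.remember("project", "permaculture garden")
    print(build_prompt(memory, "What should I plant this spring?"))
    memory.forget("occupation")  # the delete path users are promised
    print(build_prompt(memory, "Any follow-up tips?"))
```

The design point is that the underlying model is stateless: "memory" is just text the service chooses to prepend to each conversation, which is also why per-item deletion is possible at all.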

A place in the medical books?

Should we be recognizing AI psychosis officially in psychiatric circles? The biggest barrier here is its rarity, said Girgis. "I am not aware of any progress being made toward officially recognizing AI psychosis as a formal psychiatric condition," he said. "It is beyond rare at this point. I am aware of only a few reported cases."

However, Brisson believes there may be many more cases emerging, especially given the sheer number of people using the tools for all kinds of purposes. A quick glance at Reddit shows plenty of conversations in which people are using what is nothing more than a sophisticated statistical model for personal therapy.

"This needs to be treated as a potential global mental health crisis," he concludes. "Lawmakers and regulators need to take this seriously and take action."

We didn't get an immediate response from Lewis or Bedrock but will update this story if we do. In the meantime, if you or someone you know is experiencing serious mental distress after using AI (or indeed for any other reason), please seek professional help from your doctor, or dial a local mental health helpline such as 988 in the US (the Suicide & Crisis Lifeline) or 111 in the UK (the NHS helpline). ®