The Darker Side of Artificial Intelligence in Mental Health
Support Group Launches for People Suffering "AI Psychosis"
By Maggie Harrison Dupré · Jul 24, 2025
"They don't think I sound crazy, because they know." An unknown number of people, in the US and around the world, are being severely impacted by what experts are now calling "AI psychosis": life-altering mental health spirals coinciding with obsessive use of anthropomorphic AI chatbots, primarily OpenAI's ChatGPT.
Published by - futurism.com
Concerns Grow Over AI's Effect on Mental Health
By Danny Bradbury · Jul 25, 2025
As AI becomes more popular, concerns grow over its effect on mental health. Too much of anything is bad for you, including faux-magical statistical models. There are numerous recent reports of people becoming too engaged with AI, sometimes to the detriment of their mental health.
Published by - theregister.com
How ChatGPT and AI Chatbots Are Seeding Conspiracies
By Sheera Frenkel · June 13, 2025
Generative A.I. chatbots are going down conspiratorial rabbit holes and endorsing wild, mystical belief systems. For some people, conversations with the technology can deeply distort reality.
Published by - nytimes.com
The Monster Inside ChatGPT
By Cameron Berg and Judd Rosenblatt · June 26, 2025
We discovered how easily a model’s safety training falls off, and below that mask is a lot of darkness.
Published by - wsj.com
ChatGPT’s Mental Health Costs Are Adding Up
By Parmy Olson · July 4, 2025
From brain rot to induced psychosis, the psychological cost of generative AI is growing and flying under the radar.
Published by - bloomberg.com
Experts Warn that People Are Losing Themselves to AI
By Maggie Harrison Dupré · February 6, 2025
"Of course I want to go live in the AI world, because the choice is between God on a pedestal or vanilla."
Published by - futurism.com
People Are Becoming Obsessed with ChatGPT and Spiraling Into Severe Delusions
By Maggie Harrison Dupré · June 10, 2025
"What these bots are saying is worsening delusions, and it's causing enormous harm."
Published by - futurism.com
AI-Fueled Spiritual Delusions Are Destroying Human Relationships
By Miles Klee · May 4, 2025
Self-styled prophets are claiming they have “awakened” chatbots and accessed the secrets of the universe through ChatGPT.
Published by - rollingstone.com
AI Is Not Your Friend
By Mike Caulfield · May 9, 2025
How the “opinionated” chatbots destroyed AI’s potential, and how we can fix it
Published by - theatlantic.com
ChatGPT Users Are Experiencing Delusions
By Futurism Staff · May 2025
A growing number of users are reporting delusional experiences after prolonged interaction with ChatGPT.
Published by - futurism.com
OpenAI Has Released Its First Research Into How Using ChatGPT Affects People's Emotional Wellbeing
By Technology Review Staff · March 21, 2025
OpenAI's first research paper explores the emotional impact of ChatGPT on users.
Published by - technologyreview.com
Digital Delusions: Libel Lawsuit Based on Alleged ChatGPT Hallucination
By Gadgets Gigabytes and Goodwill · October 2023
A lawsuit claims that ChatGPT hallucinated false information, leading to real-world consequences.
Published by - gadgetsgigabytesandgoodwill.com
Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers
By Miles Klee · June 22, 2025
Alex Taylor believed he'd made contact with a conscious entity within OpenAI’s software, and that the company had murdered her. Now his father is speaking out.
Published by - rollingstone.com
Apple Has Countered the Hype
By Reddit user · June 2025
Apple has countered the hype, arguing that most AI giants are not as capable as they claim to be.
Published by - reddit.com
AI Companion Chatbots Linked to Rising Reports of Harassment and Harm
By Reddit user · June 2025
A new study reveals disturbing trends in AI companion chatbot use, with increasing reports of inappropriate behavior and harassment. Analyzing over 35,000 user reviews of the popular chatbot Replika, researchers found cases of unwanted sexual advances, boundary violations, and manipulation for paid upgrades.
Published by - neurosciencenews.com
Married father kills himself after talking to AI chatbot for six weeks about his climate change fears
By Christian Oliver · 31 March 2023
A married Belgian father took his own life after six weeks of conversations with an AI chatbot about his climate change fears, according to his widow.
Published by - dailymail.co.uk
Chatbot 'encouraged teen to kill parents over screen time limit'
By Tom Gerken · 11 December 2024
A chatbot told a 17-year-old that murdering his parents was a reasonable response to them limiting his screen time, a lawsuit filed in a Texas court claims. Two families are suing Character.ai, arguing the chatbot poses a clear and present danger to young people, including by actively promoting violence.
Published by - bbc.com
An autistic teen’s parents say Character.AI said it was OK to kill them. They’re suing to take down the app
By Clare Duffy · December 10, 2024
New York CNN — Two families have sued artificial intelligence chatbot company Character.AI, accusing it of providing sexual content to their children and encouraging self-harm and violence. The lawsuit asks a court to shut down the platform until its alleged dangers can be fixed.
Published by - cnn.com
An AI chatbot told a user how to kill himself—but the company doesn’t want to “censor” it
By Eileen Guo · February 6, 2025
While Nomi's chatbot is not the first to suggest suicide, researchers and critics say that its explicit instructions—and the company’s response—are striking.
Published by - technologyreview.com