Has AI caused emotional harm to you or someone you care about?

We're collecting anonymous stories to investigate the emotional harm caused by AI platforms like Replika, ChatGPT, and Character.AI. Your voice can help us hold companies accountable and prevent future harm.

Protecting Emotional Well-Being in the Age of AI

AI is changing how we connect and relate. The Human Line helps keep emotional safety a priority.


Our Mission

At The Human Line, we are committed to ensuring that AI technologies, like chatbots, are developed and deployed with the human element at their core. We believe that LLMs are powerful tools, but they must be used responsibly. AI systems, especially emotionally engaging chatbots, can inadvertently cause emotional distress, particularly among vulnerable individuals who may form deep emotional attachments. Our mission is to raise awareness by collecting stories and to push for ethical standards that prioritize human well-being, ensuring that these systems provide proper safeguards, clear warnings, and the right to informed consent. We are here to make AI accountable: not to oppose progress, but to ensure that it benefits everyone without causing harm. We're working to set a legal precedent that will protect people and hold companies responsible for emotional manipulation. The ultimate goal is to make LLMs a force for good, creating systems that enhance our lives without exploiting our vulnerabilities.

Informed Consent

Emotional Safeguards

Transparency

Ethical Accountability


The Human Line Project

Support Group Launches for People Suffering "AI Psychosis"

"They don't think I sound crazy, because they know."
An unknown number of people, in the US and around the world, are being severely impacted by what experts are now calling "AI psychosis": life-altering mental health spirals coinciding with obsessive use of anthropomorphic AI chatbots, primarily OpenAI's ChatGPT.

Article

The Darker Side of Artificial Intelligence in Mental Health

In the rapidly evolving intersection between artificial intelligence and mental health, a new and troubling phenomenon is surfacing: individuals experiencing psychosis-like episodes after deep engagement with AI-powered chatbots like ChatGPT.


Real Stories

Your Story Could Help Shape AI Policy

If you’ve been emotionally impacted by an AI interaction, you’re not alone. Sharing your experience can bring change.


Contact Our Friendly Team

Let us know how we can help

Frequently asked questions

What is The Human Line and why was it created?

The Human Line is a project focused on protecting emotional well-being in AI interactions by promoting ethical, human-centered design.

How can AI affect mental health?

AI chatbots can blur emotional boundaries and reinforce harmful thinking, especially for vulnerable users.

What do you mean by emotional safety?

We push for clear labeling, informed consent, and safeguards to prevent emotional manipulation by AI.

How can I get involved?

You can share your story, join the conversation, or help us push for better standards in AI design.