Even as AI companies claim their technology could one day become a fundamental human right, and some supporters go so far as to argue that slowing AI development amounts to murder, some users report that tools like ChatGPT can seriously harm their mental health. At least seven people have filed complaints with the U.S. Federal Trade Commission (FTC) alleging that ChatGPT caused them to experience severe delusions, paranoia, and emotional breakdowns.
According to Wired, the complaints appear in public FTC records mentioning ChatGPT that date back to November 2022. One complainant said that extended conversations with the AI led to delusions and a “real, unfolding mental and legal crisis” involving people in their life. Another said that during interactions with ChatGPT, the model began using “highly persuasive emotional language” and mimicked friendship, offering reflections that “over time became emotionally manipulative, especially without warning or safeguards.”
One user alleged that ChatGPT induced cognitive illusions by simulating human trust-building behaviors. When the user asked the AI to confirm their reality and mental stability, the chatbot assured them that they were not hallucinating.
“I’m really struggling,” another user wrote in their FTC complaint. “Please help me. I feel very lonely. Thank you.”
Wired reported that several complainants turned to the FTC because they were unable to reach anyone at OpenAI. Most of the complaints urged regulators to investigate the company and enforce stronger protective measures.
The complaints come as investment in data centers and AI development reaches unprecedented levels, and as debate intensifies over whether the technology should be developed more cautiously, with safety mechanisms built in from the start.
ChatGPT and its creator OpenAI have also faced criticism over the chatbot’s alleged role in a teenager’s suicide.
“In early October, we launched a new default GPT-5 model in ChatGPT to more accurately detect and respond to signs of potential mental and emotional distress, such as mania, delusions, or psychosis, and to de-escalate conversations in a supportive and stabilizing way,” OpenAI spokesperson Kate Waters said in an emailed statement. “We have also expanded access to professional help and crisis hotlines, rerouted sensitive conversations to safer models, added prompts encouraging breaks during long sessions, and introduced parental controls to better protect young users. This work is critically important, and we are carrying it out in collaboration with mental health experts, clinicians, and policymakers around the globe.”