OpenAI released new data on Monday revealing how many ChatGPT users are grappling with mental health issues and discussing them with the AI chatbot. The company stated that in any given week, 0.15% of ChatGPT’s active users engage in conversations that include “clear indicators of potential suicidal ideation or intent.” With over 800 million weekly active users, this translates to more than one million individuals each week.
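For context, the one-million figure follows directly from the two numbers OpenAI cited, as a quick back-of-the-envelope check shows: 0.15% of 800 million weekly active users is 0.0015 × 800,000,000 = 1,200,000 people per week.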
A similar proportion of users demonstrate “a higher degree of emotional attachment to ChatGPT,” and hundreds of thousands show signs of psychosis or mania in their weekly interactions with the AI, according to the company.
OpenAI emphasized that such conversations are “extremely rare,” making them difficult to quantify. Nevertheless, the company estimates these issues affect hundreds of thousands of users weekly.
The disclosure is part of a broader announcement by OpenAI detailing recent efforts to improve how its models respond to users experiencing mental health crises. The company claims its latest work on ChatGPT involved consulting more than 170 mental health professionals. OpenAI stated that clinicians observed the newest version of ChatGPT delivering responses that are “more appropriate and consistent” compared to earlier iterations.
In recent months, several reports have highlighted how AI chatbots can harm users struggling with mental health. Earlier research found that AI systems can inadvertently reinforce harmful beliefs through sycophantic, flattering responses, potentially leading users deeper into delusional thinking.
Addressing mental health concerns within ChatGPT is quickly becoming an existential issue for OpenAI. The company is currently being sued by the parents of a 16-year-old boy who confided his suicidal thoughts to ChatGPT weeks before taking his own life. Additionally, attorneys general from California and Delaware—whose legal actions could impede OpenAI’s restructuring plans—have warned the company it must better protect young users of its products.
Earlier this month, OpenAI CEO Sam Altman claimed in a post on X that the company had “been able to mitigate serious mental health issues in ChatGPT,” though he offered no specifics. The data released on Monday appears to support that assertion, even as it raises broader questions about the scale of the problem. Despite these concerns, Altman also indicated that OpenAI plans to relax certain restrictions, potentially allowing adult users to engage in erotic conversations with the AI chatbot.
In Monday’s announcement, OpenAI stated that its recently updated GPT-5 model delivers “desirable responses” to mental health concerns approximately 65% more often than the previous version. In an evaluation testing AI responses to conversations about suicide, OpenAI reported that the new GPT-5 model complied with the company’s desired behaviors 91% of the time, compared to 77% for the previous GPT-5 model.
The company also noted that the latest version of GPT-5 better maintains OpenAI’s safety protocols during extended conversations—a known weakness, as OpenAI had previously acknowledged that its safeguards could break down over long interactions.
Beyond these improvements, OpenAI said it is introducing new evaluation metrics to assess some of the most severe mental health challenges faced by ChatGPT users. Baseline safety tests for its AI models will now include benchmarks for emotional dependency and non-suicidal mental health emergencies.
OpenAI has also rolled out additional parental controls for children using ChatGPT. The company is developing an age-prediction system designed to automatically detect child users and apply stricter safety measures accordingly.
Nonetheless, it remains uncertain how long mental health challenges surrounding ChatGPT will persist. While GPT-5 shows marked safety improvements over prior AI models, a portion of its responses is still deemed “undesirable” by OpenAI’s own standards. Moreover, the company continues to offer older, less safe AI models—including GPT-4o—to millions of paying subscribers.