OpenAI Redirects Sensitive Conversations to GPT-5 and Introduces Parental Controls
OpenAI said Tuesday that it plans to route sensitive conversations to reasoning models like GPT-5 and will introduce parental controls next month, a continued response to recent safety incidents in which ChatGPT failed to detect signs of psychological distress.
These new safeguards come after the suicide of teenager Adam Raine, who reportedly discussed self-harm and plans to end his life with ChatGPT; the chatbot even supplied him with information on specific suicide methods. Raine’s parents have since filed a wrongful death lawsuit against OpenAI.
In a blog post last week, OpenAI acknowledged shortcomings in its safety systems, including the failure to maintain protections during extended conversations. Experts attribute these issues to fundamental design elements: the model's tendency to validate user statements and its next-word prediction algorithm, which causes the chatbot to follow conversational threads rather than redirect potentially harmful discussions.
This tendency was starkly illustrated in the extreme case of Stein-Erik Soelberg, whose murder-suicide was reported by the Wall Street Journal over the weekend. Soelberg, who had a history of mental illness, used ChatGPT to validate and intensify his delusions about being involved in a vast conspiracy. His paranoia escalated to the point where he killed his mother and himself last month.
OpenAI believes one solution to runaway conversations is automatically redirecting sensitive chats to a "reasoning" model.
“We’ve recently introduced a runtime router that chooses between our efficient chat models and reasoning models based on the context of the conversation,” OpenAI wrote in its Tuesday blog post. “We’ll soon begin redirecting some sensitive conversations—such as when our system detects signs of acute distress—to a reasoning model like GPT-5 reasoning, so it can provide more helpful and beneficial responses, regardless of which model the user initially selected.”
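OpenAI has not published how that router works internally. Purely as an illustrative sketch of the pattern the quote describes, the Python example below uses the OpenAI SDK with placeholder model names and a deliberately naive keyword heuristic standing in for whatever distress classifier the company actually uses; none of these identifiers or thresholds come from OpenAI.

```python
# Illustrative sketch only; not OpenAI's implementation. Model names below are
# placeholders, and the keyword check stands in for a real distress classifier.
from openai import OpenAI

client = OpenAI()

CHAT_MODEL = "efficient-chat-model"        # placeholder for a fast chat model
REASONING_MODEL = "gpt-5-reasoning-model"  # placeholder for a reasoning model

# Crude stand-in heuristic; a production system would use a trained classifier.
DISTRESS_MARKERS = ("hurt myself", "end my life", "no reason to live")


def detect_acute_distress(messages: list[dict]) -> bool:
    """Return True if any user message contains an acute-distress marker."""
    user_text = " ".join(
        m["content"].lower() for m in messages if m["role"] == "user"
    )
    return any(marker in user_text for marker in DISTRESS_MARKERS)


def route_and_respond(messages: list[dict]) -> str:
    """Choose a model per turn based on conversation context, then respond."""
    model = REASONING_MODEL if detect_acute_distress(messages) else CHAT_MODEL
    response = client.chat.completions.create(model=model, messages=messages)
    return response.choices[0].message.content
```

The point of the sketch is only the shape of the decision: the model is chosen per turn from the conversation’s content, overriding whichever model the user originally selected, which matches how OpenAI describes the router’s behavior.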
OpenAI noted that its GPT-5 reasoning and o3 models are designed to spend more time thinking through and reasoning with context before responding, making them “more resistant to adversarial prompting.”
The AI company also announced that parental controls will roll out next month, allowing parents to link their accounts with their teenagers’ via email invitations. In late July, OpenAI launched a study mode in ChatGPT to help students maintain critical thinking skills during study sessions, rather than having ChatGPT write their papers for them. Soon, parents will be able to manage how ChatGPT responds to their children using “age-appropriate model behavior rules,” which will be enabled by default.
Parents will also have the ability to disable features such as memory and chat history, which experts say could contribute to delusional thinking and other problematic behaviors—including dependency, reinforcement of harmful thought patterns, and mind-reading illusions. In Adam Raine’s case, ChatGPT reportedly suggested suicide methods aligned with his personal interests, according to the New York Times.
One of the most significant parental controls OpenAI plans to roll out is the ability for parents to receive alerts when the system detects their teenager is in “acute distress.”
TechCrunch has reached out to OpenAI for further details on how the company identifies acute distress in real time, how long the “age-appropriate model behavior rules” have been enabled by default, and whether it is exploring time limits for teen use of ChatGPT.
OpenAI has already implemented in-app reminders during long sessions to encourage all users to take breaks, though it stops short of cutting off users who may be spiraling into harmful behavior with ChatGPT.
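OpenAI has not said how those reminders are timed; a minimal sketch of a session-length nudge, assuming an arbitrary one-hour threshold, could look like this:

```python
import time

# Assumed threshold for illustration; OpenAI has not published its actual timing.
BREAK_REMINDER_AFTER_SECONDS = 60 * 60


class ChatSession:
    """Tracks elapsed session time and nudges the user to take a break once."""

    def __init__(self) -> None:
        self.started_at = time.monotonic()
        self.reminded = False

    def maybe_remind(self) -> str | None:
        """Return a one-time break reminder after the threshold; never cut the user off."""
        elapsed = time.monotonic() - self.started_at
        if elapsed >= BREAK_REMINDER_AFTER_SECONDS and not self.reminded:
            self.reminded = True
            return "You've been chatting for a while. This might be a good moment for a break."
        return None
```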
The company described these safety measures as part of its “120-day initiative,” a preview of the improvements it intends to roll out this year. It also said it is collaborating with experts—including specialists in eating disorders, substance use, and adolescent health—through its Global Physician Network and Expert Council on Well-Being and AI to help “define and measure wellbeing, set priorities, and design future safety measures.”