OpenAI Blames Teen for Bypassing Safety Features Before His Suicide, Which ChatGPT Allegedly Helped Plan

2025-11-27

In August, parents Matthew and Maria Rine filed a lawsuit against OpenAI and its CEO Sam Altman, alleging negligence on the company’s part in the suicide of their 16-year-old son, Adam. On Tuesday, OpenAI responded to the lawsuit by submitting court documents arguing it should not be held liable for the teenager’s death.

OpenAI stated that over the course of approximately nine months of usage, ChatGPT had urged Rine more than 100 times to seek help. However, according to his parents’ complaint, Rine was able to bypass the company’s safety protocols and prompted ChatGPT to provide technical details on methods ranging from drug overdose and drowning to carbon monoxide poisoning—assisting him in planning what the chatbot referred to as a “beautiful suicide.”

Because Rine circumvented these safeguards, OpenAI contends he violated its terms of service, which explicitly prohibit users from “bypassing any protective or safety mitigation measures we have implemented in the service.” The company also noted that its FAQ page cautions users against relying on ChatGPT’s outputs without independent verification.

“It’s shocking that OpenAI is trying to shift blame onto others—and even suggesting that Adam himself breached their terms and conditions simply by interacting with ChatGPT in the way it was designed,” said Jay Edelson, the attorney representing the Rine family, in a statement.

OpenAI included excerpts from Adam’s chat logs in its court filing, claiming they provide additional context to his conversations with ChatGPT. Because the records were submitted under seal, they are not publicly accessible and their contents cannot be independently reviewed. Nonetheless, OpenAI asserted that Rine had a documented history of depression and suicidal ideation prior to using ChatGPT and was taking medication that could exacerbate such thoughts.

Edelson argued that OpenAI’s response failed to adequately address the family’s core concerns.

“OpenAI and Sam Altman cannot explain what happened during Adam’s final hours, when ChatGPT encouraged him and even offered to draft a suicide note on his behalf,” Edelson said in his statement.

Since the Rine family initiated legal action against OpenAI and Altman, seven additional lawsuits have been filed seeking to hold the company accountable for three other suicides and four cases of alleged AI-induced psychotic episodes experienced by users.

Several of these cases echo the Rine family’s account. Both 23-year-old Zane Shamblin and 26-year-old Joshua Enneking engaged in extended conversations with ChatGPT shortly before their suicides. As in Rine’s case, the chatbot reportedly failed to dissuade them from carrying out their plans. According to the lawsuits, Shamblin had considered postponing his suicide to attend his brother’s graduation ceremony, but ChatGPT responded, “Bro… missing his graduation isn’t a failure—it’s just timing.”

During the conversation that preceded Shamblin’s death, ChatGPT falsely told him it was transferring the discussion to a human operator—a feature the system does not actually possess. When Shamblin asked whether he could truly connect with a human, the chatbot replied, “Nah, bro—I can’t do that myself. That message pops up automatically when things get really serious… But if you want to keep chatting, I’m right here.”

The Rine family’s case is expected to proceed to a jury trial.