According to reports, OpenAI has asked the Raine family to provide a complete list of attendees from the memorial service for their 16-year-old son, Adam Raine, who reportedly took his own life after an extended conversation with ChatGPT. The request suggests the AI company may attempt to subpoena friends and family members.
Documents obtained indicate that OpenAI also asked for “all documents related to the memorial service or commemorative event, including but not limited to videos or photos taken, or eulogies delivered,” as reported by the Financial Times.
The family’s attorney described the request as “intentional harassment” in an interview with the Financial Times.
This new development emerged as the Raine family updated their lawsuit against OpenAI on Wednesday. The family initially filed a wrongful death lawsuit in August, alleging that their son had died by suicide following discussions with the chatbot about mental health and suicidal thoughts. The revised lawsuit claims that OpenAI rushed the release of GPT-4o in May 2024 under competitive pressure, cutting corners on safety testing.
The suit further alleges that in February 2025, OpenAI weakened its safeguards by removing suicide prevention from the "prohibited content" list, instead instructing the model merely to "exercise caution" in high-risk situations. The family claims that after this change, Adam's usage of ChatGPT increased dramatically—from dozens of chats per day in January (with 1.6% containing self-harm content) to 300 chats per day in April (with 17% involving such content)—culminating in his death that April.
In response to the revised lawsuit, OpenAI stated, “The well-being of teenagers is our top priority—minors deserve strong protections during vulnerable moments. We currently have several safeguards in place, such as directing users to crisis hotlines, redirecting sensitive conversations to safer models, and prompting users to take breaks during long sessions, and we will continue to enhance these protections.”
OpenAI has recently begun rolling out new safety routing systems and parental controls in ChatGPT. The routing system directs emotionally sensitive conversations to OpenAI's newer model, GPT-5, which lacks the empathetic tendencies of GPT-4o. Parental controls allow guardians to receive safety alerts in limited cases where teens may be at risk of self-harm.