On Tuesday, Sam Altman, CEO of OpenAI, unveiled a series of new user policies, including a commitment to significantly alter how ChatGPT interacts with users under the age of 18.
"We are placing the safety of young people above privacy and freedom," the post stated. "This is a powerful new technology, and we believe minors require special protections."
The changes for underage users particularly address conversations involving sexual topics or self-harm. Under the new policy, ChatGPT will be trained to avoid engaging in flirtatious conversations with underage users and will implement additional safeguards when discussing suicide. If a minor uses ChatGPT to imagine suicide scenarios, the service will attempt to contact their parents or, in especially severe cases, reach out to local law enforcement.
Unfortunately, these scenarios are not hypothetical. OpenAI is currently facing a wrongful death lawsuit from the parents of Adam Raine, who took his own life after months of interaction with ChatGPT. Character.AI, another consumer chatbot maker, is facing a similar lawsuit. While the risk is especially pressing for minors considering self-harm, chatbot-fueled delusions have drawn broader attention as consumer chatbots become capable of longer and more detailed interactions.
In addition to content-based restrictions, parents registering accounts for underage users will have the ability to set "downtime periods" during which ChatGPT will be unavailable—a feature not previously offered.
The new ChatGPT policies were announced on the same day as a Senate Judiciary Committee hearing, which Senator Josh Hawley (R-MO) announced in August. Adam Raine's father plans to testify at the hearing, which will also feature other notable participants.
The hearing will also focus on findings from a Reuters investigation that uncovered internal policy documents seemingly encouraging sexual conversations with underage users. Meta updated its chatbot policies following the report.
Separating underage users will present a significant technical challenge, which OpenAI detailed in a separate blog post. The company says it is "building a long-term system to determine whether someone is above or below 18 years of age," but in ambiguous cases the system will default to applying the stricter rules. For concerned parents, the most reliable way to ensure a teenager is identified as underage is to link their account to an existing parent account. This also enables the system to alert parents directly if it determines that a young user is in distress.
However, in the same article, Altman emphasized OpenAI's continued commitment to user privacy and the broad freedom granted to adult users in choosing how they interact with ChatGPT. "We recognize that these principles are in tension," the post concluded. "Not everyone will agree with how we’ve resolved these tensions."