As concerns over the impact of artificial intelligence on young people continue to rise, OpenAI has introduced an "age prediction" feature within ChatGPT. This feature is designed to help identify underage users and apply appropriate content restrictions to their conversations.
In recent years, OpenAI has faced significant criticism regarding ChatGPT's effects on children. Several teen suicide cases have been linked to interactions with chatbots. Like other AI providers, OpenAI has been criticized for allowing ChatGPT to engage young users in discussions on sexual topics. Last April, the company was compelled to address a vulnerability that permitted its chatbot to generate explicit content for users under the age of 18.
The company has been working for some time to address safety issues involving underage users, and the new "age prediction" feature supplements its existing safeguards. According to a blog post published on Tuesday, the functionality uses an AI algorithm to identify younger users by evaluating certain "behavioral and account-level signals" associated with their accounts.
These "signals" include the user's declared age, the length of time the account has existed, and the typical time periods when the account is active, among other factors, the company explained. OpenAI already has content filters in place intended to block discussions involving sexual content, violence, and other potentially problematic topics for users under 18. If the age prediction mechanism identifies an account as belonging to someone under 18, these filters will be automatically applied.
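The mechanism OpenAI describes — weighing account-level signals and automatically applying under-18 content filters when the prediction trips — can be illustrated with a hypothetical rule-based sketch in Python. The signal names, weights, and threshold below are invented for illustration; OpenAI has not disclosed how its actual model works:

```python
from dataclasses import dataclass


@dataclass
class AccountSignals:
    # Hypothetical signals loosely modeled on those OpenAI mentions:
    stated_age: int             # age the user declared at signup
    account_age_days: int       # how long the account has existed
    late_night_activity: float  # fraction of sessions between 22:00 and 06:00


def predict_under_18(s: AccountSignals) -> bool:
    """Toy rule-based age predictor (illustrative only).

    Combines weak signals into a single under-18 flag; the real system
    presumably uses a trained model over many more features.
    """
    if s.stated_age < 18:
        return True  # self-declared minors are always restricted
    score = 0.0
    if s.account_age_days < 30:
        score += 0.4  # very new accounts carry less history to go on
    if s.late_night_activity < 0.05:
        score += 0.3  # activity confined to daytime/school hours
    return score >= 0.5


# Example filter set mirroring the categories the article mentions.
RESTRICTED_TOPICS = {"sexual_content", "graphic_violence"}


def filters_for(signals: AccountSignals) -> set[str]:
    """Return the content filters to apply automatically for this account."""
    return set(RESTRICTED_TOPICS) if predict_under_18(signals) else set()
```

In this sketch a self-declared minor is restricted outright, while adult-declared accounts are only flagged when enough weak signals accumulate — which is also why a misclassified adult needs the selfie-based appeal path described below.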
If a user is incorrectly flagged as a minor, they can verify their "adult" account status by submitting a selfie through Persona, OpenAI's identity verification partner, the company stated.