Character.AI is progressively restricting chat access for users under 18 and introducing a new system to verify whether users are adults. The company announced on Wednesday that minors will immediately be limited to two hours of “open-ended chat” with its AI characters, a limit that will be phased out entirely by November 25.
In the same announcement, the company revealed it is rolling out a new internal “age assurance model” that categorizes users’ ages based on the types of AI characters they choose to interact with, combined with other on-site or third-party data. Both new and existing users will be processed through this model. Those flagged as under 18 will automatically be redirected to the teen-safe version of the chat platform the company launched last year, at least until the November cutoff ends open-ended chat for minors entirely. Adults mistakenly classified as minors can verify their age through the third-party verification service Persona, which handles sensitive data such as government-issued IDs.
After the ban takes effect, teens will still be able to revisit past conversations and use non-chat features, such as creating characters and producing videos, stories, or live streams featuring those characters. However, Character.AI CEO Karandeep Anand noted that users spend “a much smaller proportion” of their time on these features compared to the flagship chatbot interactions—which is why the company considers restricting chat access a “very, very bold move.”
In an interview, Anand stated that fewer than 10% of the platform’s users self-report as being under 18. He added that prior to implementing the new age-detection model, the company had no reliable way to determine the “true figure.” He explained that the number of underage users has already declined as Character.AI introduced earlier restrictions for minors: “When we started changing the experience for under-18 users earlier this year, our under-18 user base did shrink, as those users migrated to other platforms that aren’t as safe,” Anand said.
Character.AI is currently facing lawsuits from parents alleging wrongful death, negligence, and deceptive trade practices, claiming their children became involved in inappropriate or harmful relationships with AI chatbots. The lawsuits target the company, its founders Noam Shazeer and Daniel De Freitas, and their former employer, Google. In response, Character.AI has repeatedly updated its service, including redirecting users to the 988 Suicide & Crisis Lifeline when certain phrases related to self-harm or suicide are detected in chats.
Lawmakers are also moving to regulate the growing AI companion industry. A bill passed in California in October requires developers to clearly disclose that chatbots are AI, not human. Additionally, a federal bill introduced on Tuesday would ban AI companions from being offered to minors altogether.
Beyond its teen mode, the company previously introduced voluntary features like “Parental Insights,” which sends guardians summaries of user activity—but not full chat logs. However, these tools rely on self-reported age, which is easily falsified. Other AI companies have recently imposed similar restrictions on younger users. For example, Meta revised its policies after a Reuters report revealed internal guidelines allowing AI chatbots to engage with minors in sensual ways.
The company appears to anticipate disappointment from its younger user base. In its official statement, Character.AI expressed deep regret over removing “a core feature of our product” that most teens had been using “within our content guidelines.”
Of course, it remains theoretically possible for underage users to bypass the new age-assurance measures, Anand told The Verge. “In general, is there always someone who can circumvent any possible age check, including identity verification? The answer is always yes,” he said. The goal, he emphasized, is improved age-verification accuracy, not perfection. To support this, Character.AI has implemented safeguards such as preventing users from changing their self-reported age after registration and from opening new accounts under a different age.
While general-purpose chatbots like ChatGPT and Gemini are attracting significant youth engagement, “companion chatbot” services—designed specifically to foster relationships with virtual characters—are typically restricted to users aged 18 and older. Character.AI launched without such an adult-only age gate, and its strong focus on fandoms made it especially popular among teenagers.
Character.AI has also established an independent nonprofit, the AI Safety Lab, which the company is funding and staffing at the outset. According to Anand, the organization will focus on safety issues unique to the AI entertainment sector, which differ from those in other AI domains. Anand stressed that the goal is for the lab to become “an industry-wide partnership, not a Character.AI entity,” with details about external founding partners and members to be announced in the coming weeks or months.