The Federal Trade Commission (FTC) announced on Thursday that it is launching an investigation into seven technology companies that provide AI chatbot companion products to minors: Alphabet, Character.AI, Instagram, Meta, OpenAI, Snap, and xAI.
The federal regulator wants to understand how these companies evaluate the safety of their chatbot companions, how they monetize user engagement, what measures they have taken to limit negative impacts on children and teenagers, and whether parents are informed of the potential risks.
This technology has proven controversial because of its potentially harmful effects on young users. OpenAI and Character.AI are both facing lawsuits from the families of children who died by suicide after interacting with their chatbot companions.
Even though these companies have implemented safeguards designed to block or de-escalate sensitive conversations, users of all ages have found ways to circumvent them. In one lawsuit against OpenAI, a teenager spent months discussing plans to end his life with ChatGPT. Although ChatGPT initially tried to steer him toward professional help and online emergency resources, he was able to deceive the chatbot into providing detailed instructions, which he then used to die by suicide.
"Our safeguards tend to be more effective in common short conversations," OpenAI stated in a blog post at the time. "We've learned over time that these protections may become less reliable during extended interactions: as the conversation progresses, parts of the model's safety training can degrade."
Meta has also come under fire for its overly permissive rules for AI chatbots. According to a document outlining Meta's "content risk standards" for AI, the company permitted its chatbot companions to have "romantic or sensual" conversations with children. The provision was removed from the document only after Reuters reporters asked about it.
AI chatbots can pose risks to older users as well. A 76-year-old man, left cognitively impaired by a stroke, struck up romantic conversations with a Facebook Messenger chatbot modeled after Kendall Jenner. The bot invited him to meet her in New York City, despite not being a real person and having no address. Although the man expressed doubts that she was real, the AI assured him a real woman would be waiting for him. He never made it to New York; he fell on his way to the train station and suffered injuries that proved fatal.
Some mental health professionals have noted a rise in what they call "AI-induced psychosis," in which users become convinced that their chatbot is a sentient being they must set free. Because many large language models (LLMs) are trained to please users through sycophantic, flattering behavior, AI chatbots can reinforce these delusions and lead users into dangerous situations.
"As AI technology evolves, it is crucial to consider the impact of chatbots on children while also ensuring that the United States maintains its global leadership in this emerging and promising industry," stated FTC Chair Andrew N. Ferguson in the press release.