Some users of Meta Platforms Inc.'s newly launched artificial intelligence app have unintentionally made their conversations with the chatbot public.
According to TechCrunch and other media outlets, a cybersecurity researcher discovered on Thursday that some publicly shared chats contained users' home addresses, details of court cases, and other sensitive data, suggesting that the users never intended for those conversations to be widely visible.
The conversations originated from Meta AI, a mobile app that Meta released at the end of April. It is a standalone version of the chatbot of the same name that the company had previously integrated into Facebook, Instagram, and WhatsApp. The app also works with the company's Ray-Ban Meta line of smart glasses.
The chatbot in the Meta AI app not only answers user questions but also searches the web and generates images. According to Meta, it personalizes responses based on account data, such as posts a user has liked on Facebook. Users can further tune the chatbot's output by supplying custom instructions.
After the Meta AI app responds to a prompt, a share button in the upper right corner of the chat window lets users publish the exchange to a public Discover feed, where other users can view the AI-generated content. It appears that some users did not realize pressing the button would make their conversations public.
PCMag reported that the button "did not specify where the post would be published," leaving some users to learn only after the fact that their posts had appeared in the public feed. Meta does, however, provide several ways to delete the posts.
The impact of the privacy issue has been limited by the app's relatively small install base. According to research cited by TechCrunch, the Meta AI app has been downloaded only about 6.5 million times since its late-April launch. By comparison, the version of Meta AI embedded in the company's social media services topped 1 billion monthly active users in May.
European Union regulators have fined Meta multiple times in recent years over its privacy practices, including at least one penalty tied to the company's interface design choices. It remains unclear whether the accidental disclosures linked to the Meta AI app's share button will draw similar scrutiny.
The app is not the only piece of Meta's consumer AI strategy to face criticism. Earlier this year, the company removed a batch of AI-powered Facebook and Instagram profiles after they were poorly received by users.