Anthropic Users Face New Choice—Opt Out or Share Your Data for AI Training

2025-08-29

Anthropic is making significant changes to how it handles user data, requiring all Claude users to decide by September 28 whether they want their conversations used for AI model training. While the company directed inquiries about its reasoning to its blog post explaining the policy change, we have our own interpretations.

The core change: previously, Anthropic did not use consumer chat data for model training. Now the company wants to train its AI systems on user conversations and coding sessions, and it will retain that data for five years for users who don't opt out.

This is a major shift. Previously, users of Anthropic's consumer products were told their prompts and outputs would be automatically deleted from the company's backend within 30 days unless legal or policy requirements mandated longer retention, or unless their input was flagged as violating the company's policies, in which case data could be retained for up to two years.

By "consumer," we mean the new policy applies to users of Claude Free, Pro, and Max, including those using Claude Code. Business customers using Claude Gov, Claude for Work, Claude for Education, or API access will be unaffected, which mirrors how OpenAI shields its enterprise customers from data training policies.

So why is this happening? In its post announcing the update, Anthropic frames the changes as user-driven, stating that those who don't opt out will "help us improve model safety, make our systems more accurate in detecting harmful content, and reduce the chances of incorrectly flagging harmless conversations." Users will also "contribute to improving future Claude models' skills in coding, analysis, and reasoning, ultimately benefiting all users."

In short: help us help you. But the full picture is probably less altruistic. Like other large language model companies, Anthropic needs data more than it needs public goodwill. Training AI models requires vast amounts of high-quality conversational data, and access to millions of Claude interactions likely provides exactly the real-world content Anthropic needs to compete with rivals like OpenAI and Google.

Beyond the competitive pressures of AI development, the changes also reflect a broader industry shift in data policy, as companies like Anthropic and OpenAI face increasing scrutiny over their data retention practices. OpenAI, for instance, is currently contesting a court order that requires it to retain all consumer ChatGPT conversations indefinitely, including deleted chats, because of a lawsuit filed by The New York Times and other publishers.

In June, OpenAI COO Brad Lightcap called the requirement sweeping and unnecessary, saying it "fundamentally conflicts with our privacy commitments to users." The court order affects ChatGPT Free, Plus, Pro, and Team users, though enterprise customers and those with zero-data-retention agreements remain protected.

What makes all of this concerning is how much confusion these evolving usage policies create for users, many of whom remain unaware of them. To be fair, everything is changing quickly, so privacy policies are bound to evolve alongside the technology. But many of these changes are substantial and are mentioned only briefly amid other company news. (You wouldn't guess the significance of this week's Anthropic policy change based on how prominently the company placed the update on its news page.) Yet many users aren't even aware that the guidelines they agreed to have changed, and the design practically ensures this.
Many ChatGPT users, for example, keep clicking "delete" toggles that don't actually delete anything. Anthropic's rollout of its new policy follows a similarly familiar pattern. How so? New users will choose their preference during sign-up, but existing users will see a pop-up prominently labeled "Consumer Terms and Policy Update" with a large black "Accept" button and a much smaller training opt-in toggle underneath, switched on by default. As The Verge noted earlier today, this interface design raises concerns that users may quickly click "Accept" without realizing they have agreed to share their data.

Meanwhile, the stakes for user awareness couldn't be higher. Privacy experts have long warned that the complexity surrounding AI makes meaningful user consent nearly impossible. Under the Biden administration, the Federal Trade Commission even stepped in, warning that AI companies risk enforcement action if they alter their terms of service or bury disclosures in hyperlinks, legalese, or fine print.