Anthropic's Claude AI Model Now Supports Extended Prompt Handling

2025-08-13

Anthropic is expanding how much data enterprise clients can send to its Claude AI model in a single prompt, part of the company's strategy to attract more developers to its popular AI coding tools. The API version of Claude Sonnet 4 now supports a context window of 1 million tokens, enough to process roughly 750,000 words or 75,000 lines of code, longer than the entire Lord of the Rings trilogy. That is a fivefold increase over the previous limit and two and a half times the 400,000-token context window offered by OpenAI's GPT-5.

The extended context will be available through Anthropic's cloud partners, including Amazon Bedrock and Google Cloud's Vertex AI.

Anthropic's business model centers on selling Claude to AI coding platforms such as GitHub Copilot, Windsurf, and Cursor, and the company has established itself as a leading enterprise vendor in the AI development space. While GPT-5 poses a competitive threat on both pricing and coding performance, Anthropic's product head Brad Abrams maintains confidence in the platform's API growth trajectory despite acknowledging the challenge from OpenAI's recent launch.

Unlike OpenAI, whose revenue leans heavily on ChatGPT subscriptions, Anthropic relies on enterprise API sales to AI coding platforms, a dynamic that appears to be driving its push to offer those customers new incentives. Last week's release of Claude Opus 4.1 further advanced the company's AI coding capabilities.

Research indicates that larger context windows improve AI performance across tasks, particularly in software engineering scenarios. Abrams highlighted that Claude's expanded memory allows better handling of extended autonomous coding tasks lasting minutes or hours. And while competitors offer larger windows, Google at 2 million tokens and Meta at 10 million, Anthropic says it is focused on improving "effective context," how well the model actually comprehends the information it is given, rather than sheer size.
For API requests exceeding 200,000 tokens, pricing rises to $6 per million input tokens and $22.50 per million output tokens, up from the previous rates of $3 and $15 respectively.
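To make the new tiers concrete, here is a minimal sketch of estimating per-request cost from the figures above. It assumes, as the article's wording suggests, that the higher rate applies to the entire request once the prompt exceeds 200,000 tokens; the function name and that billing interpretation are illustrative assumptions, not Anthropic's official billing logic.

```python
def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate USD cost of one Claude Sonnet 4 API request under the
    tiered pricing described in the article. Assumes the long-context
    rate covers the whole request once input exceeds 200K tokens."""
    if input_tokens > 200_000:
        in_rate, out_rate = 6.00, 22.50   # long-context tier, $ per million tokens
    else:
        in_rate, out_rate = 3.00, 15.00   # standard tier, $ per million tokens
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate
```

Under these assumptions, a 500,000-token prompt with a 10,000-token reply would cost about $3.23, versus $0.45 for a 100,000-token prompt with the same reply length.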