Anthropic Secures Up to 1 Million Google TPUs, Keeps Amazon as Primary Training Partner

2025-10-24

Anthropic has just inked what may be the largest TPU deal in history: up to one million Google Tensor Processing Units by 2026, valued in the tens of billions of dollars and bringing more than a gigawatt of compute capacity online. CFO Krishna Rao noted the company is nearing a $7 billion annualized revenue run rate, with the number of enterprise accounts generating over $100,000 annually growing sevenfold over the past year.

Anthropic is on track to hit $9 billion in revenue by year-end, with internal targets ranging from $20 billion to $26 billion by 2026. Achieving this explosive growth demands more computing capacity than any single vendor can supply. Industry estimates suggest building a 1-gigawatt data center costs roughly $50 billion, with about $35 billion allocated to chips alone.
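
To put those estimates in perspective, here is a quick back-of-envelope calculation using only the figures quoted above. This is illustrative arithmetic, not vendor pricing; the cost split is the industry estimate, not a disclosed number.

```python
# Back-of-envelope check on the article's data-center cost estimates.
# All figures come from the industry estimates quoted above; nothing
# here is an official Anthropic or Google number.

GIGAWATT = 1e9            # watts in a gigawatt
total_cost_usd = 50e9     # ~$50B for a 1-gigawatt data center
chip_cost_usd = 35e9      # ~$35B of that allocated to chips

cost_per_watt = total_cost_usd / GIGAWATT
chip_share = chip_cost_usd / total_cost_usd
non_chip_cost = total_cost_usd - chip_cost_usd

print(f"All-in cost per watt of capacity: ${cost_per_watt:.0f}/W")
print(f"Chips as a share of total build cost: {chip_share:.0%}")
print(f"Non-chip spend (power, cooling, buildings): ${non_chip_cost / 1e9:.0f}B")
```

At roughly $50 per watt all-in, chips account for about 70% of the build, which is why securing silicon supply across multiple vendors matters so much.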

What’s particularly notable is Anthropic’s strategic approach. While most AI labs lock into a single chip supplier, paying premium margins and accepting capacity constraints, Anthropic distributes its workloads across three platforms: Google TPUs for select training runs, Amazon Trainium chips for its massive Project Rainier cluster, and NVIDIA GPUs when needed. According to insiders familiar with the company’s infrastructure strategy, this multi-vendor model buys more compute per dollar than a single-supplier architecture would.
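
To make the compute-per-dollar logic concrete, here is a minimal sketch of a greedy allocator that fills the best price-performance pool first, subject to capacity caps. The vendor names mirror the article, but every price, throughput, and capacity figure below is a hypothetical placeholder invented for illustration; none reflects actual TPU, Trainium, or GPU economics.

```python
# Toy illustration of multi-vendor allocation by compute per dollar.
# Vendor names mirror the article, but all numbers are made-up
# placeholders; real pricing and throughput are confidential.

from dataclasses import dataclass

@dataclass
class Pool:
    name: str
    usd_per_chip_hour: float   # hypothetical rental price per chip-hour
    pflops_per_chip: float     # hypothetical sustained PFLOP/s per chip
    max_chip_hours: float      # hypothetical capacity cap

    @property
    def pflops_per_dollar(self) -> float:
        # Sustained PFLOP/s per ($/hour), i.e., PFLOP-hours per dollar.
        return self.pflops_per_chip / self.usd_per_chip_hour

def allocate(budget_usd: float, pools: list[Pool]) -> dict[str, float]:
    """Greedy: fill the best compute-per-dollar pool first, then spill over."""
    plan = {}
    for pool in sorted(pools, key=lambda p: p.pflops_per_dollar, reverse=True):
        spend = min(budget_usd, pool.max_chip_hours * pool.usd_per_chip_hour)
        plan[pool.name] = spend
        budget_usd -= spend
    return plan

pools = [
    Pool("Google TPU", usd_per_chip_hour=1.2, pflops_per_chip=0.9, max_chip_hours=5e6),
    Pool("AWS Trainium", usd_per_chip_hour=0.8, pflops_per_chip=0.5, max_chip_hours=8e6),
    Pool("NVIDIA GPU", usd_per_chip_hour=2.5, pflops_per_chip=1.0, max_chip_hours=2e6),
]
print(allocate(20e6, pools))  # dollars spent per vendor pool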

Google has invested approximately $3 billion in Anthropic, while Amazon has committed $8 billion and is constructing Project Rainier, which will offer five times the compute capacity of Anthropic’s current largest training cluster. The real question isn’t whether Anthropic shows favoritism—it clearly doesn’t—but whether this diversified approach can truly deliver on its promised cost efficiencies at the scale required for cutting-edge AI models.

For Google, opening TPU access to external clients lands a major AI anchor tenant. Amazon, meanwhile, secures a guaranteed buyer for hundreds of thousands of Trainium2 chips, validating silicon that has yet to be battle-tested in large-scale, complex LLM training. By pitting three suppliers against one another, Anthropic gains negotiating leverage and potentially more favorable pricing.

Project Rainier aims to span multiple buildings with a single compute cluster, linking them to operate as one unified system, an ambitious design that remains unproven at this scale. TPUs face networking constraints of their own: pods scale smoothly over their dedicated inter-chip interconnect, but training that spans many pods must fall back on the slower data-center network. Coordinating across multiple vendors adds further operational complexity that could erode the theoretical cost savings.

Anthropic is betting it can pull this off. Meanwhile, OpenAI reported annualized revenue exceeding $13 billion as of August and is projected to surpass $20 billion by year-end. Anthropic is closing the gap, but both companies confront the same fundamental challenge: improving their models rapidly enough to justify infrastructure bills now measured in gigawatts.