OpenAI Group PBC has entered into a seven-year partnership with Amazon Web Services (AWS), under which it will lease $38 billion worth of cloud infrastructure.
Less than a week earlier, the developer of ChatGPT revised its collaboration terms with Microsoft. For nearly six years, Microsoft had served as OpenAI's exclusive cloud provider. Under the updated agreement, Microsoft no longer holds a right of first refusal over OpenAI's cloud contracts.
The new deal with AWS will grant the ChatGPT developer access to hundreds of thousands of NVIDIA graphics processing units (GPUs). According to company statements, OpenAI will utilize NVIDIA’s latest GB200 and GB300 chips—both of which integrate two GPUs with a central processing unit (CPU).
The GB200 also comes in an ultra-large configuration, the GB200 NVL4, which combines four Blackwell GPUs with two CPUs. Meanwhile, the GB300 is built on NVIDIA’s newer Blackwell Ultra GPU architecture, with a single Blackwell Ultra chip delivering approximately 15 petaflops of AI performance.
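For a sense of the scale involved, the per-chip figure above can be turned into a back-of-envelope aggregate. The GPU count below is an illustrative stand-in for "hundreds of thousands," not a number from the deal terms:

```python
# Back-of-envelope estimate of aggregate AI throughput.
# Assumptions (illustrative, not deal specifics):
#   - 300,000 GPUs as a stand-in for "hundreds of thousands"
#   - ~15 petaflops of AI performance per Blackwell Ultra chip
GPU_COUNT = 300_000
PFLOPS_PER_GPU = 15

total_pflops = GPU_COUNT * PFLOPS_PER_GPU
total_exaflops = total_pflops / 1_000  # 1 exaflop = 1,000 petaflops

print(f"Estimated aggregate throughput: {total_exaflops:,.0f} exaflops")
```

Under these round numbers, the fleet would deliver on the order of thousands of exaflops of AI compute.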
The chips AWS plans to provide OpenAI will be deployed within Amazon EC2 UltraServers. These systems are built on AWS's custom Nitro System, whose components include the Nitro Security Chip, a dedicated processor that provides a hardware root of trust and offloads security functions from the UltraServer's main processors.
OpenAI will begin utilizing AWS infrastructure immediately and aims to deploy all computing capacity outlined in the contract by the end of 2026. Starting in 2027, the company will have the option to further scale its AWS environment.
“Scaling frontier AI requires massive, reliable compute,” said Sam Altman, CEO of OpenAI. “Our collaboration with AWS strengthens a broad compute ecosystem that will drive the next era of innovation and bring advanced AI to everyone.”
OpenAI's largest cloud infrastructure partner remains Oracle, which recently secured a $300 billion infrastructure deal with the ChatGPT developer. Oracle is constructing a U.S.-based data center network with a total capacity of 4.5 gigawatts, with each gigawatt enough to power roughly 750,000 homes.
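As a quick sanity check, the article's own figures imply how many homes that 4.5-gigawatt buildout could power in total:

```python
# Sanity check on the Oracle buildout's power figures:
# 4.5 GW total capacity, with one gigawatt powering roughly 750,000 homes.
TOTAL_GW = 4.5
HOMES_PER_GW = 750_000

homes_powered = TOTAL_GW * HOMES_PER_GW
print(f"~{homes_powered:,.0f} homes")
```

That works out to several million homes' worth of electricity for the full network.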
Central to Oracle’s buildout is a campus in Abilene, Texas, which began coming online earlier this year. At full capacity, the site is expected to house 450,000 NVIDIA GPUs.
OpenAI continues to use infrastructure from its former exclusive cloud provider, Microsoft, having agreed to purchase $250 billion worth of Azure computing capacity as part of its recent restructuring. The company has also secured infrastructure agreements with CoreWeave and Google LLC.
Anthropic PBC, OpenAI’s best-funded startup rival, similarly relies on AWS infrastructure to support its AI models. Last week, AWS opened an $11 billion data center campus dedicated exclusively to running Anthropic’s training and inference workloads. The facility currently houses around 500,000 AWS-customized Trainium2 chips, a number expected to double by year-end.