Oracle Details Upcoming AI Clusters Powered by Nvidia and AMD Chips
Oracle has announced plans to build new artificial intelligence clusters powered by chips from rival companies NVIDIA and AMD.
The first cluster, named OCI Zettascale10, will be hosted on the company's OCI public cloud. It will allow clients to configure AI environments with up to 800,000 NVIDIA graphics processing units. In addition, Oracle is developing a 50,000-GPU cluster based on the AMD Instinct MI450, the upcoming flagship of the chipmaker's AI accelerator line.
Other major players in the AI market are also adopting GPUs from multiple vendors to reduce overreliance on a single chip supplier. Oracle's main cloud competitors already offer a mix of NVIDIA and AMD GPUs. OpenAI, for its part, has engaged the database giant to deploy $300 billion worth of AI infrastructure and plans to field custom AI chips alongside the off-the-shelf silicon it uses today.
The architecture behind OCI Zettascale10 already powers a data center that Oracle is building in Abilene, Texas, for OpenAI. The design supports multi-gigawatt clusters capable of housing up to 800,000 NVIDIA GPUs. Oracle estimates that OCI Zettascale10 will deliver a peak AI performance of 16 zettaflops, or 16 sextillion calculations per second.
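For scale, a quick back-of-envelope calculation shows what those headline numbers imply per accelerator. This is illustrative arithmetic only; the per-GPU power draw below is an assumed round figure, not anything Oracle has published.

```python
# Illustrative arithmetic only: what Oracle's headline figures imply per GPU.
# The per-GPU power draw is an assumed round number, not a published spec.
peak_flops = 16e21              # 16 zettaflops = 16 x 10^21 operations per second
gpu_count = 800_000
assumed_watts_per_gpu = 1_000   # rough figure for a modern AI accelerator

per_gpu_flops = peak_flops / gpu_count
cluster_power_gw = gpu_count * assumed_watts_per_gpu / 1e9

print(f"Implied average per GPU: {per_gpu_flops / 1e15:.0f} petaflops")  # ~20 petaflops
print(f"Rough compute power draw: {cluster_power_gw:.1f} GW")            # ~0.8 GW before cooling and networking
```

The roughly 20 petaflops per GPU is simply the headline figure divided by the GPU count; Oracle does not specify the numeric precision or sparsity assumptions behind its peak estimate.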
The company intends to use NVIDIA’s Spectrum-X Ethernet networking suite to interconnect GPUs within the OCI Zettascale10 cluster. The product line centers on two components: the BlueField-3 SuperNIC, a chip that connects GPU servers to the data center network and offloads certain computational tasks from the host processor, and the Spectrum SN5000 series of Ethernet switches.
Oracle’s implementation of these networking components will leverage a technology called Acceleron RoCE, the company’s take on RDMA over Converged Ethernet. Typically, moving data between GPUs requires routing it through the central processor of the server hosting them. Acceleron RoCE bypasses this step, improving transfer performance.
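Oracle has not published code for Acceleron RoCE, so as a loose analogy for the underlying idea, the hypothetical PyTorch sketch below contrasts a GPU-to-GPU copy staged through host memory with a direct device-to-device copy inside a single server. Network-level RDMA, which RoCE provides, extends the same host-bypass principle across machines.

```python
import torch

# Hypothetical illustration only: two paths for moving data between GPUs in one
# server. RoCE/RDMA applies the same host-bypass idea across the network fabric.
if torch.cuda.device_count() >= 2:
    src = torch.randn(4096, 4096, device="cuda:0")

    # Staged path: copy to host (CPU) memory first, then on to the second GPU.
    staged = src.cpu().to("cuda:1")

    # Direct path: a device-to-device copy that avoids host memory when peer
    # access between the GPUs is available.
    direct = src.to("cuda:1")

    print(torch.equal(staged, direct))  # True: same payload, different transfer paths
```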
OCI Zettascale10 is currently accepting orders and is scheduled to launch in the second half of 2026.
"Customers can build, train, and deploy their largest AI models into production with fewer performance-per-watt units and high reliability," said Mahesh Thiagarajan, Executive Vice President of Oracle Cloud Infrastructure. "Additionally, customers will have the freedom to operate across Oracle's distributed cloud with robust data and AI sovereignty controls."
Alongside OCI Zettascale10, Oracle plans to launch an AI cluster equipped with 50,000 AMD MI450 graphics cards. Those GPUs will operate within racks built around a new design called Helios, which the chipmaker detailed today.
Each Helios rack will house 72 MI450 chips, each featuring up to 432GB of HBM4 memory. HBM4 is a high-speed RAM variant not yet in mass production. AMD anticipates that this technology will enable Helios to provide double the memory capacity and bandwidth compared to systems equipped with NVIDIA’s upcoming Vera Rubin chip.
The Helios racks will also incorporate AMD's upcoming Venice server CPUs and Vulcano, a future addition to its Pensando data processing unit (DPU) family. The company claims that each Helios rack will deliver up to 1.4 exaflops of performance when processing FP8 data.
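Taken together, AMD's stated figures imply the following per-rack and per-GPU numbers. This is illustrative arithmetic derived from the numbers above, not a vendor specification.

```python
# Illustrative arithmetic from AMD's stated Helios figures, not vendor specs.
gpus_per_rack = 72
hbm4_per_gpu_gb = 432       # up to 432 GB of HBM4 per MI450
rack_fp8_exaflops = 1.4     # claimed FP8 throughput per rack

rack_memory_tb = gpus_per_rack * hbm4_per_gpu_gb / 1000
per_gpu_fp8_petaflops = rack_fp8_exaflops * 1000 / gpus_per_rack

print(f"HBM4 per rack: ~{rack_memory_tb:.1f} TB")               # ~31.1 TB
print(f"FP8 per GPU:   ~{per_gpu_fp8_petaflops:.1f} petaflops") # ~19.4 petaflops
```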
The racks will use liquid cooling to dissipate the heat generated by their components. According to AMD, the system is based on a double-wide design intended to make faults easier for technicians to repair. The company will allow hardware partners to extend the core Helios feature set to suit their needs.
Oracle plans to install the first Helios racks equipped with MI450 GPUs at its OCI data centers in Q3 2026. Additional systems will begin deployment in 2027.