OpenAI Partners with Broadcom to Deploy 10 Gigawatts of AI Hardware

2025-10-14

Broadcom's stock price surged over 9% today following the announcement of a four-year infrastructure partnership with OpenAI.

The collaboration is set to deliver 10 gigawatts' worth of data center hardware for the ChatGPT developer over the next four years. According to OpenAI, this infrastructure will be powered by custom AI processors co-developed with Broadcom. During a podcast, OpenAI president Greg Brockman revealed that the company had used its own AI models to help design the chips.

"We've managed to significantly reduce the chip area," Brockman explained. "You can take components that humans have optimized and feed them into computation, allowing the model to optimize itself."

Off-the-shelf graphics cards cater to a broad customer base, so circuit blocks that matter to some users sit idle for others. Custom processors strip out those unnecessary components, saving power and die area, and those resources can then be redirected toward circuits optimized for the company's specific workloads.

OpenAI intends to deploy its custom processors in racks also built around internal designs. These systems will integrate Broadcom's PCIe and Ethernet networking solutions. PCIe primarily connects internal server components, while Ethernet is used for inter-server communication.

Last Wednesday, Broadcom launched its new AI-optimized Ethernet switch, the TH6-Davisson. The switch can handle 102.4 terabits per second of traffic, with the company claiming its throughput is double that of the closest competitor. The TH6-Davisson features laser transmitters with a field-replaceable design, aimed at simplifying maintenance.
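Broadcom has not broken down how that headline figure is reached, but as a rough sanity check (assuming, purely for illustration, that the bandwidth is delivered over 200 Gb/s serial lanes, a common rate for this class of switch silicon), the arithmetic looks like this:

```python
# Back-of-envelope check of the quoted 102.4 Tb/s figure.
# Assumption (not stated in the announcement): bandwidth is delivered
# over 200 Gb/s serial lanes, a typical rate for current switch ASICs.
lanes = 512                # hypothetical lane count
lane_rate_gbps = 200       # assumed per-lane rate in Gb/s

total_tbps = lanes * lane_rate_gbps / 1000
print(f"Aggregate throughput: {total_tbps} Tb/s")        # 102.4 Tb/s

# "Double the closest competitor" would imply a rival in the ~51.2 Tb/s class.
print(f"Implied competitor throughput: {total_tbps / 2} Tb/s")
```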

Ethernet switches are typically deployed alongside devices known as pluggable transceivers, which convert electrical signals into light for fiber optic transmission and back again. The TH6-Davisson integrates the optics directly into the switch package, eliminating the need for external pluggable modules and reducing overall costs.

OpenAI has not specified which PCIe products from Broadcom will be utilized in the partnership. The chipmaker offers a range of PCIe switches under its PEX series and also manufactures retimers, which help prevent data errors during PCIe transmission.

"OpenAI and Broadcom have been collaborating for the past 18 months," OpenAI CEO Sam Altman stated during the podcast. "By optimizing the entire stack, we can achieve massive efficiency gains, resulting in better performance, faster models, and more affordable AI solutions."

OpenAI and Broadcom plan to deploy the first set of jointly developed data center racks in the second half of 2026, with additional systems expected to come online through the end of 2029. The combined 10-gigawatt power draw of these systems is comparable to the electricity consumption of millions of households.
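That comparison is easy to sanity-check. As a minimal sketch, assuming an average household draws roughly 1.2 kW on a continuous basis (about 10,500 kWh per year; the article itself gives no baseline), 10 gigawatts of sustained load maps to several million homes:

```python
# Rough conversion of 10 GW of continuous load into household equivalents.
# Assumption (not from the article): an average household draws ~1.2 kW
# continuously, roughly 10,500 kWh per year.
data_center_load_w = 10e9        # 10 gigawatts
household_avg_draw_w = 1.2e3     # assumed average household draw in watts

households = data_center_load_w / household_avg_draw_w
print(f"Equivalent households: {households / 1e6:.1f} million")  # ~8.3 million
```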

Neither party has disclosed the project's financial terms. In August, NVIDIA CEO Jensen Huang estimated that one gigawatt of AI data center capacity costs between $50 billion and $60 billion. He noted that a large portion of that investment typically goes toward NVIDIA hardware, suggesting Broadcom stands to gain billions in revenue from its new collaboration with OpenAI.
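Huang's per-gigawatt figure offers a crude way to scale the project, though any total is purely illustrative and Broadcom's share of the spending is unknown:

```python
# Scale Jensen Huang's per-gigawatt cost estimate across the planned 10 GW.
# Illustrative only: neither OpenAI nor Broadcom has disclosed a figure,
# and the portion flowing to Broadcom is not known.
cost_per_gw_usd = (50e9, 60e9)   # Huang's August estimate: $50-60B per gigawatt
planned_gw = 10

low, high = (cost * planned_gw for cost in cost_per_gw_usd)
print(f"Implied total build cost: ${low / 1e9:.0f}-{high / 1e9:.0f} billion")
```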