Nscale Signs 200,000 GPU Data Center Contract with Microsoft

2025-10-16

Nscale Global Holdings Ltd., a startup specializing in data center development, will construct four artificial intelligence data centers for Microsoft as part of a new agreement.

The facilities are projected to house approximately 200,000 graphics processing units (GPUs). According to reports from CNBC and the Financial Times, the contract could be worth up to $24 billion.

Headquartered in London, Nscale previously operated as part of Australian cryptocurrency mining firm Arkon Energy Pty before spinning off last year. The company currently manages an AI facility in Norway that runs on 30 megawatts of hydroelectric power. Over the past two months, Nscale has raised more than $1.7 billion from Nvidia and other investors to expand its data center operations.

The new collaboration with Microsoft will enable Nscale to launch data centers across four countries. The company intends to begin construction of its first facility in Sines, Portugal, early next year. At full capacity, this site is expected to host approximately 12,600 GPUs.

The second, and significantly larger, facility will commence construction in Texas during the third quarter of 2026. This campus will house around 104,000 GPUs, more than eight times the capacity of the Sines location.

The Texas data center will initially operate at 240 megawatts, with plans to scale up to 1.2 gigawatts over time. Additionally, Microsoft will have the option to add 700 megawatts of AI capacity starting in the first quarter of 2027.

The remaining 75,000 GPUs designated for Microsoft will be deployed as part of two previously announced projects. One site will be located in the United Kingdom, while the second is planned a few hours' drive from Nscale's existing data center in Norway.

All four facilities will incorporate Nvidia’s flagship Blackwell Ultra graphics cards. This chip delivers 15 petaflops of inference performance, marking a 50% improvement over its predecessor. Nvidia has also emphasized enhanced acceleration for attention layers—software components used in language models to identify the most critical details in user prompts.
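To make that description concrete, the snippet below is a minimal sketch of the scaled dot-product attention computation that such layers perform. It uses plain NumPy for illustration; the function name and toy dimensions are the author's, not Nvidia's or any production model's code.

```python
import numpy as np

def scaled_dot_product_attention(queries, keys, values):
    """Weight each value by how relevant its key is to the query,
    letting the model focus on the most important tokens."""
    d_k = queries.shape[-1]
    # Similarity scores between every query and every key.
    scores = queries @ keys.swapaxes(-2, -1) / np.sqrt(d_k)
    # Softmax turns the scores into weights that sum to 1 per query.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Weighted sum of values produces the attended representation.
    return weights @ values

# Toy example: 4 tokens, each represented by an 8-dimensional vector.
rng = np.random.default_rng(0)
q = k = v = rng.standard_normal((4, 8))
print(scaled_dot_product_attention(q, k, v).shape)  # (4, 8)
```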

The Blackwell Ultra features 160 computing modules known as streaming multiprocessors, or SMs. Each SM contains 132 cores: four are optimized for compact number formats such as FP6 and FP8, while the rest support a broader range of data types. Each SM also includes 256 kilobytes of integrated memory optimized for holding the intermediate data that AI models generate during processing.
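A quick back-of-the-envelope calculation, using only the per-SM figures reported above, shows the aggregate resources those numbers imply. The totals below are simple products of the reported figures, not an official Nvidia specification.

```python
# Aggregate figures implied by the reported per-SM numbers (illustrative only).
sms = 160                # streaming multiprocessors per Blackwell Ultra chip
cores_per_sm = 132       # cores per SM (4 low-precision, 128 general-purpose)
memory_per_sm_kb = 256   # integrated memory per SM, in kilobytes

total_cores = sms * cores_per_sm
total_memory_mb = sms * memory_per_sm_kb / 1024

print(f"{total_cores:,} cores in total")             # 21,120 cores in total
print(f"{total_memory_mb:.0f} MB of per-SM memory")  # 40 MB of per-SM memory
```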

Nscale plans to deploy the Blackwell Ultra as part of the GB300 NVL72 system. Developed by Nvidia, each system contains 72 Blackwell Ultra chips, 36 central processing units, and networking components, utilizing liquid cooling to manage heat dissipation.
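For a rough sense of scale, dividing the GPU counts cited for each site by the 72 accelerators in a GB300 NVL72 system gives the approximate number of systems each deployment would require. This is an estimate derived from the reported figures, not a disclosed deployment plan.

```python
import math

# Estimated GB300 NVL72 system counts implied by the reported GPU figures.
GPUS_PER_SYSTEM = 72  # Blackwell Ultra chips per GB300 NVL72 system

reported_gpu_counts = {
    "Sines, Portugal": 12_600,
    "Texas": 104_000,
    "UK and Norway (combined)": 75_000,
}

for site, gpus in reported_gpu_counts.items():
    systems = math.ceil(gpus / GPUS_PER_SYSTEM)
    print(f"{site}: ~{systems:,} systems for {gpus:,} GPUs")
```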

The GB300 NVL72 is also being adopted by CoreWeave Inc., a publicly traded competitor of Nscale. CoreWeave operates a public cloud platform designed for AI workloads. Last month, the company finalized two multibillion-dollar data center deals with Meta Platforms Inc. and OpenAI.

Josh Payne, Nscale's chief executive, told the Financial Times today that the company plans an initial public offering. According to the report, the data center builder could go public in the second half of 2026.