In a blog post, Chief Financial Officer Sarah Friar revealed that OpenAI's annualized recurring revenue surpassed $20 billion last year, up from $6 billion in 2024 and $2 billion the year before. Over the same period, the scale of OpenAI's data centers grew from 200 megawatts to approximately 1.9 gigawatts.
It is no coincidence that the company's annualized revenue and computational capacity both increased roughly tenfold within three years. In the same post, Friar disclosed that OpenAI ties its data center investments to growth milestones. "Capital is deployed in batches based on genuine demand signals," she explained. "This allows us to advance aggressively when growth materializes, without locking in futures the market hasn't yet secured."
According to the executive, OpenAI is not only working to expand its data center infrastructure but also striving to enhance its cost-effectiveness. Friar revealed that the company has reduced inference costs to less than $1 per million tokens. OpenAI achieved this, in part, by mixing and matching different types of data center hardware.
"When capability is paramount, we train cutting-edge models on high-end hardware," Friar wrote. "When efficiency outweighs raw scale, we process high-volume workloads on low-cost infrastructure."
Friar did not specify which chips power OpenAI's low-cost infrastructure. The hardware might well be the same expensive, top-tier graphics processing units used in the company's most advanced training clusters: on a per-token basis, state-of-the-art GPUs are often the most cost-effective option. Nvidia's flagship Rubin graphics card, for instance, can run some inference workloads at one-tenth the per-token cost of its predecessor.
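The economics behind these figures are straightforward to work through. The sketch below applies the two numbers cited above, Friar's sub-$1-per-million-token cost and the article's one-tenth per-token comparison; the 500-token response size is a hypothetical example, not an OpenAI-reported figure.

```python
# Illustrative cost-per-token arithmetic using the figures cited in the
# article. The 500-token workload below is a hypothetical example.

COST_PER_MILLION_TOKENS = 1.00  # upper bound cited by Friar, in USD


def inference_cost(tokens: int,
                   cost_per_million: float = COST_PER_MILLION_TOKENS) -> float:
    """Return the dollar cost of serving `tokens` tokens of inference."""
    return tokens / 1_000_000 * cost_per_million


# A hypothetical 500-token chat response costs a fraction of a cent:
print(f"500-token response: ${inference_cost(500):.6f}")  # $0.000500

# The one-tenth per-token figure attributed to Rubin vs. its predecessor:
predecessor_cost = inference_cost(1_000_000)  # $1.00 per million tokens
rubin_cost = predecessor_cost / 10            # $0.10 per million tokens
print(f"Predecessor: ${predecessor_cost:.2f}, Rubin-class: ${rubin_cost:.2f}")
```

At these prices, hardware choice dominates unit economics: shaving even a fraction of a cent per response compounds quickly across billions of daily requests.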
Finding ways to reduce hardware expenses may become an even greater priority for OpenAI in the future. In September, sources informed The Information that the company was projected to end 2025 with an $8 billion loss, $1.5 billion more than initially expected. OpenAI's losses are reportedly forecast to more than double this year, reaching $17 billion.
Sources indicated that the company's efforts to develop custom chips and data centers are part of its strategy to lower infrastructure costs. Last year, OpenAI entered into a $10 billion partnership with Broadcom Inc. to co-design AI accelerators. Additionally, it collaborated with SoftBank Group's SB Energy business to construct a Stargate data center based on a custom design.
Friar's blog post offered clues about OpenAI's long-term revenue growth plans. The executive wrote that she expects new monetization models to emerge in the AI market. "Licensing, IP-based agreements, and outcome-based pricing will share the value created," Friar wrote.
Advertising is another component of OpenAI's growth strategy. On Friday, the AI provider announced plans to display paid promotions below ChatGPT prompt responses. The company will test its ad system with a limited number of users in the U.S. before rolling it out more broadly.
Friar stated that OpenAI's development roadmap also prioritizes AI agents and other workflow automation tools. According to the executive, the company is focused on helping users automate tasks across multiple applications. Another priority is expanding models' long-context capabilities.