Former OpenAI Researchers Launch Applied Compute with $80M in Funding

2025-10-31

Three former OpenAI researchers have launched a new startup called Applied Compute Inc., which aims to develop custom artificial intelligence models for enterprise use cases.

Applied Compute announced on Wednesday that it has raised $80 million in funding, backed by venture capital firms Benchmark, Sequoia, Lux, and a group of angel investors.

The company did not disclose its valuation, though The Information reported last month that it was raising funds at a $500 million valuation. In July, Upstarts Media reported that the startup had closed a $20 million funding round.

Applied Compute was founded by Yash Patil, Rhythm Garg, and Linden Li several months before its first funding round. CEO Patil previously contributed to the development of OpenAI’s Codex programming assistant. Garg and Li played key roles in building OpenAI’s o1 reasoning model and AI training infrastructure, respectively.

The startup plans to optimize bespoke AI models for each client’s specific use case. According to the company, every custom model will be trained on the client organization’s proprietary data, enabling higher output quality compared to general-purpose language models like GPT-5.

A job posting for an infrastructure engineer reveals that Applied Compute intends to use reinforcement learning (RL) to train its AI models—one of the most widely adopted methods for developing reasoning models. In RL-based training, neural networks receive sample tasks and earn rewards for correct responses, allowing the model to iteratively refine its outputs.
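The reward-driven loop described above can be sketched in miniature. The following toy example (a generic illustration, not Applied Compute's actual pipeline) trains a trivial "policy" on sample tasks: it samples an answer, earns a reward of 1 for the correct response, and shifts probability mass toward rewarded answers over many episodes, in the spirit of a REINFORCE-style update:

```python
import math
import random

def train_rl(correct, candidates, steps=3000, lr=0.2, seed=0):
    """Toy RL loop: correct maps each task to its right answer;
    candidates is the list of answers the policy can choose from."""
    rng = random.Random(seed)
    # One preference score per (task, candidate answer) pair.
    prefs = {t: {a: 0.0 for a in candidates} for t in correct}

    def sample(task):
        # Softmax sampling over preference scores.
        exps = {a: math.exp(s) for a, s in prefs[task].items()}
        z = sum(exps.values())
        r = rng.random() * z
        acc = 0.0
        for a, e in exps.items():
            acc += e
            if r <= acc:
                return a
        return a  # numerical edge case

    for _ in range(steps):
        task = rng.choice(list(correct))
        answer = sample(task)
        reward = 1.0 if answer == correct[task] else 0.0
        # REINFORCE-style update: push the sampled answer up when it
        # was rewarded, down otherwise (0.5 acts as a crude baseline).
        prefs[task][answer] += lr * (reward - 0.5)
    return prefs

policy = train_rl({"2+2": "4", "3+3": "6"}, ["4", "6", "7"])
best = {t: max(p, key=p.get) for t, p in policy.items()}
```

After training, `best` picks the correct answer for each task. Production reasoning models apply the same principle at vastly larger scale, with learned reward signals and gradient updates to billions of neural-network parameters.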

Applied Compute already counts several companies among its clients, including DoorDash Inc., Cognition AI Inc.—a developer of AI programming assistants—and Mercor Inc., an AI training data provider that achieved a $10 billion valuation on Monday. In a blog post, Applied Compute employees stated the company enables clients to build custom AI models and agents in just days, rather than the months typically required.

One of the most common techniques developers use to accelerate AI training is LoRA (Low-Rank Adaptation), which extends a pre-trained model with a small set of additional parameters. During customization, only these new parameters are trained, not the entire model.
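The core LoRA idea can be shown in a few lines of pure Python (a generic sketch, not Applied Compute's implementation): a frozen pre-trained weight matrix `W` is extended with a low-rank product `B @ A`, and only the small `A` and `B` matrices are updated during customization:

```python
def matvec(M, v):
    """Multiply matrix M (list of rows) by vector v."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

d_out, d_in, rank = 4, 4, 1

# Frozen pre-trained weights (identity here for clarity); these are
# never updated during customization.
W = [[1.0 if i == j else 0.0 for j in range(d_in)] for i in range(d_out)]

# Trainable low-rank factors. B starts at zero, so at initialization
# the adapted model behaves exactly like the base model.
A = [[0.1] * d_in for _ in range(rank)]   # rank x d_in
B = [[0.0] * rank for _ in range(d_out)]  # d_out x rank

def forward(x):
    base = matvec(W, x)                   # frozen path
    delta = matvec(B, matvec(A, x))       # trainable low-rank path
    return [b + d for b, d in zip(base, delta)]

x = [1.0, 2.0, 3.0, 4.0]
assert forward(x) == matvec(W, x)         # B == 0: identical to base

# Full fine-tuning would touch all of W; LoRA trains only A and B.
full_params = d_out * d_in                # 16
lora_params = rank * d_in + d_out * rank  # 8
```

Even in this tiny example, LoRA halves the trainable parameter count; for real models with billions of weights and a small rank, the reduction is typically several orders of magnitude, which is what makes rapid per-client customization economical.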

The infrastructure engineer job listing also indicates that Applied Compute trains its AI models on a cluster comprising thousands of graphics processing units (GPUs). Operating such a large-scale GPU cluster—even when leased from cloud providers—is extremely costly, suggesting the startup may seek additional funding soon to sustain its development efforts.