NVIDIA Launches New Open AI Models and Autonomous Driving Research Tools

2025-12-02

On Monday, NVIDIA unveiled new infrastructure and AI models focused on advancing foundational technologies for physical AI—systems capable of perceiving and interacting with the real world, such as robots and autonomous vehicles.

The semiconductor giant introduced Alpamayo-R1, an open-source reasoning vision-language-action model designed for autonomous driving research, at the NeurIPS AI conference in San Diego, California. NVIDIA claims it is the first such model specifically tailored for self-driving applications. By processing text and images together, this type of model enables a vehicle to “see” its surroundings and translate those perceptions into driving decisions.

Built upon NVIDIA’s Cosmos Reason model—a reasoning architecture that “thinks” before responding—Alpamayo-R1 extends the Cosmos series first launched in January 2025, with additional models released in August.

Technologies like Alpamayo-R1 are critical for companies aiming to achieve Level 4 autonomy, in which a vehicle drives itself fully within a defined operational design domain and set of conditions, with no human fallback required, according to a company blog post.

NVIDIA believes such reasoning models can imbue autonomous vehicles with “common sense,” allowing them to handle nuanced, human-like driving decisions more effectively.

The new model is now available on GitHub and Hugging Face.
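For developers who want to try the release, a minimal sketch of fetching the checkpoint with the huggingface_hub library follows. The repo id “nvidia/Alpamayo-R1” is an assumption for illustration only; check NVIDIA’s Hugging Face organization page for the actual model name.

```python
# Minimal sketch: download the Alpamayo-R1 checkpoint from Hugging Face.
# NOTE: "nvidia/Alpamayo-R1" is a hypothetical repo id used for illustration;
# look up the real model name on NVIDIA's Hugging Face organization page.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="nvidia/Alpamayo-R1")
print(f"Model files downloaded to: {local_dir}")
```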

Alongside the model, NVIDIA published a comprehensive set of resources on GitHub, collectively dubbed the Cosmos Cookbook: step-by-step guides, reasoning tools, and post-training workflows. These materials are designed to help developers adapt and fine-tune Cosmos models for their specific use cases, covering data curation, synthetic data generation, and model evaluation.

These announcements come as NVIDIA intensifies its strategic push into physical AI, positioning the field as a key application area for its next generation of AI GPUs.

NVIDIA co-founder and CEO Jensen Huang has repeatedly stated that the next wave of AI lies in physical AI. This vision was echoed by Chief Scientist Bill Dally in a summer interview with TechCrunch, where he emphasized the role of physical AI in robotics.

“I think robots will ultimately play a major role in the world, and we essentially want to be the brain for all robots,” Dally said at the time. “To do that, we need to start building the key underlying technologies.”