Nvidia's latest innovation, the Jetson AGX Thor, powered by the Blackwell architecture, represents the next evolution of the company's robotic intelligence platforms. It is designed to advance physical AI capabilities in autonomous driving, industrial robotics, and humanoid machines. Three versions of Jetson AGX Thor are available now, with a specialized DRIVE OS-equipped automotive developer kit scheduled for release in September.
Delivering 2,070 teraflops of FP4 AI compute, Jetson AGX Thor redefines robotic intelligence.
As the successor to the 2022 Jetson Orin series, this next-generation robotics platform supports advanced AI applications ranging from surveillance video analytics to object manipulation in unstructured environments. The platform enables real-time decision-making for complex robotic systems across multiple industry sectors.
Compared with the Orin generation, Nvidia cites a 7.5x improvement in AI compute, a 3.1x gain in CPU performance, and double the memory capacity.
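For a rough sense of what those multipliers imply, the back-of-the-envelope sketch below divides Thor's published figures by the quoted gains to recover approximate Orin-generation numbers; the derived values are estimates for orientation only, not official specifications.

```python
# Back-of-the-envelope check of the generational multipliers quoted above.
# Thor figures come from this article; the derived Orin-class numbers are
# estimates, not official specifications.

thor_ai_tflops = 2070          # Thor AI compute (low-precision TFLOPS)
thor_memory_gb = 128           # Thor memory capacity

ai_speedup = 7.5               # quoted AI compute improvement over Orin
memory_multiplier = 2.0        # quoted memory capacity increase

est_orin_ai_tflops = thor_ai_tflops / ai_speedup         # ~276 TFLOPS
est_orin_memory_gb = thor_memory_gb / memory_multiplier  # 64 GB

print(f"Implied Orin-class AI compute: ~{est_orin_ai_tflops:.0f} TFLOPS")
print(f"Implied Orin-class memory:     ~{est_orin_memory_gb:.0f} GB")
```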
The Jetson AGX Thor product family comprises four configurations:
- Jetson AGX Thor Developer Kit available through Nvidia partners at $3,499
- Production-grade Thor T5000 at $2,999
- Cost-optimized Thor T4000 for Orin upgrade path at $1,999
- Automotive-focused DRIVE AGX Thor Developer Kit launching in September (price available through retailers)
The flagship Thor T5000 delivers 2,070 teraflops of AI processing power, placing it among the most powerful edge computing platforms available. It pairs the Blackwell GPU with a 14-core Arm Neoverse V3AE CPU, 128GB of LPDDR5X memory at 273GB/s of bandwidth, and configurable power consumption between 40W and 130W. The Neoverse V3AE CPU contributes deterministic performance and ISO 26262 functional safety for real-time applications, complementing roughly 2,000 teraflops of FP4 processing on the GPU.
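For generative AI workloads, memory bandwidth is often the practical limit on token throughput. The sketch below gives a rough, assumption-laden estimate of bandwidth-bound decode speed for models held in the 128GB of on-board memory; the weight footprints and efficiency factor are illustrative assumptions, not measurements on Jetson AGX Thor.

```python
# Rough estimate of memory-bandwidth-limited decode throughput.
# The efficiency factor and weight footprints are illustrative assumptions,
# not measurements on Jetson AGX Thor.

bandwidth_gb_s = 273        # quoted memory bandwidth
efficiency = 0.7            # assumed fraction of peak bandwidth achieved

def tokens_per_second(model_weights_gb: float) -> float:
    """Each generated token streams roughly the full weight set from memory
    once, so throughput is approximately usable bandwidth / model size."""
    return bandwidth_gb_s * efficiency / model_weights_gb

# Example weight footprints (GB), e.g. quantized 8B- to 70B-class models.
for weights_gb in (8, 35, 70):
    print(f"{weights_gb:>3} GB of weights -> ~{tokens_per_second(weights_gb):.0f} tokens/s")
```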
On the software side, the Arm-based platform integrates with DRIVE OS 7.0.3 for automotive applications, alongside Nvidia's Isaac (robotics), Metropolis (vision AI), and Holoscan (sensor processing) frameworks.
"The Arm Neoverse V3AE is opening doors to scalable, secure, and energy-efficient computing solutions that enable sustainable AI operations for next-generation robotic fleets," stated Dipti Vachani, Senior Vice President and General Manager of Arm's Automotive Line of Business.
Advanced AI support for generative models and multi-agent architectures
The platform features multi-instance GPU (MIG) virtualization, which partitions the GPU so that two separate workloads can run concurrently. The expanded 128GB memory capacity also allows multiple generative AI models to execute simultaneously, which is particularly beneficial for hybrid systems that combine expert models with agent workflows.
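As a sketch of what running two isolated workloads can look like in practice, the example below pins each workload to its own GPU partition, assuming the partitions are exposed as separate CUDA devices (as MIG instances are on Nvidia data-center GPUs); the device indices and the workload itself are placeholders, and nothing here is confirmed Jetson AGX Thor behavior.

```python
# Sketch: two independent AI workloads pinned to separate GPU partitions.
# Assumes partitions appear as distinct CUDA devices, as MIG instances do on
# Nvidia data-center GPUs; device indices and the workload are placeholders.

import torch
import torch.multiprocessing as mp

def run_workload(device_index: int, steps: int = 100) -> None:
    """Placeholder workload: repeated matrix multiplies on one partition."""
    device = torch.device(f"cuda:{device_index}")
    x = torch.randn(4096, 4096, device=device)
    for _ in range(steps):
        x = torch.tanh(x @ x) / 4096.0
    torch.cuda.synchronize(device)
    print(f"partition {device_index} finished")

if __name__ == "__main__":
    mp.set_start_method("spawn")
    workers = [mp.Process(target=run_workload, args=(i,)) for i in (0, 1)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```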
Compatible with leading AI frameworks like Hugging Face and PyTorch, the platform supports state-of-the-art models from OpenAI, Google, and DeepSeek.
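To illustrate what that framework compatibility typically looks like, the snippet below loads an open model through the Hugging Face transformers pipeline; the model name is only an example, and nothing in the code is specific to Jetson AGX Thor.

```python
# Minimal Hugging Face / PyTorch example of the kind of generative workload
# the platform targets. The model name is illustrative; any CUDA-capable
# device with enough memory runs the same code.

import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # example open model, not Thor-specific
    torch_dtype=torch.float16,
    device_map="auto",                   # place weights on the available GPU
)

prompt = "List three sensors commonly fused on a mobile robot:"
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```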
"The historical latency issues in computer vision have been overcome through accelerated model processing and compute performance, enabling robots to tackle increasingly complex tasks," explained Sebastian Scherer, Associate Research Professor at Carnegie Mellon University's Robotics Institute.
Expanding Nvidia's robotics and automotive computing footprint
As part of its strategic diversification beyond generative AI, Nvidia is strengthening its presence in humanoid robotics and autonomous vehicle technologies. While robotics currently represents roughly 1% of the company's revenue, Nvidia is counting on strategic partnerships to drive adoption across manufacturing sectors.
"We don't manufacture robots ourselves, but collaborate with every major robotics company globally," emphasized Deepu Talla, Vice President of Robotics and Edge AI at Nvidia during Friday's press briefing.
On the automotive side, Volvo, Aurora, Gatik, and Tensor are adopting DRIVE AGX Thor developer kits. Boston Dynamics will deploy Jetson AGX Thor as the computational engine for its Atlas robot, transitioning away from the server-grade hardware used in previous implementations.