Luma AI Launches Ray3: Next-Generation Cinematic Video Generation Model with Built-in Reasoning Capabilities

2025-09-19


Luma AI, an emerging artificial intelligence startup, has unveiled Ray3, a groundbreaking text-to-video AI model equipped with built-in reasoning capabilities, specifically designed for high-end cinematic visual production by industry professionals.


The company also announced a strategic collaboration with Adobe to integrate this advanced model into Adobe’s AI-powered Firefly application, a comprehensive toolset tailored for creative workflows.


“Ray3 represents our first major step toward building intelligent tools for creative tasks,” said Amit Jain, co-founder and CEO of Luma AI. “Creative work ranks among the most intellectually demanding human endeavors. Yet until now, much of the AI available to creators has lagged far behind the sophistication of today's coding and language models,” he added.


Ray3's standout feature is its chain-of-thought reasoning engine, which enables it to interpret scene descriptions and follow the nuanced instructions of creative professionals. According to Jain, most video generation models on the market have functioned more like slot machines, showing promise but lacking true intelligence.


With its reasoning capabilities, Ray3 can plan intricate scenes, evaluate the logic of its own output, and refine the result to better align with the user's artistic intent before presenting it.
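
As a rough mental model of this evaluate-and-refine behavior (Ray3's actual internals are not public), the toy Python loop below generates a draft, critiques it against the prompt, and revises it until the critique comes back clean. The Draft structure and the checks inside it are invented purely for illustration.

```python
from dataclasses import dataclass, field

# A toy generate-critique-revise loop. It only illustrates the shape of the
# "reasoning" behavior described above; nothing here reflects Ray3's
# unpublished architecture.

@dataclass
class Draft:
    prompt: str
    shots: list[str] = field(default_factory=list)  # storyboard-style shot list

def generate_draft(prompt: str) -> Draft:
    # First pass: a deliberately incomplete shot plan.
    return Draft(prompt, shots=["wide establishing shot"])

def critique(draft: Draft) -> list[str]:
    # Compare the draft against the user's stated intent and list issues.
    issues = []
    if "close-up" in draft.prompt and not any("close-up" in s for s in draft.shots):
        issues.append("prompt asks for a close-up, but none is planned")
    return issues

def revise(draft: Draft, issues: list[str]) -> Draft:
    # Address each issue before the next round of critique.
    for issue in issues:
        if "close-up" in issue:
            draft.shots.append("close-up on the subject")
    return draft

def reason_and_refine(prompt: str, max_rounds: int = 3) -> Draft:
    draft = generate_draft(prompt)
    for _ in range(max_rounds):
        issues = critique(draft)
        if not issues:       # draft already matches the intent
            break
        draft = revise(draft, issues)
    return draft

print(reason_and_refine("rainy street at night, ending on a close-up").shots)
# ['wide establishing shot', 'close-up on the subject']
```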


The model mimics the workflow of animators and filmmakers by drafting storyboards before producing the final output. During this draft phase, users can collaborate with the model, providing detailed instructions such as annotations on specific video segments. The model is then able to follow multi-step creative processes and interpret visual annotations like hand-drawn sketches on still frames, enhancing precision in fulfilling user instructions.



Ray3 marks a significant leap from its predecessor, Ray2, with double the model size. It supports true high dynamic range (HDR) video generation in 10-, 12-, and 16-bit formats, exportable as EXR files in the professional ACES2065-1 color space. Practically speaking, this gives filmmakers and advertisers the same level of control over color grading, exposure, and lighting typically associated with footage captured by high-end cameras.
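
For a sense of why the extra bit depth matters for grading, the short sketch below (a generic illustration, not anything from Ray3's pipeline) compares the tonal resolution of common bit depths and shows how linear values above SDR's reference range get clipped in 8-bit output.

```python
import numpy as np

# Linear-light luminance ramp whose brightest values exceed SDR's
# displayable range (anything above 1.0 is highlight headroom).
luma = np.linspace(0.0, 4.0, 9, dtype=np.float32)

# 8-bit SDR clips everything above 1.0 to 255, flattening the highlights.
sdr = np.round(np.clip(luma, 0.0, 1.0) * 255).astype(np.uint8)
print("linear values:", luma)
print("8-bit SDR:    ", sdr)

# Higher bit depths leave far more tonal steps to grade with.
for bits in (8, 10, 12, 16):
    print(f"{bits}-bit: {2 ** bits:>6} code values per channel")
```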


The model also converts standard dynamic range (SDR) video from nearly any source into HDR, delivering richer color depth and enhanced editing flexibility. For instance, Ray3’s HDR conversion can brighten overly dark scenes without washing out colors.
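
Luma has not published how Ray3's conversion works, so as a point of reference only, here is the most basic step any SDR-to-HDR pipeline shares: undoing the SDR display gamma to recover linear light and placing it on an absolute-brightness scale that leaves headroom above reference white. Real converters do far more (highlight expansion, gamut remapping, noise handling).

```python
import numpy as np

def sdr_to_linear_nits(sdr_8bit: np.ndarray, reference_white_nits: float = 100.0) -> np.ndarray:
    """Minimal SDR-to-HDR-container step, for illustration only.

    Undoes the SDR display gamma to get linear light, then expresses it in
    absolute nits so it can sit inside an HDR grade with headroom above
    reference white. Whatever Ray3 actually does is not public and is
    certainly more sophisticated than this.
    """
    normalized = sdr_8bit.astype(np.float32) / 255.0
    linear = normalized ** 2.4            # approximate SDR display gamma
    return linear * reference_white_nits  # SDR white lands at ~100 nits; HDR peaks go far higher

frame = np.array([[16, 64, 128, 235]], dtype=np.uint8)
print(sdr_to_linear_nits(frame))  # dark values stay low; bright values approach reference white
```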


Users can generate video clips up to 10 seconds long from text or image prompts. Adding textual annotations to images allows for greater control over the initial output. Thanks to the model’s robust composition and visual understanding engine, combining multiple scenes is more seamless than ever, with improved consistency across generated sequences.
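
Programmatic access to such a model typically amounts to submitting a prompt and polling for the finished clip. The snippet below is a hypothetical REST-style sketch of that workflow; the endpoint, field names, and response shape are placeholders invented for illustration, not Luma's documented API.

```python
import time
import requests

# Placeholder endpoint and payload, shown only to make the text/image-prompt
# workflow concrete; consult Luma's actual API documentation for real
# parameter names and URLs.
API_BASE = "https://api.example.com/v1"   # not a real Luma URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def generate_clip(prompt: str, image_url: str | None = None, seconds: int = 10) -> str:
    payload = {"model": "ray3", "prompt": prompt, "duration_seconds": seconds}
    if image_url:
        payload["image_url"] = image_url  # optional image prompt
    job = requests.post(f"{API_BASE}/generations", json=payload, headers=HEADERS).json()

    # Poll until the clip is ready, then return its download URL.
    while True:
        status = requests.get(f"{API_BASE}/generations/{job['id']}", headers=HEADERS).json()
        if status["state"] == "completed":
            return status["video_url"]
        if status["state"] == "failed":
            raise RuntimeError(status.get("error", "generation failed"))
        time.sleep(5)

# Example (against a real endpoint):
# clip_url = generate_clip("rain-soaked neon street, slow dolly-in", seconds=10)
```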


Beyond the Adobe partnership, Luma AI revealed that Ray3 is already being adopted by Dentsu Digital Inc., one of Japan’s largest full-service digital marketing firms. As a launch partner, Dentsu intends to incorporate Ray3 into its production pipeline to enhance personalization and storytelling for domestic brands.


Creative industry leaders such as digital marketing agency Monks and advertising firm StrawberryFrog LLC are also adopting Ray3 to expand their creative capabilities. Furthermore, Saudi Arabia-based AI firm Humain plans to integrate Ray3 into its enterprise services for creative professionals.


“Ray3 isn’t just an upgrade—it’s a quantum leap forward,” said Steve Plimsoll, Chief Strategy Officer at Humain. “By enabling AI to reason across text, images, and motion, we’re not only accelerating the speed and fidelity of creative output, but also introducing smarter safeguards. This translates to faster delivery of sharper creative work, with content that adheres to ethical, regulatory, and cultural standards,” he concluded.