Generative AI startup Runway launches Gen-4.5 video model

2025-12-02

Runway unveiled a new video generation model on Monday, positioning itself to compete directly with leading generative AI companies offering both video and image creation tools.

According to the startup—founded in early 2018—Runway Gen-4.5 delivers enhanced visual fidelity and greater creative control, building on the company’s track record in generative AI for video, media, and artistic applications.

Users can generate high-definition videos simply by providing detailed prompts describing desired actions and movements. Runway claims the model excels in precise composition, anatomical accuracy, and expressive character rendering, while also supporting diverse stylistic controls and maintaining visual consistency.

The model was trained and runs on NVIDIA GPUs, which Runway used for pre-training, post-training, and inference.

Enhancements and Use Cases

Gartner analyst Arun Chandrasekaran noted that Runway Gen-4.5 exemplifies the ongoing evolution of AI models. However, it enters a crowded field, facing stiff competition from OpenAI’s Sora and Google’s Veo 3.1.

Although both Gen-4.5 and Veo 3.1 are video-generation models, they target distinct audiences and applications, Chandrasekaran explained. Runway’s output is primarily geared toward social media content.

“Runway has always focused on short-form video,” Chandrasekaran said.

In contrast, Google’s Veo targets longer-form content—such as product marketing videos lasting several minutes—rather than clips just a few seconds long. Gen-4.5 is better suited for platforms like Instagram, where brevity is key.

With this new iteration, Runway has significantly improved its ability to render objects and characters with greater clarity and consistency.

“It appears they’re also placing more emphasis on reconstructing complex video scenes,” Chandrasekaran added.

The Challenge of Realism

He further observed that current models increasingly produce imagery so realistic that real and synthetic content are becoming difficult to distinguish, a challenge that extends well beyond Runway as advanced video generators across the industry blur this boundary.

Forrester analyst William McKeen-White highlighted that this indistinguishability has sparked divergent viewpoints across the industry.

“I recommend including a disclaimer at the end of short videos to disclose AI involvement in their creation,” he advised.

McKeen-White pointed out that gaming companies have taken opposing stances on this issue. For instance, Epic Games advocates against labeling AI-generated content, while Valve supports mandatory disclosure.

“There’s currently intense debate among organizations about where to stand on this,” McKeen-White said. Yet even as the realism achieved by models like Runway’s raises ethical concerns, the technology still carries inherent limitations.

For example, the model still struggles with causal reasoning, sometimes depicting effects before their causes, such as a door opening before its handle is turned. Another persistent issue is object permanence, where items inexplicably vanish or appear without continuity.

“While memory retention and object interactions are improving, there’s still significant room for advancement in creating more coherent and temporally consistent shots,” McKeen-White concluded.