
liquid reality
Google unveils Veo, a high-definition AI video generator that may rival Sora
Google's video synthesis model creates minute-long 1080p videos from written prompts.

Benj Edwards – May 15, 2024 8:51 pm UTC

[Still images taken from videos generated by Google Veo. Credit: Google / Benj Edwards]

On Tuesday at Google I/O 2024, Google announced Veo, a new AI video synthesis model that can create HD videos from text, image, or video prompts, similar to OpenAI's Sora. It can generate 1080p videos lasting over a minute and edit videos from written instructions, but it has not yet been released for broad use.

Veo reportedly includes the ability to edit existing videos using text commands, maintain visual consistency across frames, and generate video sequences lasting 60 seconds or longer from a single prompt or a series of prompts that form a narrative. The company says it can generate detailed scenes and apply cinematic effects such as time-lapses, aerial shots, and various visual styles.

Since the launch of DALL-E 2 in April 2022, we’ve seen a parade of new image synthesis and video synthesis models that aim to allow anyone who can type a written description to create a detailed image or video. While neither technology has been fully refined, both AI image and video generators have been steadily growing more capable.

In February, we covered a preview of OpenAI's Sora video generator, which many at the time believed represented the best AI video synthesis the industry could offer. It impressed Tyler Perry enough that he put his film studio expansion plans on hold. However, OpenAI has not yet provided general access to the tool; instead, it has limited use to a select group of testers.

Now, Google’s Veo appears at first glance to be capable of video generation feats similar to Sora. We have not tried it ourselves, so we can only go by the cherry-picked demonstration videos the company has provided on its website. That means anyone viewing them should take Google’s claims with a huge grain of salt, because the generation results may not be typical.

Veo's example videos include a cowboy riding a horse, a fast-tracking shot down a suburban street, kebabs roasting on a grill, a time-lapse of a sunflower opening, and more. Conspicuously absent are any detailed depictions of humans, which have historically been tricky for AI image and video models to generate without obvious deformations.

Google says that Veo builds upon the company's previous video generation models, including Generative Query Network (GQN), DVD-GAN, Imagen-Video, Phenaki, WALT, VideoPoet, and Lumiere. To improve quality and efficiency, Google trained Veo on videos paired with more detailed captions, which the company says helps the model interpret prompts more accurately, and the model operates on compressed "latent" video representations.
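Google has not published Veo's architecture, but the "compressed latent" idea it mentions is a standard latent diffusion pattern: videos are encoded into a much smaller latent tensor, the diffusion model learns to remove noise in that compact space, and a decoder maps the result back to pixels. The toy sketch below (NumPy, with a patch-averaging stand-in for a learned autoencoder; all shapes and functions are illustrative, not Google's) shows why the compression matters:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny stand-in "video": 16 frames of 64x64 RGB pixels.
video = rng.random((16, 64, 64, 3)).astype(np.float32)

def encode(frames, patch=8):
    """Compress each frame by averaging non-overlapping patches
    (a crude stand-in for a learned autoencoder). 64x64 -> 8x8 per channel."""
    f, h, w, c = frames.shape
    return frames.reshape(f, h // patch, patch, w // patch, patch, c).mean(axis=(2, 4))

def add_noise(latents, t):
    """Forward diffusion step: blend the latents with Gaussian noise at level t."""
    noise = rng.standard_normal(latents.shape).astype(np.float32)
    return np.sqrt(1 - t) * latents + np.sqrt(t) * noise, noise

latents = encode(video)                      # shape (16, 8, 8, 3)
noisy, true_noise = add_noise(latents, t=0.5)

# A real denoiser (e.g., a diffusion transformer) would be trained to predict
# `true_noise` from `noisy`; here we only show the data flow and the savings.
compression = video.size / latents.size
print(compression)  # 64.0 -- the model works on 64x fewer values per frame
```

Because every denoising step runs over the latent tensor rather than raw pixels, the compute cost per step drops by roughly that compression factor, which is what makes minute-long 1080p generation tractable.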

Veo also seems notable in that it supports filmmaking commands: “When given both an input video and editing command, like adding kayaks to an aerial shot of a coastline, Veo can apply this command to the initial video and create a new, edited video,” the company says.
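Veo has no public API, so the workflow Google describes can only be sketched hypothetically. In the snippet below every name (the request type, the `edit_video` function, the file name) is invented purely to show the shape of a video-plus-instruction editing call; the model call itself is replaced by a placeholder:

```python
from dataclasses import dataclass

@dataclass
class EditRequest:
    """Hypothetical request shape: an input clip plus a text editing command."""
    source_video: str   # path or URL of the input clip
    instruction: str    # natural-language editing command

def edit_video(request: EditRequest) -> str:
    """Placeholder for a model call. A real system would condition generation
    on both the input video's frames and the text instruction, producing a
    new, edited video rather than this descriptive string."""
    return f"edited({request.source_video!r}, {request.instruction!r})"

result = edit_video(EditRequest(
    source_video="coastline_aerial.mp4",
    instruction="add kayaks to the water",
))
print(result)
```

The point of the two-field request is that, per Google's description, the command is applied to the supplied footage rather than generating a scene from scratch.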

While the demos seem impressive at first glance (especially compared to Will Smith eating spaghetti), Google acknowledges AI video generation is difficult. “Maintaining visual consistency can be a challenge for video generation models,” the company writes. “Characters, objects, or even entire scenes can flicker, jump, or morph unexpectedly between frames, disrupting the viewing experience.”

Google has tried to mitigate those drawbacks with "cutting-edge latent diffusion transformers," which is basically meaningless marketing talk without specifics. But the company is confident enough in the model that it is working with actor Donald Glover and his studio, Gilga, to create an AI-generated demonstration film that will debut soon.

Initially, Veo will be accessible to selected creators through VideoFX, a new experimental tool available on Google’s AI Test Kitchen website, labs.google. Creators can join a waitlist for VideoFX to potentially gain access to Veo’s features in the coming weeks. Google plans to integrate some of Veo’s capabilities into YouTube Shorts and other products in the future.

There's no word yet about where Google got the training data for Veo (if we had to guess, YouTube was likely involved). But Google states that it is taking a "responsible" approach with Veo. According to the company, "Videos created by Veo are watermarked using SynthID, our cutting-edge tool for watermarking and identifying AI-generated content, and passed through safety filters and memorization checking processes that help mitigate privacy, copyright, and bias risks."

Benj Edwards is an AI and Machine Learning Reporter for Ars Technica. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.
