GenAI Video Models

I’ve used all the recent GenAI video models extensively & here’s my 2¢.

Runway Gen3 Alpha – best image quality & motion for text-to-video & embedded words. Great at prompt travel changes over the course of a 10-sec clip. And I’m super bullish on how Gen3 will evolve, hopefully adopting the features listed below.

Runway GEN-3 Demo

Kling – best quality for image-to-video with prompt control, like eating food. Great clip extension that accounts for character motion (ie walking stride) & camera movement (speed & angle), rather than just using the final frame. But its limited availability & Chinese-native interface are limiting. Used for the Spider-Man video below (via Midjourney).

Kling demo

LumaLabs – best for keyframe start & end control (it cannot be overstated how important this is; other services should add it ASAP!) and their highly dynamic action movements are really fun. Luma was used in my viral Multiverse of Memes video.

Luma demo

PikaLabs – they haven’t gotten as much attention as the others lately. But they did update their video model a few weeks ago, and it looks great. They’re also notable for unique & AWESOME features, like video inpainting & outpainting.

PikaLabs demo

My perfect AI video platform would have the following features:

  1. Gen3’s quality, prompt control & text embedding.
  2. Kling’s image-to-video quality, prompt control & clip extension.
  3. Luma’s multi-keyframe control & dynamic movement ability.
  4. Pika’s inpainting & outpainting ability.

And a video-to-video (aka next-gen Runway Gen-4) could be a game changer, too. It’s an exciting time to be alive 🫶

Who will get there first?

Author: Blaine Brown
