
Midjourney AI images are made into moving images with AI.

StreetsofBeige

Gold Member


I think it looks impressive. With such rapid progress, it won't be long before everyone can create a short movie or something similar.

It might take a while, but you never know. It would be cool if all it took was uploading an image for it to make any of us into stars. These early AI demos look OK, some better than others. You can tell the AI has issues with head movement, but it'll improve.

It's really just an extension of social media, which gave everyone a voice for free. Now AI tools might let everyone make their own movie. Or, just like social media, there may not even be a commercial aspect to it: just people having fun uploading videos for laughs.

Down the line, you're going to see stars being made who do nothing but lend their face to a production. The person might have the shittiest acting skills, or hell, they might even be quadriplegic. But if they have a good face and voice that studios are interested in paying for, who needs them actually being there? Just make AI movies based on people's likenesses.
 

Salz01

Member
I also want to know the rendering pipeline, and what hardware was used to render the short clips. It probably took a few hours depending on the GPU. I'm excited for it, though.
 

Oberstein

Member
I tried it out of curiosity.

Like Midjourney, limitations in face rendering are often a problem; it's sometimes unusable. Beyond that, it's all very random: some Midjourney images lend themselves well and give a satisfying result, but sometimes it's a mess, with the AI going off in all directions.

The same positives and negatives apply to Gen-2. And since it's paid, it has to be used for something useful; if it's just for fun, it's too expensive.

Basically, Gen-2 is nice, but you might as well do it the way the Barbenheimer trailer did:



Just use Midjourney, then process the image in Premiere Pro/After Effects with simple effects (mist, wind, sparks) and animate the image by making the dress move, distorting it a little, and so on.

This way, it looks even better (for the moment) than using Gen-2.
 

Tumle

Member
I'll post it here also then :)


Made in Pika Labs, also with image prompts and text prompts.
Fan fiction art will explode when generations can be longer :)
 