The Greatest AI Video EVER?! (Available Now!)
- Added 29 Jun 2024
- Let's play with Runway Gen-3! The video mentions it's coming in a couple weeks but it's actually now available to everyone! They've also released a new prompting guide here to get the best results: help.runwayml.com/hc/en-us/ar...
Discover More From Me:
🛠️ Explore thousands of AI Tools: futuretools.io/
📰 Weekly Newsletter: www.futuretools.io/newsletter
🎙️ The Next Wave Podcast: @TheNextWavePod
😊 Discord Community: futuretools.io/discord
❌ Follow me on X: x.com/mreflow
🧵 Follow me on Instagram: / mr.eflow
Sponsorship/Media Inquiries: tally.so/r/nrBVlp
#AINews #AITools #ArtificialIntelligence - Science & Technology
Update: Gen-3 is now publicly available. I mentioned a couple weeks... It was only a day! :)
Runway released a prompting guide to help you get better outputs here: help.runwayml.com/hc/en-us/articles/30586818553107-Gen-3-Alpha-Prompting-Guide
YOU ARE THE BEST YOUTUBER THAT TALKS ABOUT AI NEWS
Any word on when image to video is coming for it?? That's what I MOST wanted!
Is "no" publically available? Or "now"? Or "not"? Lol
It is only available if you are a paid member, and the price is steep: 100 credits per 10-second video. You get 625 credits for the $15/month subscription, so at best you can make about a minute of video. 😮
@@mreflow now*
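The credit math in the comment above can be sketched out; note these figures (100 credits per 10-second clip, 625 credits on the $15/month plan) are the commenter's numbers, not official Runway pricing:

```python
# Rough Gen-3 credit budget, using the commenter's figures (assumptions,
# not Runway's published pricing).
CREDITS_PER_CLIP = 100   # one 10-second generation
MONTHLY_CREDITS = 625    # $15/month subscription
CLIP_SECONDS = 10

clips_per_month = MONTHLY_CREDITS // CREDITS_PER_CLIP  # only full clips count
total_seconds = clips_per_month * CLIP_SECONDS

print(clips_per_month)  # 6
print(total_seconds)    # 60 -> roughly one minute of footage, as the comment says
```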
8:21 He was reloading the baguette with cheese.
By the time Sora finally becomes available we’ll probably already have an open source version or free one that’s just as good
That would imply that OpenAI is putting no effort into improving the model since they showcased it in February, which i doubt...
@@CosmicCells I’m not saying it’s not going to be good I’m just saying they’re taking so long to release it that competitors have time to catch up
@@fast4549 Catch up to the February snapshot we saw. I know what you are saying but I think that Sora, like every other video model, is most likely being improved as we speak so whatever version we get when it comes out will not be what the Sora we saw back in February... More like a Sora 1.5 or even 2.0.
@@CosmicCells We know ClosedAI has been putting a lot of effort to peddle Sora to Hollywood and companies with deep pockets. To improve it or make it available to the public, not so much it seems.
@@CosmicCells Nah, they're putting all their energy and effort into making it totally useless for the exact things that we all want it for.
While it's really impressive how far we've come with AI video, I'd really like to see how specific we can get with it. Your prompts were kinda generic. What happens if I want a character to wear very specific clothing? Colours, material, fit? Can it do a coat with mother-of-pearl buttons, or do we get any old buttons?
Image Gen looks impressive as long as you do generic portraits of people, but breaks if you do anything complicated and out of the ordinary. Try prompting for a handstand or someone dangling from a tree branch heads down.
To be useful for serious production work and not being just a toy, you need to be able to have a granular level of control. I know I'm asking for a lot, since we're only at the beginning stages yet, but it'd be nice if you really stress-tested it, so we know where we're standing at the moment. Still, great work, Matt! 🙂
We need the ability to put an image in, and animate from there
I think it will be better when the image to video option becomes available.
Is it not available yet to creators?
@@lukewilliams7020 Apparently not. Several others answered my question to that effect. 🥲
I find in general with AI video generators that Image to video has better results than prompt to video.
@@lukewilliams7020 seems that way. img2vid is likely on the map, though.
It's available, just go to text to video and upload an image instead.
Low-quality clip-art video for people who don't really need anything particularly good. It'll do, because it doesn't really matter: no one pays much attention to it anyway. This will be great for those adverts at the side of blog posts that everyone's trying hard not to accidentally click on while they focus on the thing they googled in the first place. Marketers will love it. As long as it moves, it'll do.
bro, hit me up. I stayed up 3 days and by day 3 I was getting studio-quality stuff. I also came up with several different AI ideas, like bass fishing on my phone, that the companies steal.
The band playing music beneath the ocean was pretty cool! Even the floating microphone looked kinda real lol!
The movement of the singer's hair underwater impressed me the most.
Quite realistic.
next time you see a demo, expect that they generated 100s of videos to make 1 video demo!
These funky artefacts heavily remind me of image gen around DALL-E 2, Midjourney 4 and SD 1.5.
As we've seen those ironed out and improved in later versions, now we're seeing better hand generation and legible text in the newer versions of those image gens - can we expect similar, along with improvements in temporal consistency, in these video gen models? Pretty excited for the next year, if so.
That baguette video blew me away a little bit.
That’s cool, but I wish we had more control over the videos created. The videos are starting to look better, but the control is still lacking, which makes these videos not that useful for filmmaking projects. Also, we need *‘Image to Video’* if we want this tool to work with consistent characters, and to create scenes for films. I want to be the director of the film, rather than letting the AI have all of the control over how a scene looks.
comparing this to year-ago generations, it's a big improvement. Next year vid generations should be even better
So many of these videos look like they’re beyond CGI, they’re really captivating!
I would like a video on why the AI videos do what they do and what the blockers are in making them more realistic - I mean can the output be put back through an AI that corrects all the issues?
I feel like a lot of the clunk could be resolved in the future with a critique-AI pass of sorts that can check object permanence and physical interactions
The color palette it picks is pretty good. Also I like the contrast. I'd say in about another year we'll be looking at some pretty fantastic looking AI videos from all companies. Thank you Matt for putting in the time to show us all!
11:02 Imagine if the man & woman hugging here were ‘consistent characters’. I hope it will become easier to create scenes with *‘multiple consistent characters’.*
Crazy fire 🔥
Thank you.
17:58 the girl on the water is practically perfect, very difficult to see any problems. Impressive!
Gen 3 said ‘not available in your country’. LTX was available today and it was like an interface for the usual crappy vid generators. A waste of time and early signup attention. Great vid as always 👌
Thanks for your video on Udio. I am addicted to it now. The “copyrighted lyrics detected” is pretty annoying. Considering we can easily get cover song licensing and both the AI prompter and rights holder could be making money, it’s kind of dumb to restrict usage of other peoples music.
Huh, another AI company showing off their product and preventing the general public from trying it out.
I know, it’s annoying. Esp when the companies don’t even give a release date to look forward to
AI is the new tech bubble… the bust this time will be bad.
Huh another greedy ass consumer expecting something for nothing,
The current state of the market and what consumers have recreational access to is incredible
@@devonwilliams2423 huh another idiot who fanboys AI to the point of drool. What about my comment offended you sweetheart?
Who said I wasn’t happy to pay?
I pay for GPT and Midjourney right now - this had nothing to do with free access, it was about access and hype.
@devonwilliams2423 We don't necessarily expect something for nothing. Many would be happy to pay, including me. We'd just like a release date, and not get told it releases in "a few days" only to find out that's not true.
Now, what is the compute power used for these Gen-3 videos vs the compute power used for the Sora demo videos?
that baguette ops looking good
9:06 He just turned clipping off on his hand for a second 😆
🎶 Runway train never coming back. Wrong way on a one way track 🎶
Thanks for this. I've been using Pika but this looks like it will be useful as well!
Nice to see someone make the greatest AI videos, since I already make the best AI songs ever made ^^
Just tried this and it's amazing for music videos. When making a real film you're also creating a lot of takes to cherry pick the best one, so I'm okay rerolling until it looks ok
Awesome thank you for heads up
This is very entertaining to watch.. Thank you 🙂
I'm convinced they are all Comfy UI workflows lol
Call of Bread Restaurant Zone - Loaf and Load
This helped me a lot! I've been waiting for a minimum quality level and I think we're just there by our fingernails. I was going to wait a bit because I couldn't decide on the tools and procedures. I think the best control and quality for me will be midjourney/lumalabs/elevenlabs/synclabs. If anybody has alternate suggestions I'd love to hear them. I'd wait for runway to have image to video but there's always a reason to wait and no guarantee of how it will compare to Luma. Time to make some cool stuff!
Dalle is a fantastic alternative to MJ and within chatGPT you can conversationally alter your images on the fly. Only downside is Dalle is very overly censored. A free alternative is Ideogram, very powerful and excellent at text in images.
Imagine how amazing the next iteration of Walking with Dinosaurs will look with this tech.
Simply imagine the fun video editors are gonna have once a solid AI video generator arrives. Works will be done in a matter of secs. 🤩
Really impressive!
bruh when kling finally becomes available to everyone, it'll wipe the floor with every other video gen out there
Seems like video game footage was used to train this model; look how well it did with all the video game prompts
Runway costs $35 a month for less than 8 minutes of video, and that's for Gen-2.
The next plan is $95 per month for unlimited generations. This is a professional platform, not for end users.
more like an app with pro costs but n00b outcomes
Great video! Looking forward to trying this and still can't wait for Sora!
The quarterback at 12:30 was sacked so hard he's looking out the earhole of his helmet as he fumbles the ball.
Wow, this sure shows how much further they need to go to get a full 10 seconds that's usable. But it does look like there's a few seconds in most clips that are pretty good.
tbf you did request the rapper to be "signing into his microphone" lol
looks like he was loading the baguette
Marvel is watching runway. The lawyers are properly getting their paperwork ready.
Hi Matt, would you mind saying what kind of computer system you are using to generate your videos. Just wondering how powerful a system you need to keep the generation time to sane levels. Thanks.
I animate and do lip sync from MidJourney images, so I wish I knew how this was for that. I don't need any 1997 screen savers LOL. So far it doesn't look as nice as Luma but I'm only a third of the way into your video and I will finish watching now
And THIS is just an Alpha-Release. ALPHA!! And look how well it is starting off at that Level. We'll be able to get improvements in the days and weeks to come. And this remember is with all those Creator Media 'Restrictions' they impose on AI Scraping as well.
In other words, there is a long marathon left before this is really good... 😮💨
Runway has always sucked at this; I'm not putting my $ on this app
As of this date, should one invest time in Luma or Runway. Which one is the best end result for the money/monthly credits?
I'm excited!
Could you please share just a few of your videos so that it would be possible to download them and analyse how they handle or manage the image to image (25 or 30 frames/second), just to see?
I love the game concept videos
Matt - what is that matrix code "lamp" in your bg? I'd love to get one 🥺
However, for some use cases definitely useable! 🎉🎉
I really hope makes one for us who are VJs for concerts/festivals so we can have generative content on the fly
This would be incredible for live VJ sets
@@RaxLakhani yup. i run some stuff i made myself for use on my LED walls when we do fests or concerts but it certainly could be better
Thanks, Matt. I'm glad I subscribed to 'Stripcue To Coeplee Chickk Thenn'.
All I want is to be able to tell Netflix “Generate season 9 of game of thrones” or “generate a thriller about…”
Awesome! Thanks Matt
i want to see you do an AI video for typography if it exists.
On that text test, I think you need to give it a more simple prompt that's a single word or two, and keep the background simple.
Cool!
Just got an email giving access, so I'll give it a test drive (crash?) and post on X, because that's what people do.
Waiting for Pika to make a move as well?
Exciting times .
Thank you, Matt.
18:17 omg that is Twiggy.
Give it 6 months and hands will be solved just like images, give it a year and cohesion will be pretty much complete and length will be extended from seconds into minutes.
As a working video editor / multimedia designer, I've been using many different AI video tools over the last year. This looks a little better than the current crop, but honestly, AI video is not there yet, not for professional use anyway. And I'm yet to be convinced that Sora hasn't heavily manipulated their sample videos in post to please investors. I hope I'm wrong.
How long of a clip can you create with the Pro subscription?
Needs image to video (I find that gives better results in most AI video generators). Results look OK and have come a long way in a short space of time, but there's still a long way to go IMO. Any idea on the cost of this?
what about image to video? did you try that?
Was waiting eagerly for this video
Thanks Matt 👍
Tried Sora/Keling/Gen3/Pika; have to say these tools are not very differentiated, at least from a user's perspective.
Text-to-video is still a barrier for most mobile users. (And that's why we were founded: making productivity more equal through a video-to-video method.)
14:42 he’s an AI millionaire 🪙
Can it do higher resolution than 720 p?
Don't forget to Chickle Then! 😂
Very interesting so far. AI video and games really tickle my fancy.
Very good video, and I had a similar experience in my first tests: you definitely need the $95 unlimited plan to use it, because >90% of results are garbage. Also, no image input. I would not recommend it at the moment for normal usage. But if you want to experiment a lot and learn which things work, it can generate some interesting and sometimes just amazing output.
It looks promising. For me the most useful workflow is generating images and then animating them; I hope they add this feature soon.
yeah, my second generation got stuck after reaching 95%
14:02 Guys, don't forget to _Stipcile to Cplehlee Chickk Then_
Can Runway Gen 3 take in starter images like Luma?
And continuation of prior clips
is there an ai that can help me generate some basic clip art style for a billboard advertisement?
It's unclear how many Gen-3 videos we can generate under the various paid plans.
3 minutes in total on the $35/month Pro plan, and 60 seconds on the $12 plan. So sadly I don't think there will be many usable seconds.
@@bloxyman22 Thanks! So that's 18 Gen-3 videos per month on the $35 plan. Almost $2 per 10 sec video. Not too bad if the videos are good but otherwise it's quite expensive. I see they also have a $95 unlimited plan (relaxed speed).
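The per-clip cost worked out in this thread can be double-checked; note the inputs (3 minutes of Gen-3 output per month on the $35 Pro plan, 10-second clips) are commenter-reported figures, not Runway's published pricing:

```python
# Per-clip cost on the Pro plan, using the thread's numbers (assumptions,
# not official pricing).
PLAN_PRICE_USD = 35.0
TOTAL_SECONDS = 3 * 60   # 3 minutes of generation per month
CLIP_SECONDS = 10

clips_per_month = TOTAL_SECONDS // CLIP_SECONDS   # 18 clips
cost_per_clip = PLAN_PRICE_USD / clips_per_month

print(clips_per_month)          # 18
print(round(cost_per_clip, 2))  # 1.94 -> "almost $2 per 10 sec video"
```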
More interested in the image to video.
When do we get to see you play the banjo, Matt?
He will make an AI video of it 😆
AI will play the banjo
Thanks bro yep still a little while to go.
2:24 What is the name of song and where I can find it?
What is B roll?????????????
mreflow still freaks me out every time lol
Maybe they won't release it until the music industry's lawsuit against the AI music makers is over? These companies have got to be scared that they'll really get sued over their "fair use" training on copyrighted material.
I think it is only a matter of time until copyright laws become obsolete as we know them.
It's just been made available to all 😀😀
Awesome content, thank you. My only worry is that we'll have to generate each video multiple times; I don't want to burn through my monthly limits in just a few days.
I assume OpenAI is allowing others to release theirs before they release Sora so they don't catch flak for being the first. Then once the public is accustomed to decent video generation, they release Sora which will be probably better since they have more compute, and receive little to no backlash. At least that's my guess...
They're busy nerfing it so it's useless.
Actually it's better to be first. That way any issues or glitches or problems can be argued with "Well we were first, the others have had time to improve on ours", if Sora comes out last after all these and has issues that these ones don't it's going to kill it instantly. Also Sora will not be able to argue or demand a high pricing if the others are doing as good for cheaper.
@@bigglyguy8429 That. Being 'PC', and it'll only be after the chosen one from the country maximus championship.
The bowling video is so uncanny that it made my legs go cold and filled me with a deep sense of dread 😨 Soon we'll have the name of a new phobia for these kinds of things 😂
I would suggest genophobia, but that one's very much taken... 💀
Yet another great video model with little to no big commercial application, but it's fun to play with if you don't like playing any actual games. 😅
What qualifies someone as a creator to get early access? I thought if you have a standard subscription you get early access? I have standard but I don't see the Gen-3 option. I'm not on desktop right now though, only my phone.
Is dancing another passion of Matt's besides AI? In the other video you used people dancing in Magnific Relight lol
I wonder why hands? Why?