Transform Your Videos into any LoRA Style with Stable Diffusion
- Published 18 May 2023
- Unleash the magic of Studio Ghibli in your videos with this easy-to-follow tutorial. I guide you through a seamless process of transforming your ordinary footage into captivating Ghibli-style visuals using a LoRA model within Stable Diffusion. This tutorial is perfect for beginners and advanced users alike, covering the usage of both free and premium software. By the end of this video, you'll be equipped with the knowledge to apply this technique swiftly and effectively.
📣📣📣I have just opened a Discord page to discuss SD and AI Art - common issues and news - join using the link: / discord
🤙🏻 Follow me on Medium to get my Newsletter:
- Get UNLIMITED access to all articles: / membership
- Laura: / lauracarnevali
- Intelligent Art: / intelligent
📰 Medium Article:
/ creating-cool-videos-w...
📌 Links:
My Prompt Settings Google Drive: docs.google.com/document/d/15...
DaVinci Resolve: www.blackmagicdesign.com/prod...
Ezgif.com: ezgif.com
CivitAI AnyLora Checkpoint: civitai.com/models/23900/anyl...
CivitAI Ghibli Style: civitai.com/models/6526?model...
ControlNet GitHub: github.com/Mikubill/sd-webui-...
ControlNet MediaPipe: huggingface.co/CrucibleAI/Con...
ControlNet Models: huggingface.co/lllyasviel/Con...
00:38 Requirements to follow this tutorial
06:27 Scope of the tutorial
08:45 From video to frames (1-4)
08:50 (1) DaVinci Resolve - Free
09:02 What is FPS (Frames Per Second)?
11:36 Choose the correct "file name"
12:28 (2) Ezgif.com - Free
12:57 (3) ControlNet m2m - Free
14:10 (4) Photoshop - NOT Free
16:09 AnyLora checkpoint model
16:28 LoRA model (Ghibli Style)
16:39 Working with the img2img tab
18:17 What is CLIP Skip
19:40 What is ENSD
20:52 What is Control Mode in ControlNet
24:16 Why the seed is important
24:54 Generate a batch of images
27:12 Upscale the images using the Extras tab (1-2)
27:47 (1) Upscale a single image
28:27 (2) Upscale a batch of images
29:40 From frames to video
32:06 Deflickering with DaVinci Resolve Studio - NOT Free
32:55 Deflickering with free DaVinci Resolve - Free
#aiart #stablediffusion #generativeart #lora #stablediffusiontutorial #cartooning #cartoonify #controlnet
You just LoRA'ed a Laura. Perfect.
This is one of the best tutorials out there, very easy to follow. Thanks!
Very detailed tutorial! Thank you very much!!
Awesome videos, so much help! Thank you for your time and effort
Fantastic tutorial 👍🏻
Great vids- looking forward to your suggested video of setting up a LORA from scratch 🤞
A brilliant and enjoyable tutorial... loved it, great work!
This video is in my top 5 (or maybe even 3) most useful videos of 2023.. Thank you Laura! Oh, and just to throw it out there, DaVinci 18.5 Studio (and earlier, to an extent) will render out PNG sequences, so you don't have to worry about bulk converting from TIFF.
Let's go, love this kind of vids ❤❤
Beautiful art.
Nice video, detailed and well explained, I enjoyed it 🙏🏻
I would love to see you do the same steps but using the Deforum extension to make a video (it has ControlNet as well) instead of img2img, and show us the results
With Deforum you can get even more control and smoothness, as far as I heard
super cool :)
it should be fun doing that and playing with the prompts
Brava Laura! 👏🏽👏🏽 Great video! One comment about FPS and Duration @ 13:50, for ControlNet m2m. I'm pretty sure the duration slider's value is in frames: you have to select your video's length in frames using the slider, in your case 180 for the 6 seconds. But it is confusing not to have the units!
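The frames-from-duration arithmetic in the comment above is just duration times frame rate; a minimal sketch (the helper name is my own, for illustration only):

```python
def duration_to_frames(seconds: float, fps: int) -> int:
    """Convert a clip duration to the frame count a frames-based slider expects."""
    return round(seconds * fps)

# A 6-second clip at 30 FPS corresponds to 180 frames,
# matching the slider value discussed in the comment.
print(duration_to_frames(6, 30))  # → 180
```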
You are the best!
KEEP ROCKING
good video , thanks
I am here not for knowledge) I am here to watch Laura) Every word is cute)
😂😂😂 thanks
I found out about the whole Stable Diffusion/AI thing just a couple of days ago and I'm messing around with it, trying/experimenting with it all, and watching tutorials every chance I get.
You can be sooooooooo creative with it, it's super awesome!
I hope i'll succeed with this transforming video thing too.
I already have a million ideas in my head, but I need to get some sleep first because my brain is exploding from all the tutorials and from figuring out how everything works by trying a million different things non-stop for hours and hours the last couple of days.
Thanks for the great tutorials, you have so much knowledge and you explain everything very clearly and in a pleasant way (for a noob like me😁) 👍
It's so nice to hear that! Good luck with your ideas! :) If you want to share more you can join the discord channel
discord.gg/DsUv2W4a
haha, this is me currently! My brain is all fried from watching tutorials and installing SD and Dreambooth, and it isn't working, lol! Any tips from your progress? I want to quickly turn real videos into animation in an art style like Ghibli's.
You're awesome!
Awesome tutorial. I always wanted to know how people did these animations using ControlNet. Finally!🙂Thank you so much👍
Another great tool for extracting video frames (and recompositing them) is Blender.
When you pull it into Resolve, make sure all frames are selected, then hit Ctrl+D. That will bring up the duration box. Click on the frames tab, then change it to 1 frame. Then click Edit and Delete Gaps.
Thank you!!!
@LaCarnevali Using ffmpeg is the easiest way to extract frames from a video file for free
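ffmpeg can indeed extract frames for free. A minimal sketch of the command, assembled in Python so each piece is labeled (the output pattern and fps value here are illustrative assumptions, not from the video):

```python
def build_ffmpeg_extract_cmd(video_path: str, out_pattern: str, fps: int) -> list:
    # -vf fps=N resamples the video to N frames per second before writing images;
    # %04d in out_pattern yields zero-padded names like frame_0001.png, which
    # keeps the frames sorted in order for batch img2img.
    return ["ffmpeg", "-i", video_path, "-vf", f"fps={fps}", out_pattern]

cmd = build_ffmpeg_extract_cmd("input.mp4", "frame_%04d.png", 30)
print(" ".join(cmd))
```

Run the printed command in a terminal (with ffmpeg installed) to dump the frames into the current directory.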
Can you also make the follow-up video with the LoRA training? I'm eager to learn how to create more consistent results
I think I'm still going to use Temporal Kit and EbSynth for the same result the easy way. But thanks a lot, it's really good info. Thumbs up!!
At 28:10 you didn't set GFPGAN Visibility away from 0. This setting isn't only related to the 2nd upscaler.
Hi Robin ☺️ Not sure if you watched the whole video, but I'm saying you can decide whether to apply it or not depending on the result.
Hi Laura! Thank you for your videos, they help me very much. I have a little problem. I'm running the diffuser over Google Drive. The very last part, where I need to select the batch input directory, is not working for me. I did everything you said but it's not working, I just can't find the path. What should I do?
I think we sorted this on Discord :D
I started with Ezgif, but realized that (since my clip is 60 seconds and I want a single group of numbered images) DaVinci Resolve was the best choice. I followed the instructions to generate 1907 .tif files and then just used the Preview app on Mac to convert them to .jpg.
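For a batch like the 1907 frames above, the key detail is zero-padded sequential names so the files sort in frame order. A small sketch of the naming scheme (the prefix and digit count are my own assumptions for illustration):

```python
def numbered_jpg_names(count: int, prefix: str = "frame_", digits: int = 4) -> list:
    """Generate zero-padded .jpg names so a batch sorts in frame order."""
    return [f"{prefix}{i:0{digits}d}.jpg" for i in range(1, count + 1)]

names = numbered_jpg_names(1907)
print(names[0], names[-1])  # frame_0001.jpg frame_1907.jpg
```

Without the zero padding, lexicographic sorting would put frame_10 before frame_2 and scramble the reassembled video.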
If I'm using RunPod, does it matter which folder I put all of my batch input files in?
Did you try exporting at 15 fps or 10 fps and then reconstructing in DaVinci with time warp? It might make the image more fluid and stable.
woo sounds like a good hint! Will definitely try it :)
I couldn't find the ControlNet model for mediapipe_face, any way to get that?
huggingface.co/CrucibleAI/ControlNetMediaPipeFace/tree/main
You need to download it from there and move it into the webui folder
Can we use it in Deforum?
yup!
Hi Laura, what are your specs? Trying to apply all those models, my 1660 is screaming!!!
I'm using an Nvidia RTX 3090 GPU
@LaCarnevali I suspected it ;)
@invasionecreativa Yeah, I cannot run it on my Mac! The alternative is to use Colab/RunPod/Paperspace, etc.
Do you do zoom office hours?
No, I don't usually. What do you need help with? You can also contact me via email.
Wouldn't this be much faster with the SD-CN-Animation extension instead of spending time in an outside video-to-frames app? It's not in your tabs in SD, so I guess you don't have it installed? It should be available in the Extensions tab. It has built-in ControlNet support, text-to-video, and video-to-video, and should be able to pull off what you did here in fewer steps.
Hiii! I've not tried it, but will do sooner rather than later!!! Thank you for the hint ✌🏻☺️
Looking forward to a video on it ;D
Can this be used to change the Little Mermaid?
What about the gaze direction? Why is nobody talking about this major failure in ControlNet?
This is a very good one, thanks, I will 100% look more into it
Laura, I just installed SD on my PC after working for a few weeks on a Mac M1 Max with 32GB of RAM. I'm reborn! Even a 3070 with 64GB of RAM seems to run well, and I'll try it with video too. Thanks and congratulations. A channel subscription? I hope it grows. Regards
My workflow is very different
Stable Diffusion is not working in Google Colab
Only on the free version. It works if you upgrade to Pro
@LaCarnevali How much does it cost?
@@TECH__SHUBHAM colab.research.google.com/signup