Transform Video to Animation in Stable Diffusion | How to Install + BEST Consistency Settings
- Published 11 May 2024
- Learn how to use AI to create animations from real videos. We'll use Stable Diffusion and other tools for maximum consistency
📁Project Files:
bit.ly/3HdoT67
How To Install Stable Diffusion WebUI on MAC:
• Mac: Easy Stable Diffu...
Stable Diffusion 1.5_ Install, Comparison, Guide - Everything you need to know:
• Stable Diffusion 1.5: ...
Best Custom Stable Diffusion Models:
bit.ly/43sJWLm
Disclaimer: Some links in the description are affiliate links. If you make a purchase through them, I may earn a small commission at no extra cost to you.
🔗 Software & Plugins:
A1111 Webui Launcher: bit.ly/3Js98dY
Arcane Model: bit.ly/3j2vFDL
Adobe Media Encoder: prf.hn/l/b3XZ8kG
After Effects: prf.hn/l/ZYg0GWV
Revisionfx DeFlicker: bit.ly/3kKE8M8
Topaz Video AI: bit.ly/3t04Otl
💻My Setup:
ConceptD 7: acer.co/MDMZ-ConceptD7
Web: conceptd.acer.com/
Social: acer.co/ConceptD-Social
©️ Credits:
Card Trick: www.pexels.com/video/man-perf...
Man under snowfall: www.pexels.com/video/studio-s...
⏲ Chapters:
0:00 Intro
0:26 How to Install Stable Diffusion - A1111 Easy method
1:09 What's a Stable Diffusion Model
1:37 Automatic1111 WebUI Installation
2:46 Stable Diffusion Interface
3:02 How to Download & Install Models
3:38 Stable Diffusion Face Restoration
3:54 Inpainting Conditioning Mask
4:16 How to Export Frames From Video
5:00 How to Use img2img in Stable Diffusion
6:05 Denoising Strength: Explained
6:23 CFG Scale: Explained
6:34 Stable Diffusion Consistent Video Settings
8:18 Img2Img Batch Processing
8:55 Turn Frames Into an Animation
9:37 How to Reduce Flicker
10:34 Color Grading
11:01 Export Animation
11:20 Upscale Animation: Enhance Quality
12:05 Why Subscribe?
🎵 Where I get my Music:
bit.ly/3boTeyv
🎤 My Microphone:
amzn.to/3kuHeki
🔈 Join my Discord server:
bit.ly/3qixniz
Join me!
Instagram: / justmdmz
Tiktok: / justmdmz
Twitter: / justmdmz
Facebook: / justmdmz
Website: medmehrez.com/
#stablediffusion #animation #ai
Who am I?
-----------------------------------------
My name is Mohamed Mehrez and I create videos around visual effects and filmmaking techniques. I currently focus on making tutorials in the areas of digital art, visual effects, and incorporating AI in creative projects.
Follow me on twitter for all new stuff: twitter.com/JustMDMZ
Join our Discord for more help and behind the scenes: bit.ly/3qixniz
Damn, I finally found the tutorial, I've been looking for it for 4 days. Really thank you so much. And thanks youtube for recommending this video.
Thanks for sharing this is one of the more comprehensive tutorials. Really great thanks!
Glad it was helpful!
Wow, a very detailed tutorial!!
There are plenty of videos that turn footage into video using Stable Diffusion's i2i batch feature,
but none of them solved that characteristic shimmering afterimage,
or the way the image drifts slightly because of the Denoising Strength,
so videos made with SD always ended up with that distinctive flickering look.
This is the first tutorial I've seen that actually solves those issues.
You have to set the seed to a fixed number (like 0) and the variation seed to the same fixed number (also 0) if you want to get consistent results without randomization. Also, check the settings page for even more options, if you want to eliminate the randomization completely.
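As a sketch of the fixed-seed idea from the reply above (hypothetical helper and variable names; assumes the WebUI was started with the --api flag, which exposes JSON endpoints such as /sdapi/v1/img2img):

```python
import json

# Hypothetical helper: builds an img2img request payload for the A1111 API.
# Pinning "seed" and "subseed" to the same fixed number (and zeroing
# subseed_strength) removes the per-frame randomization, which is what
# keeps batch-processed video frames consistent.
def build_payload(prompt, init_image_b64, seed=0, denoising_strength=0.4):
    return {
        "prompt": prompt,
        "init_images": [init_image_b64],
        "seed": seed,            # fixed seed: same noise pattern every frame
        "subseed": seed,         # variation seed pinned to the same value
        "subseed_strength": 0,   # no blending toward a variation seed
        "denoising_strength": denoising_strength,
        "cfg_scale": 7,
    }

payload = build_payload("arcane style portrait", "<base64-encoded frame>", seed=0)
print(json.dumps(payload, indent=2))
```

This only constructs the request body; you would still POST it to your local WebUI instance.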
your video explained this whole process the best so far for me. Thanks!
Great to hear!
Excellent tutorial, even better if we use OpenSource applications instead of proprietary ones.
That's the best tutorial that I've ever found! Thank you, dude! And good luck with your YouTube channel
🙏
bro I saw many videos trying to understand this tool... and the way you explain it seems very easy. Thanks again, looking forward to seeing more videos on AI
Glad it helped
Amazing video!.. thanks so much for this. I just created my first AI art video! Really great explanation!
Highly needed!
Wow! Everyone is happy with this tutorial even though I can't download the basic model! So lovely! I just see "Waiting for file to be created..." a hundred times in this black window.
you can skip the basic model; I've noticed this happening with many people lately, it used to work.
when it did that I just closed and reopened it, pressed no, and it was fine
Corridor Crew also used this method to remove flickering in their latest anime rock-paper-scissors video
Great vid btw
Thank you for interesting details to make a video better!
Glad you liked it!
Excellent, thanks for sharing this, Bro!! 😎
My pleasure!
Very nice, thanks a lot from Brasil !
Very high quality content! Thank you so much!
Glad you enjoyed it!
Cool one, chum. Thanks and keep up the good work.
Thank you too!
Appreciate!!!Thanks for the detailed explanation~
My pleasure!
This is kinda the future of creative post-production. Leveraging AI capabilities to produce high quality animation
Yess
It's not an animation mate. It's a picture filter. Nothing more
the best video of internet!!!
this is incredible THANK YOU!!! so damn easy
U r welcome :)
Waoo 😅very easy thank you
Thank-you for great tutorial!
Thanks for watching!
@@MDMZ I got stuck
Fetching updates for Taming Transformers...
Error code: 128
stdout:
stderr: fatal: reference is not a tree:
what to do now?
Good to see a non-toxic tutorial, I like it. I kept seeing those annoying people making thumbnails like: you must use Stable Diffusion now and make 1 billion $ in 1 second or you are a coward. Honestly, I would never click on something like that
😂
best tut so far
Great tutorial. Off to the races !
Glad it helped!
Thank you so much!
You're welcome!
i would try this rn!
good bro, thank you
thank you, nice tutorial.
You are welcome!
You're a real pro
Very Nice tutorial !
Thank you! Cheers!
So helpful
Thank you soo much! You are great
You're welcome!
Hello man ! Really good video thanks a lot !
My problem is my preview image does not work, so difficult to see the changes, I tried to change settings but nothing to do... is it possible to help me to have this preview?
Thanks for the tutorials, it was the only one I managed to learn some things. Spiderman arm video tutorial please (Did you use disco or stable on it?)
I used stable diffusion for that
Thank you!
THANK you so much 😍
subbed!!!!!
Hey, MDMZ thx it worked and thx for helping in the process.
Just a quick question ,what should i do when i press interrogate clip and it doesn't work?
log_vml_cpu .error
try with a different input and see if it works; if it persists regardless of the image, I would try re-installing.
Thank you, amazing 🙏🙏🙏🙏
You're welcome 😊
thanks bro
Nice one bro! thanks!, Never heard of Deflicker and just found out is also on linux.
Happy to help
Nice
Thanks for the detailed explanation. I think YouTube has some rules for using this type of thing, can I use the result in my videos? Wondering.
Yes you can!
Thanks ❗
No problem
Thanks for the video, but NOTE, for security reasons... do not use .ckpt-formatted models. Use the .safetensors ones instead whenever possible. They cannot contain malicious code, whereas a .ckpt file can; this is rare, but why take a chance? .safetensors files also tend to load faster and take up slightly less memory.
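A stdlib-only toy illustrating the point above: .ckpt checkpoints are pickle-based, and unpickling can execute arbitrary code. The "payload" here is a harmless list append, but in a tampered checkpoint it could be anything; a .safetensors file is a plain tensor container and cannot do this.

```python
import pickle

log = []

def payload_fn(msg):
    # Stands in for attacker-controlled code (could just as well
    # run a shell command or download malware).
    log.append(msg)

class Malicious:
    def __reduce__(self):
        # Tells pickle: "to rebuild this object, call payload_fn('pwned')"
        return (payload_fn, ("pwned",))

blob = pickle.dumps(Malicious())   # what a tampered .ckpt could contain
pickle.loads(blob)                 # merely *loading* runs the payload
print(log)                         # → ['pwned']
```

This is exactly why the Python docs warn against unpickling untrusted data, and why safetensors was designed as a non-executable format.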
Cool, thanks for that. What exactly does the img2img alternative test do? I can not find any particular information on this unfortunately...
I find it helps with consistency; tbh I haven't dived deeper into the technical functionality of it, I'll do some research
Good job brother
Thanks
Thanks for the detailed and explanatory video! Unfortunately, I am getting this error trying to launch webUI and the page doesn't open in Chrome eventually.
"RuntimeError: Cannot add middleware after an application has started"
Would you help me with this please?
hi, is this the same error you're getting ? www.reddit.com/r/StableDiffusion/comments/10yurxl/help_with_error_please/
Thanx bro now I can make a video like jalex Rosa easily
Great 👍 this can be employed in a similar workflow for sure
Thanks
tq :)
nice!!!
Thank you! Cheers!
how to fix this error? "NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check."
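For the NansException above, the error text already names the fix. Assuming a standard Windows A1111 install (the webui-user.bat mentioned elsewhere in this thread), add the flag to that launcher file:

```bat
rem webui-user.bat — launch settings for the A1111 WebUI (Windows)
rem --no-half runs the model in full precision: slower and more VRAM,
rem but avoids NaN outputs on GPUs without proper fp16 support.
rem --disable-nan-check only hides the check, so try --no-half first.
set COMMANDLINE_ARGS=--no-half
```

Alternatively, enable "Upcast cross attention layer to float32" under Settings > Stable Diffusion, as the message suggests.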
It's not easy but worth the process
Is it free?
very nice
Thanks
Davinci Resolve has a Deflicker plugin effect as well
That's pretty cool
Nick Video thx
Cheers
thanks! Just wondering, what's the difference with the A1111 webui launcher you linked and the "official" github for stable diffusion?
this one is easier to install
Nice video, will you add some grain or noise in the topaz video ai?
you can definitely add grain in topaz video ai if you think that serves the look you're after
I wish you did tthe Disney filter instead. ArcaneGAN + Ebysnth is cool. But I'd love to see other styles.
I wanted to keep this as general as possible, it makes it a great start for beginners, I might make other videos for specific styles
@@MDMZ please do
Very good video
I am getting this error
raise NansException(message)
modules.devices.NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
Can this work on a MacBook?
First! ❤
So fast!!! Haha
What's the recommended python version? Could you please share the download link
Hello first of all thank you for this beautiful tutorial. I have an error when I try to interogate clip or genarate, ''NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.'' can you please help me to solve it?
same here bro, couldn't figure it out
I noticed a few are getting this recently, might be a temporary thing, but are you sure your GPU is supported and has enough VRAM?
Amazing!! But I need to ask a question. If I make a summary of a match, then convert it to animation and add a new voice to the animation, is this eligible for YouTube monetization? Please answer me. Thanks
not sure
is the stable diffusion folder where I download rev.animated.safetensors, or is that the stable diffusion folder through Patreon?
why am I facing an error? I copied the exact same settings as you and it's showing:
NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type.
I cannot launch it, problem: C:\Users\AppData\Local\Programs\Python\Python310\python.exe: No module named pip
Up
Hey great vid! does this work on amd gpu?
check this out: www.reddit.com/r/StableDiffusion/comments/ww436j/howto_stable_diffusion_on_an_amd_gpu/
will importing the image sequence as EXR help or hurt stable diffusion's capabilities?
I suggest you try it out and see, I haven't tried that myself :)
Hi I was wondering what PC you use for topaz. Do you have any recommendations for a PC that can handle it?
Hi there, you can find my setup in the video description, checkout the FAQ on topaz's website for recommended specs :)
It is quite interesting whether it is possible to synthesize very high resolution images using only the stable diffusion model itself, without external upscalers?
That's a question no one dares to answer!
But I hope for a logical and correct answer.
Seems like it depends on your hardware, mine often crashes when pushing the resolution too high
Hey man, I have a problem. I followed all the steps for Mac and I managed to install Stable Diffusion, but when I get to "txt2img" and click "Generate" it keeps telling me "RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'" where the picture should be... Any solutions? Thank you very much (btw the img2img doesn't work either, it tells me )
hi, I think there are a few helpful comments here: www.reddit.com/r/StableDiffusion/comments/xt3s7p/runtimeerror_layernormkernelimpl_not_implemented/
Is there a workflow/way to separate an object from the rest of the picture? Like the hand in the Spider-Man animation from the background? So that only the hand is stylized? Maybe this is kind of impossible or very hard to achieve, but that would be really cool :)
I'm guessing you can rotoscope the hand and put the original in the back
@@TheRandomego Thank you. May ChatGPT help me.. rotoscope. 😅
@@InfectedChild u need adobe after effects to rotoscope
If you use masks the AI will only change the black parts of the mask, so if you could make the hand a deep black you could possibly do it. Try to find some videos about masking in stable diffusion, it could help you with what you need!
@@tracklimits2716 wow thanks for mentioning this, using black part of the image would allow all kinds of applications that are controlled and merged with the original footage. 🌊
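A toy sketch of the masking idea discussed above, not A1111's actual implementation: keep the original footage everywhere except the masked region, where the stylized frame shows through. Real inpainting masks operate on full images; here each "image" is just a flat list of grayscale pixel values.

```python
# mask value 1.0 = fully stylized, 0.0 = untouched original footage
def composite(original, stylized, mask):
    return [o * (1 - m) + s * m for o, s, m in zip(original, stylized, mask)]

original = [10, 20, 30, 40]      # e.g. the live-action frame
stylized = [90, 80, 70, 60]      # e.g. the AI-stylized render
mask     = [0.0, 0.0, 1.0, 1.0]  # stylize only the last two pixels

print(composite(original, stylized, mask))  # → [10.0, 20.0, 70.0, 60.0]
```

This is the same blend a rotoscoped matte performs in After Effects, just written out per pixel.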
Thank you so much for this amazing way of animating! But can anyone help, i am trying to install the first app and it says "Waiting for file to be created". What can i do? Please help!
try again and say No when it asks if you want to download the base model
I was trying to combine this method with ControlNet using batch, but it isn't working; that's the next step I think. Great channel btw.
thanks for sharing, I will be testing it in the coming weeks
@@MDMZ great
Hi, I am not able to move past the administrator window. In the video you said that after the administrator window the Stable Diffusion interface opens automatically, but in my case the administrator window says "press any key to continue" and when I do so nothing happens. I also tried to open it manually by clicking on the WEBUI batch file, but couldn't find any URL in the command window, which again shows "press any key to continue" PLEASE HELP :-)
Same thing with me
Same with me
Same thing with me
the administrator A1111 webui has been showing "waiting for file to be created" for 1 hour, what do I do?
I suggest you restart the process
Hey man, I really appreciate what you do, your detailed instructions! However, I am going through the process step by step and when I get to "Do you want to download the stable diffusion model" I get a repeated text that keeps saying "Waiting for file to be created..." It just keeps going and going, it's been a looong while now.
Hey, try again and don't download that model, you don't need it for this video anyway
@@MDMZ same here it says files are adding
@@MDMZ clutch reply. Thanks bro
Note - this is for GPU only, it won't work using CPU.
Hi thanks for sharing I select the batch files and click generate, but it does not give me the output files. Is there any other parameter I should set?
check what error you're getting
Your speaking tone sounds like you are about to end the tutorial but as they say....NOT YET🤣🤣🤣🤣🤣 I love your videos
Oh no! sorry for the confusion
@@MDMZ naah don't worry...just do your thing I'm watching
every time I try to do a batch run I get this error: TypeError: can only concatenate list (not "int") to list. It only outputs 1 frame to the output folder, but 2 copies of that 1 frame. Any way to figure out how to fix it? Thanks
Hi! I followed the installation tutorial but get stuck on this error "ModuleNotFoundError: No module named 'jsonmerge'". Pressing any key closed the window and nothing happened. Is there any solution to this?
same issue did u figure it out?
is it only for nvidia gpus as it says for me that there is no compatible GPU found
hey, while installing i am getting error " safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer
Stable diffusion model failed to load, exiting " Could you please help to resolve this
great tutorial! I have an issue: how should I write the path on a MacBook for img2img batches (I'm using Google Colab)? It gives this error: FileNotFoundError: [Errno 2] No such file or directory:
I'm not so familiar with running Stable Diffusion on Google Colab, but I believe you need to upload the files first and use the path from there
@@MDMZ thank you!
@@zonnen Did you figure this out? I have the same issue
Unfortunately, I followed all the steps, and when I insert the image the AI describes it but won't generate it, and I get this note below: AttributeError: 'NoneType' object has no attribute 'cond_stage_key'. I tried with many images but I'm getting the same result, any help?
I have installed Python version 3.10 but the program says "that is the wrong version", can you help me?
for Mac (intel) is possible?
Thanks man
No problem
can you get this on mac i7
Will it work without a graphics card?
on the user interface, how come i can not add anything. It will not allow me to add the ,inpainting_mask_weight
how do you launch it for the second time?
I have already installed, do I have to go through the same process each time I want to edit something?
no, just go to your stable diffusion folder, and launch webui-user.bat
Anybody else getting a: raise RuntimeError(
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check ? when launching webui (windows Batch File)
how do I add a CIVIT model to stable diffusion while working on Google Colab? I don't have a GPU in my PC, that's why I use stable diffusion on Google Colab. So please make a video on it as well.
you do have a GPU, every PC has one, perhaps not a dedicated one? I suggest you try running stable diffusion and see if it works