Animatediff LCM Lora in ComfyUI for Faster Render Times and Superior Results
- Date added: 25 Jun 2024
- Learn how to structure nodes with the right settings to unlock their full potential and discover ways to achieve great video quality with AnimateLCM Lora in ComfyUI. Take your animation abilities to new heights.
Complete Workflow Breakdown (Animatediff video to video): • Complete Workflow Brea...
(How to Use Detailer) for Better Animation: • Animatediff Comfyui Tu...
Mastering Video to Video in ComfyUI (Without Node Skills): • Mastering Video to Vid...
-----------------------------------------------------------------------------------------------------------------------------
Workflow Download: goshnii.gumroad.com/
Animate LCM Page: github.com/dezi-ai/ComfyUI-An...
Animate LCM Models: huggingface.co/wangfuyun/Anim...
AnimateDiff Evolved: github.com/Kosinkadink/ComfyU...
Prompt animation credit: civitai.com/images/3044633
Disney Pixar Checkpoint Model: civitai.com/models/65203/disn...
HelloYoung25d Checkpoint Model: civitai.com/models/134442?mod...
-------------------------------------------------------------------------------------------------------------------------------
ComfyUI + Animatediff Tutorials: • ComfyUI Tutorials / An...
#stablediffusion #comfyui #animatediff #lcm - How-to & Style
Great stuff mate
I appreciate your support.
@@goshniiAI Just bought your workflow off Gumroad, and I'll wait for your vid2vid; I know you're going to nail it. I can get very good consistency with Inner Reflections' workflow by feeding a Marigold depth map (with the terrain setting) into a Zoe Depth SDXL controller, but I'm excited to see what you come up with.
@@sudabadri7051 Thank you very much for the endorsement. I will prioritize producing a vid2vid tutorial for you and others interested in exploring the potential. Stay tuned for the videos to come.
Really Great and really fast, great work!
thank you for the compliments
Great tutorial, gonna try this out!
Thank you lots, and happy creating.
You’re truly THE BEST in this game that I know! Your knowledge and teaching are on point, and the community is really grateful for your contribution. Thanks a lot!! 🙏🏽 I was wondering if making a tutorial about creating a LoRA (either under Comfy or SD) is something you might consider? I know there are a lot of resources available on the topic, but I haven't found anything that works for me so far! Cheers
Thank you so much for your very sweet opinions and encouragement! I am pleased to hear that you find the materials beneficial. A video on creating a LORA using Comfy or SD sounds brilliant! I'll check into it and see if I can come up with anything useful.
wow man, ufffff outstanding
thank you for the high praise
you are awesome
Thank you! I appreciate you.
Great! still the best I managed to find. Thank you so much.
Quick question please: any reason you didn't use the kl-f8-anime2 VAE like you suggested in your other video?
Hello there! Honestly, it slipped my mind; I went with the general MSE-840000 VAE, but applying the kl-f8-anime2 VAE could still have worked better.
Great video! To really enhance it, I would suggest running the audio through an AI audio enhancer for better quality.
Thank you for sharing your concern and the awesome suggestion! I will look into that.
Thank you for sharing! I was just wondering, is it possible to use any extra LoRA models?
Hello! Absolutely, it is possible to add any number of LoRA models.
Great stuff! Was it 808 seconds?
Thank you for tuning in! Yes, it was about 808 seconds, however, your outcome may vary depending on your setup.
Hey there, I love your videos, thank you very much! For audio/voice recording I would recommend you Adobe Enhance Speech. You just need an Adobe Account for it, no subscription. Makes your voice more clear :)
You are welcome and I sincerely appreciate your support and suggestions. I will absolutely look into that.
Do you know how to add a motion LoRA to this workflow? Or any other img2vid workflow with a motion LoRA?
To use any motion LoRA, search for the Load AnimateDiff LoRA node and choose the motion LoRA model from the list. The node is then connected to the motion_lora input of the Apply AnimateDiff node.
The screenshot here may help for a better understanding: tinyurl.com/44udfvsk
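To make the wiring above concrete, here is a minimal sketch of the relevant slice of a ComfyUI API-format workflow as a Python dict. The node class names (`ADE_AnimateDiffLoRALoader`, `ADE_ApplyAnimateDiffModel`), the input names, and the LoRA filename are assumptions based on typical AnimateDiff-Evolved naming; check the actual node titles in your own ComfyUI install before relying on them.

```python
# Hypothetical API-format workflow fragment. Node ids, class names, and the
# LoRA filename below are illustrative assumptions, not verified names.
workflow = {
    "10": {
        "class_type": "ADE_AnimateDiffLoRALoader",  # "Load AnimateDiff LoRA" (assumed)
        "inputs": {
            "lora_name": "v2_lora_PanLeft.ckpt",    # hypothetical motion LoRA file
            "strength": 1.0,
        },
    },
    "11": {
        "class_type": "ADE_ApplyAnimateDiffModel",  # "Apply AnimateDiff" (assumed)
        "inputs": {
            # Link format in API workflows: [source_node_id, output_index]
            "motion_lora": ["10", 0],
        },
    },
}

# The key point from the comment above: the LoRA loader's output feeds the
# motion_lora input of the Apply AnimateDiff node.
assert workflow["11"]["inputs"]["motion_lora"][0] == "10"
```

The connection drawn in the UI is exactly this `["10", 0]` link: node 11's `motion_lora` socket reads output 0 of node 10.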
It took longer for the custom sampler one, right? I'm confused why you say it's faster when it really isn't. On my machine it's a lot slower compared to the other workflows.
Hello there, I'm sorry to hear that. However, I suppose it could be a difference in setups.
Hello, I was wondering, can we use batch prompts + ControlNet?
Hello there, it sounds like an excellent idea. I have yet to try it, so I can't be certain of the outcomes, but I appreciate you sharing the thought and am curious to try it out.
Bro, thank you, these are crazy tips! Can you give us a tutorial on img2vid?
You are welcome, and I'm glad you found the tips useful.
Regarding the img2vid tutorial, I will definitely consider it for future content.
With your same node setup I get the error:
"Error occurred when executing SamplerCustom:
'NoneType' object has no attribute 'size'"
Hello there, it's possible that one of the nodes is not receiving the expected input. Kindly double-check your node connections to confirm that everything is properly linked. Also, choosing a lower frame size can help if you run out of VRAM.
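As a quick illustration of why a missing connection produces that exact message: when a node input is left unconnected, the value that arrives is often `None`, and calling `.size` on it raises the `AttributeError` quoted above. This is a minimal stand-alone sketch, not ComfyUI code; `FakeLatent` and `sample` are hypothetical stand-ins.

```python
# Minimal reproduction of the error pattern, outside ComfyUI.
class FakeLatent:
    """Hypothetical stand-in for a latent tensor with a size() method."""
    def size(self):
        return (1, 4, 64, 64)

def sample(latent):
    # A sampler typically inspects the latent's dimensions first.
    return latent.size()

try:
    sample(None)  # an unconnected input arrives as None
except AttributeError as e:
    message = str(e)

assert "'NoneType' object has no attribute 'size'" in message
```

So the error points at the data flow, not the sampler itself: find the socket that is feeding `None` and reconnect it.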
Does it work on a Mac M2?
I am not too sure, since I don't have any specific information about compatibility with the Mac M2 chipset. I believe Stable Diffusion and other AI workflows typically rely on GPU acceleration for efficient processing. However, if your Mac M2 meets the other system requirements for running Stable Diffusion, you should be able to use it.
What about background consistency? It feels like we take a step forward in one aspect but a step back in some other aspects...
Background consistency is attractive for creating a cohesive animation, and I am hoping to dedicate some time to that aspect.
Use an ai video background remover then composite in video editor.
@@PrincessSleepyTV Thank you for the direction and valued information. cc @kleber1983
Please tell me why I only have "undefined" in the LoRA name field; I can't change it. This is the answer I got after clicking Queue Prompt:
Prompt outputs failed validation
LoraLoader:
- Required input is missing: lora_name
LoraLoaderModelOnly:
You may be missing the LoRA that was used in the workflow, which means you may need to download it. Please ensure that you have downloaded and re-selected the LCM LoRA from your directory; I have also provided links in the description to assist you.
I hope this helps.
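For anyone curious what "Required input is missing: lora_name" means mechanically: before queueing, ComfyUI checks that every required input of every node has a value, and a widget showing "undefined" serializes as a missing value. The sketch below is a simplified illustration of that kind of pre-queue check, not ComfyUI's actual validation code; the required-input list is an assumption.

```python
# Simplified sketch of required-input validation (illustrative, not ComfyUI's
# real implementation). REQUIRED is an assumed per-node requirement list.
REQUIRED = {"LoraLoader": ["lora_name", "strength_model", "strength_clip"]}

def validate(node):
    missing = [key for key in REQUIRED.get(node["class_type"], [])
               if node["inputs"].get(key) is None]
    return ["Required input is missing: " + key for key in missing]

node = {
    "class_type": "LoraLoader",
    "inputs": {
        "lora_name": None,        # what "undefined" in the UI serializes to
        "strength_model": 1.0,
        "strength_clip": 1.0,
    },
}

errors = validate(node)  # ["Required input is missing: lora_name"]
```

Re-selecting a LoRA file that actually exists on disk fills in `lora_name`, and the validation error disappears.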
Does it require more than 6GB of VRAM? Because I'm always getting an OOM (out of memory) error.
You can still take advantage of LCM with 6GB of VRAM. I frequently encounter out-of-memory errors as well. Try optimising your workflow by changing settings, using lower-resolution inputs, or closing other apps that may be running heavily on your graphics card at the same time.
@@goshniiAI ohh.. i see
I'll try
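A rough back-of-envelope estimate (my own heuristic, not from the video) shows why lower-resolution inputs help so much with OOM errors: latent memory grows linearly with frame count and with pixel area, so shrinking each side shrinks memory quadratically. The 1/8 latent scale and fp16 byte size are standard SD-family assumptions.

```python
# Rough estimate of latent memory for an animation batch. Assumes SD-family
# latents at 1/8 resolution per side, 4 channels, fp16 (2 bytes per value).
def latent_bytes(width, height, frames, channels=4, bytes_per_value=2):
    return (width // 8) * (height // 8) * channels * bytes_per_value * frames

full = latent_bytes(768, 768, 16)   # 16 frames at 768x768
small = latent_bytes(512, 512, 16)  # same clip at 512x512

# Dropping 768x768 -> 512x512 cuts latent memory by (768/512)^2 = 2.25x,
# and the savings in the UNet's intermediate activations are similar.
assert full / small == 2.25
```

This ignores the model weights and activations (which dominate on a 6GB card), but the scaling argument is the same: resolution is the cheapest knob to turn when you hit OOM.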
Is that with a 4GB graphics card?
I have a 32GB graphics card, but 4GB should be sufficient for lighter tasks.
Getting this error: module 'comfy.samplers' has no attribute 'calculate_sigmas'. I updated all nodes, especially ComfyUI-sampler-lcm-alternative, but am still getting this error. What could it be? Thank you!!!!
Hello there, since you took the correct step to update the nodes, I'm not sure why you're getting an error module. As a suggestion, you could also download the free workflow to compare or install any missing nodes that may appear.
@@goshniiAI I'm getting that same error. I used your workflow and updated Comfy and all nodes.
@@deastman2 Hello there, if you're still getting the same error after updating Comfy and all nodes, it could be due to differences in setups or Python versions. I hope this helps, but it might be worth going over the exact steps used to create the workflow.
@@goshniiAI maybe it does come down to python versions. Which version are you successfully using?
@@deastman2 The version I have installed is Python 3.11.6.
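An "AttributeError: module ... has no attribute ..." like the one in this thread usually means a custom node pack was built against a different ComfyUI version, where the function it calls had a different name or location. The sketch below only illustrates that pattern with a stand-in module; it is an assumption about the cause, and the real remedy is updating ComfyUI and the node pack together so their versions match.

```python
# Illustration of a version-mismatch lookup failure (stand-in module, not
# actual comfy code). getattr with a default lets us detect the missing
# function and report something actionable instead of crashing mid-sample.
import types

# Stand-in for an older module that lacks the expected attribute.
old_samplers = types.SimpleNamespace()

def get_sigma_fn(samplers_module):
    fn = getattr(samplers_module, "calculate_sigmas", None)
    if fn is None:
        raise RuntimeError(
            "module 'comfy.samplers' has no attribute 'calculate_sigmas' - "
            "update ComfyUI and the sampler node pack to matching versions")
    return fn

try:
    get_sigma_fn(old_samplers)
except RuntimeError as e:
    msg = str(e)
```

If updating both sides doesn't clear it, a second install (or a leftover copy of the node pack in `custom_nodes`) shadowing the updated one is another common culprit worth checking.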
can you modify it so we can do vid2vid? THANKS!
Thank you for your suggestion.
@@goshniiAI Thank you! I tried your workflow and it's really good! Looking forward to the vid2vid ❤️
@@AgustinCaniglia1992 You are welcome, and thank you so much. I am glad to hear that!
Yes please! Would love to see this with vid2vid as you did in your other video! ❤
@@NERDDISCO Absolutely! Thank you for sharing your interest.
Awesome! Except if I try to do anything other than anime girls, the output is very flickery. For the love of god, enough anime girls. What a colossal waste of technology.
I appreciate your feedback! However, you may always experiment with other themes to achieve unique results.
@@goshniiAI It's the experimenting with other styles that has brought me to the realisation that all these models have way too much Anime in their training data sets. Every single example on CZcams is a dancing Asian woman or Anime girl. Why? Because if you try it with other subject matter, everything breaks. Appreciate you sharing this nonetheless.
@@TijuanaKez You are right, it is quite a challenge for now; however, continue experimenting with various styles and concepts. Good prompt inspiration can also be found on Civitai, Midjourney, and more.