Animatediff LCM Lora in ComfyUI for Faster Render Times and Superior Results

  • Published 25. 06. 2024
  • Learn how to structure nodes with the right settings to unlock their full potential and discover ways to achieve great video quality with AnimateLCM Lora in ComfyUI. Take your animation abilities to new heights.
    Complete Workflow Breakdown (Animatediff video to video): • Complete Workflow Brea...
    (How to Use Detailer) for Better Animation: • Animatediff Comfyui Tu...
    Mastering Video to Video in ComfyUI (Without Node Skills): • Mastering Video to Vid...
    -----------------------------------------------------------------------------------------------------------------------------
    Workflow Download: goshnii.gumroad.com/
    Animate LCM Page: github.com/dezi-ai/ComfyUI-An...
    Animate LCM Models: huggingface.co/wangfuyun/Anim...
    AnimateDiff Evolved: github.com/Kosinkadink/ComfyU...
    Prompt animation credit: civitai.com/images/3044633
    Disney Pixar Checkpoint Model: civitai.com/models/65203/disn...
    HelloYoung25d Checkpoint Model: civitai.com/models/134442?mod...
    -------------------------------------------------------------------------------------------------------------------------------
    ComfyUI + Animatediff Tutorials: • ComfyUI Tutorials / An...
    #stablediffusion #comfyui #animatediff #lcm
  • Howto & Style

Comments • 65

  • @sudabadri7051
    @sudabadri7051 3 months ago

    Great stuff mate

    • @goshniiAI
      @goshniiAI 3 months ago +1

      I appreciate your support.

    • @sudabadri7051
      @sudabadri7051 3 months ago

      @@goshniiAI Just bought your workflow off Gumroad, and I'll wait for your vid2vid; I know you're going to nail it. I can get very good consistency with Inner Reflections' workflow using a Marigold depth map with the terrain setting fed into a ZoeDepth SDXL ControlNet, but I'm excited to see what you come up with.

    • @goshniiAI
      @goshniiAI 3 months ago +1

      @@sudabadri7051 Thank you very much for the endorsement. I will prioritize producing a vid2vid tutorial for you and others interested in exploring its potential. Stay tuned for the next few videos.

  • @yuSun333
    @yuSun333 22 days ago

    Really great and really fast, great work!

    • @goshniiAI
      @goshniiAI 22 days ago

      Thank you for the compliments.

  • @nitinburli7814
    @nitinburli7814 3 months ago

    Great tutorial, gonna try this out!

    • @goshniiAI
      @goshniiAI 3 months ago

      Thank you lots, and happy creating.

  • @KDashHoward
    @KDashHoward 3 months ago +1

    You're truly THE BEST in this game that I know! Your knowledge and teaching are on point, and the community is really grateful for your contribution. Thanks a lot!! 🙏🏽 I was wondering if making a tutorial about creating a LoRA (either in Comfy or SD) is something you might consider? I know there are a lot of resources available on the topic, but I haven't found anything that works for me so far! Cheers

    • @goshniiAI
      @goshniiAI 3 months ago

      Thank you so much for your kind words and encouragement! I am pleased to hear that you find the materials beneficial. A video on creating a LoRA using Comfy or SD sounds brilliant! I'll look into it and see if I can come up with something useful.

  • @Xavi-Tenis
    @Xavi-Tenis a month ago

    wow man, ufffff outstanding

    • @goshniiAI
      @goshniiAI a month ago

      Thank you for the high praise.

  • @AkshayTravelFilms2
    @AkshayTravelFilms2 3 months ago

    You are awesome

    • @goshniiAI
      @goshniiAI 3 months ago

      Gracias! I appreciate you.

  • @KirillD-fk6ml
    @KirillD-fk6ml a month ago

    Great! Still the best I managed to find. Thank you so much.
    Quick question please: any reason you didn't use the kl-f8-anime2 VAE like you suggested in your other video?

    • @goshniiAI
      @goshniiAI a month ago +1

      Hello there, it honestly slipped my mind and I used the general mse-840000 VAE instead; however, it could still have worked better with the kl-f8-anime2 VAE.

  • @Financein10
    @Financein10 2 months ago +1

    Great video. To really enhance it, I would suggest running the audio you record through an AI audio enhancer for better quality.

    • @goshniiAI
      @goshniiAI 2 months ago

      Thank you for sharing your concern and the awesome suggestion! I will look into that.

  • @elifmiami
    @elifmiami 2 months ago

    Thank you for sharing! I was just thinking, is it possible to use any extra LoRA models?

    • @goshniiAI
      @goshniiAI 2 months ago +1

      Hello... Absolutely! It is possible to add an unlimited number of LoRA models.

  • @MrPer4illo
    @MrPer4illo 3 months ago

    Great stuff! Was it 808 seconds?

    • @goshniiAI
      @goshniiAI 3 months ago +1

      Thank you for tuning in! Yes, it was about 808 seconds; however, your outcome may vary depending on your setup.

  • @ValleStutz
    @ValleStutz 3 months ago

    Hey there, I love your videos, thank you very much! For audio/voice recording I would recommend Adobe Enhance Speech. You just need an Adobe account for it, no subscription. It makes your voice clearer :)

    • @goshniiAI
      @goshniiAI 3 months ago

      You are welcome and I sincerely appreciate your support and suggestions. I will absolutely look into that.

  • @MisterCozyMelodies
    @MisterCozyMelodies a month ago

    Do you know how to add a motion LoRA to this workflow? Or any other img2vid workflow with a motion LoRA?

    • @goshniiAI
      @goshniiAI 28 days ago

      To use any motion LoRA, search for the (Load AnimateDiff LoRA) node and choose the motion LoRA model from the list. That node is then connected to the (motion_lora) input of the (Apply AnimateDiff) node, as sketched below.
      The screenshot here may also help: tinyurl.com/44udfvsk
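      A minimal wiring sketch, assuming the AnimateDiff-Evolved node names (the labels and the example file names may differ slightly between versions):

          Load AnimateDiff LoRA   (lora_name: e.g. v2_lora_PanLeft.ckpt, strength ~0.6-1.0)
              MOTION_LORA  ->  motion_lora input of the Apply AnimateDiff Model node
          Apply AnimateDiff Model (motion model, e.g. AnimateLCM_sd15_t2v.ckpt)
              M_MODELS  ->  Use Evolved Sampling  ->  MODEL  ->  sampler

      Several Load AnimateDiff LoRA nodes can also be chained through their prev_motion_lora input to stack motion LoRAs.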

  • @TyHoudinifx
    @TyHoudinifx a month ago

    It took longer for the custom sampler one, right? I'm confused why you are saying it's faster when it's really not. On my machine it's a lot slower compared to the other workflows.

    • @goshniiAI
      @goshniiAI a month ago

      Hello there, I'm sorry to hear that. However, I suppose it could be a difference in setups.

  • @elifmiami
    @elifmiami 3 days ago

    Hello, I was wondering, can we use batch prompts + ControlNet?

    • @goshniiAI
      @goshniiAI 2 days ago +1

      Hello there, it sounds like an excellent idea. I have yet to try it, so I can't be certain of the outcomes, but I appreciate you sharing the thought and am curious to try it out.

  • @rezahasny9036
    @rezahasny9036 3 months ago

    Bro, thank you, these are crazy tips! Can you give us a tutorial on img2vid?

    • @goshniiAI
      @goshniiAI 3 months ago +1

      You are welcome, and I'm glad you found the tips useful.
      Regarding the img2vid tutorial, I will definitely consider it for future content.

  • @pabloapiolazza4353
    @pabloapiolazza4353 a month ago

    With your same node setup I get the error:
    "Error occurred when executing SamplerCustom:
    'NoneType' object has no attribute 'size'"

    • @goshniiAI
      @goshniiAI a month ago

      Hello there, it's possible that one of the nodes is not receiving the expected input. Kindly double-check your node connections to confirm that everything is properly linked. Also, choosing a lower frame size can help if you run out of VRAM.

  • @fabiotgarcia2
    @fabiotgarcia2 2 months ago

    Does it work on a Mac M2?

    • @goshniiAI
      @goshniiAI 2 months ago +1

      I am not too sure, since I don't have any specific information about compatibility with the Mac M2 chipset. I believe Stable Diffusion and other AI workflows typically rely on GPU acceleration for efficient processing. However, if your Mac M2 meets the other system requirements for running Stable Diffusion, you should be able to use it.

  • @kleber1983
    @kleber1983 3 months ago

    What about background consistency? It feels like we take a step forward in one aspect but a step back in some other aspects...

    • @goshniiAI
      @goshniiAI 3 months ago +1

      Background consistency is attractive for creating a cohesive animation, and I am hoping to dedicate some time to that aspect.

    • @PrincessSleepyTV
      @PrincessSleepyTV 3 months ago +1

      Use an AI video background remover, then composite in a video editor.

    • @goshniiAI
      @goshniiAI 3 months ago

      @@PrincessSleepyTV Thank you for the direction and valuable information. cc @kleber1983

  • @tanyasubaBg
    @tanyasubaBg a month ago

    Please tell me why I only have "undefined" in the LoRA name field. I can't change it. This is the answer I got after clicking Queue Prompt:
    Prompt outputs failed validation
    LoraLoader:
    - Required input is missing: lora_name
    LoraLoaderModelOnly:

    • @goshniiAI
      @goshniiAI a month ago

      You may be missing the LoRA that was used in the workflow, which means you may need to download it. Please ensure that you have downloaded and re-selected the LCM LoRA from your directory; I have also provided the links in the description to assist you.
      I hope this helps.

  • @saymyname4325
    @saymyname4325 3 months ago

    Does it require more than 6 GB of VRAM? Because I'm always getting an OOM (out of memory) error.

    • @goshniiAI
      @goshniiAI 3 months ago

      You can still take advantage of the LCM with 6 GB of VRAM. I frequently encounter out-of-memory errors as well. Try optimising your workflow by changing settings, using lower-resolution inputs, or even closing other apps that may be operating heavily on your graphics card at the same time.
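      If you still run out of memory, ComfyUI's own launch flags can also help. As a rough example, assuming a standard local install (the exact flag behaviour may change between versions):

          python main.py --lowvram    # splits the model so less of it has to sit in VRAM at once
          python main.py --novram     # even more aggressive, for when --lowvram is not enough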

    • @saymyname4325
      @saymyname4325 3 months ago

      @@goshniiAI Ohh... I see.
      I'll try.

  • @Dwoz_Bgmi
    @Dwoz_Bgmi 3 months ago

    Is that with a 4 GB graphics card?

    • @goshniiAI
      @goshniiAI 3 months ago

      I have a 32 GB graphics card, but a 4 GB card should be sufficient for lighter tasks.

  • @olegmikheles2576
    @olegmikheles2576 2 months ago

    Getting this error: module 'comfy.samplers' has no attribute 'calculate_sigmas'. I updated all nodes, especially ComfyUI-sampler-lcm-alternative, but I'm still getting this error. What could it be? Thank you!!!!

    • @goshniiAI
      @goshniiAI 2 months ago

      Hello there, since you took the correct step of updating the nodes, I'm not sure why you're getting that error. As a suggestion, you could also download the free workflow to compare, or install any missing nodes that may appear.

    • @deastman2
      @deastman2 2 months ago

      @@goshniiAI I'm getting that same error. I used your workflow and updated ComfyUI and all nodes.

    • @goshniiAI
      @goshniiAI 2 months ago +1

      @@deastman2 Hello there, if you're still getting the same error after updating ComfyUI and all the nodes, it could be due to differences in setups or Python versions.
      I hope this helps, but it might be worth going over the exact steps to recreate the workflow.

    • @deastman2
      @deastman2 2 months ago

      @@goshniiAI Maybe it does come down to Python versions. Which version are you successfully using?

    • @goshniiAI
      @goshniiAI 2 months ago

      @@deastman2 My installed version is Python 3.11.6.

  • @AgustinCaniglia1992
    @AgustinCaniglia1992 3 months ago

    Can you modify it so we can do vid2vid? THANKS!

    • @goshniiAI
      @goshniiAI 3 months ago

      Thank you for your suggestion.

    • @AgustinCaniglia1992
      @AgustinCaniglia1992 3 months ago

      @@goshniiAI Thank you! I tried your workflow and it's really good! Looking forward to vid2vid ❤️

    • @goshniiAI
      @goshniiAI 3 months ago

      @@AgustinCaniglia1992 You are welcome, and thank you so much. I am glad to hear that.

    • @NERDDISCO
      @NERDDISCO 3 months ago +1

      Yes please! Would love to see this with vid2vid as you did in your other video! ❤

    • @goshniiAI
      @goshniiAI 3 months ago +1

      @@NERDDISCO Absolutely! Thank you for sharing your interest.

  • @TijuanaKez
    @TijuanaKez 2 months ago

    Awesome! Except if I try to do anything other than anime girls, the output is very flickery. For the love of god, enough anime girls. What a colossal waste of technology.

    • @goshniiAI
      @goshniiAI 2 months ago

      I appreciate your feedback! However, you may always experiment with other themes to achieve unique results.

    • @TijuanaKez
      @TijuanaKez 2 months ago

      @@goshniiAI It's the experimenting with other styles that has brought me to the realisation that all these models have way too much anime in their training data sets. Every single example on YouTube is a dancing Asian woman or anime girl. Why? Because if you try it with other subject matter, everything breaks. Appreciate you sharing this nonetheless.

    • @goshniiAI
      @goshniiAI 2 months ago

      @@TijuanaKez You are right, it is quite a challenge for now; however, keep experimenting with various styles and concepts. Good prompt inspiration can also come from Civitai, Midjourney, and more.