Will AnimateDiff v3 Give Stable Video Diffusion A Run For Its Money?

  • Published 25. 05. 2024
  • AnimateDiff v3 gives us four new models, including sparse ControlNets that allow animations to be generated from a static image, just like Stable Video Diffusion. The motion module currently works in both Automatic1111 and ComfyUI, but what sort of animations does it generate? (A rough code sketch for trying the motion module follows below.)
    Update! As predicted, 10 minutes after the video's release, the sparse ControlNet is now supported 🎉
    github.com/Lightricks/LongAni...
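    For anyone wanting to try the v3 motion module outside of a UI, here is a minimal diffusers-style sketch. The Hugging Face repo id for the v3 motion adapter and the exact arguments are assumptions and may differ between diffusers versions; check the AnimateDiff model card.
    import torch
    from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
    from diffusers.utils import export_to_gif

    # AnimateDiff v3 motion module (assumed repo id)
    adapter = MotionAdapter.from_pretrained(
        "guoyww/animatediff-motion-adapter-v1-5-3", torch_dtype=torch.float16
    )

    # Any Stable Diffusion 1.5 checkpoint can act as the base model
    pipe = AnimateDiffPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", motion_adapter=adapter, torch_dtype=torch.float16
    )
    pipe.scheduler = DDIMScheduler.from_config(
        pipe.scheduler.config, beta_schedule="linear", clip_sample=False, timestep_spacing="linspace"
    )
    pipe.enable_vae_slicing()
    pipe.enable_model_cpu_offload()

    # Text-to-animation: 16 frames, exported as a GIF
    result = pipe(
        prompt="a nerdy rodent wearing glasses, cinematic lighting, highly detailed",
        negative_prompt="low quality, worst quality",
        num_frames=16,
        num_inference_steps=25,
        guidance_scale=7.5,
    )
    export_to_gif(result.frames[0], "animatediff_v3.gif")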
    == More Stable Diffusion Stuff! ==
    * Video-to-Video AI using AnimateDiff - • How To Use AnimateDiff...
    * Installing ComfyUI - • How to Install ComfyUI...
    * ComfyUI Workflow Essentials - • ComfyUI Workflow Creat...
    * Faster Stable Diffusions with the LCM LoRA - • LCM LoRA = Speedy Stab...
    * How do I create an animated SD avatar? - • Create your own animat...
    * Installing Anaconda for MS Windows Beginners - • Anaconda - Python Inst...
    * Add anything to your AI art in seconds - • 3 Amazing and Fun Upda...
    * One image Gets You a Consistent Character in ANY pose - • Reposer = Consistent S...
    Want to support the channel?
    / nerdyrodent
    Thanks for watching! :)
  • Howto & Style

Comments • 68

  • @Sp00kyBedHair  5 months ago  +14

    Happy Approximate Birthday Dearest Nerdy! 👋🎂🎉🎈

  • @jameshughes3014  5 months ago  +9

    I don't get how Stability can claim ownership of the data their model generates. How can they expect you to pay them to use images/video you make with it if they don't own the copyright to the images it creates? It would be like Adobe trying to say they own whatever you make with Photoshop. I'm sure they think of it differently, but it just seems like... the wrong way to monetize, to me.
    Either way I'm really glad to see things like this

  • @ChameleonAI  5 months ago  +7

    It's good to see that AnimateDiff is still improving. SVD produces neat stuff but screw that license.

  • @aa-xn5hc  5 months ago

    Thank you! 🙏🏻😊🎉
    And merry Christmas!

  • @ronnykhalil  5 months ago  +4

    yes yes yes! also, thanks for the safetensors tip

  • @AIwaysUploading  5 months ago  +4

    Around the same time this was uploaded, SparseCtrl support was added to ComfyUI-Advanced-ControlNet

    • @NerdyRodent  5 months ago  +6

      My prediction was correct! 😆

    • @Ethan_Fel  5 months ago

      "Soft weights to replicate "My prompt is more important" feature from sd-webui ControlNet extension, and also change the scaling." that's a great update there.

  • @ZeroIQ2  5 months ago  +4

    Very cool! Merry Christmas Nerdy Rodent!

  • @MrSporf  5 months ago  +2

    Brilliant! I've been waiting for this one. Thank you

  • @landmonitor-lsd5634  5 months ago

    Awesome as always - happy holidays to you as well!

  • @tstone9151  5 months ago  +7

    This is what I’ve been looking for, something to guide my animation using multiple still images. I’m a 3D Technical Artist so this is a BIG deal

  • @autonomousreviews2521  5 months ago

    Always enjoy your shares! Happy Holidays :)

  • @djzigoh  5 months ago  +2

    Nerdy, you are such a great content creator!! I love your videos and of course your ComfyUI workflows!!! Also, your accent is quite easy to understand for us... non-English native speakers... UK accent?

  • @Art0691p  5 months ago

    Great video. Nice to see a bit of A1111 love too :)

  • @LIMBICNATIONARTIST  5 months ago  +2

    Incredible!

  • @bentp4891  5 months ago

    Merry Christmas Nerdy

  • @ExplicityDesigns  5 months ago  +5

    Please could you link Long AnimateDiff? I can't find it...

  • @kariannecrysler640  5 months ago  +2

    Happy shortest day of the year! According to Sp00ky, it was recently your birthday too. So happy birthday 💋… (us December babes have to stick together) 😉💗

  • @vchewbah  4 months ago  +3

    1:14 I hope the CEO of StabilityAI comes across your video and takes notes. Their current license doesn't offer even half of the benefits that other closed, paid, non-open-source models provide for the price they ask for.

  • @JanKowalski-ie6nw  5 months ago  +3

    Hello, what do you think about making a video about DreamCraft3D (an image-to-3D model/method), as nobody has yet tested it? Merry Christmas!

  • @Rulemer  4 months ago

    Thanks for the vid! How long did those take to render? I'm struggling with render times: with AnimateDiff, a couple of ControlNets, and IPAdapter, 80 frames of vid2vid takes over 20 mins on a powerful RunPod machine. Is that the kind of ballpark you're in?

    • @NerdyRodent  4 months ago  +1

      I’d say usually around 5 mins for 80 frames, but extras like IPAdapter will impact performance

    • @Rulemer  4 months ago

      @@NerdyRodent Thanks! 🙏

  • @christianblinde  5 months ago  +1

    Where did you download the long versions? I'm not able to find them online

  • @USBEN.  5 months ago

    Getting there slowly; with sparse control and input keyframes we will have something actually useful.

  • @Ethan_Fel  5 months ago  +4

    v3 is great (way better than SVD imo, and no license), but generations with it + the LoRA have a tendency to push skin to the "orange" side; it's also visible in your video. I'm looking for a way to reduce this while keeping the LoRA. The LoRA is very useful for reducing background flickering.

    • @half_real  5 months ago  +1

      Try putting "sepia" in the negatives? Is it just the skin or the whole image color tone?

    • @lpnp9477  5 months ago  +2

      Add "colorful" and "saturated" to the positive prompt. Or use a post process to cure the red and green gamma

  • @Syzygyyy  4 months ago

    So have you tried the sparse control models? They seem to have been released now

  • @amnzk08  3 months ago

    6:33 Why do you pop the LoRA loader into the model node? Shouldn't it be in a LoRA output / won't the result be altered if the original model is replaced?

  • @KriGeta  5 months ago

    To date I haven't seen an image-to-image conversion into a different style using a LoRA that loses only a minimum of the object's shape and basic lines, like the face structure. If there is anything I'm not aware of, please share

  • @tr1pod623  5 months ago

    Yo Nerdy Rodent, can you explain how to use the RGB sparse control, because I cannot get it to work

  • @electronicmusicartcollective

    10 min after your video ROFL... Merry Christmas

  • @kaziahmed  5 months ago  +1

    How do I do a simple single-image input to animated GIF using AnimateDiff v3?

    • @NerdyRodent  5 months ago

      You can use the recently released sparse control net, which as predicted did indeed come out 10 minutes after the video 😆
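
      For reference, a minimal sketch of that image-to-animation idea using the SparseCtrl integration in diffusers might look like the following. The repo ids and argument names here are assumptions and may differ between diffusers versions.
      import torch
      from diffusers import AnimateDiffSparseControlNetPipeline
      from diffusers.models import MotionAdapter, SparseControlNetModel
      from diffusers.utils import export_to_gif, load_image

      # Assumed repo ids for the v3 motion module and the RGB SparseCtrl model
      adapter = MotionAdapter.from_pretrained(
          "guoyww/animatediff-motion-adapter-v1-5-3", torch_dtype=torch.float16
      )
      controlnet = SparseControlNetModel.from_pretrained(
          "guoyww/animatediff-sparsectrl-rgb", torch_dtype=torch.float16
      )

      pipe = AnimateDiffSparseControlNetPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5",
          motion_adapter=adapter,
          controlnet=controlnet,
          torch_dtype=torch.float16,
      ).to("cuda")

      # Condition only frame 0 of the clip (16 frames by default) on the input image;
      # the motion module animates the rest
      image = load_image("input.png")
      result = pipe(
          prompt="a nerdy rodent wearing glasses, highly detailed",
          negative_prompt="low quality, worst quality",
          num_inference_steps=25,
          conditioning_frames=[image],
          controlnet_frame_indices=[0],
          controlnet_conditioning_scale=1.0,
      )
      export_to_gif(result.frames[0], "image_to_animation.gif")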

  • @keepitshort4208  4 months ago

    Great work, I wish I was able to do what you do with coding and everything.
    Is there a guide to learn coding, as in a proper roadmap, that you could recommend? Would really appreciate it

    • @NerdyRodent  4 months ago  +1

      As an autodidact, I would suggest just getting in there and doing it! While I started out on things like Pascal and BASIC, nowadays I’d suggest Python as a starting point. For a more structured method, look for classes in your area.

    • @keepitshort4208  4 months ago

      @@NerdyRodent thank you for replying, keep it up and I hope you achieve much success along the way 👍🏼

  • @BryanHoward  5 months ago

    We need some SDXL models

  • @the_one_and_carpool  5 months ago  +1

    I like that you're not here to line your pockets... I made that comment on this one guy who showed the best AI, but it was all paid sites

  • @PumpiPie  5 months ago  +2

    Is it possible to rotate an object 360 degrees?? 🤔

    • @NerdyRodent  5 months ago

      Yes! Things like this will do full 360 - czcams.com/video/j9-W1F7Dcdo/video.html

  • @user-nq3tx2iz2z  5 months ago  +1

    Excuse me, big guy, where is the workflow for this video?

    • @NerdyRodent  5 months ago  +1

      If you like, I can pop the comparison flow up on www.patreon.com/NerdyRodent

  • @sadshed4585  5 months ago

    Anywhere I could just download the workflow?

    • @NerdyRodent  5 months ago  +1

      I can pop the comparison workflow up on www.patreon.com/NerdyRodent if you like?

  • @JavierGarcia-td8ut  5 months ago  +2

    Why still SD1.5? Why not use SDXL Turbo?

    • @MicahYaple  5 months ago  +9

      Because SD1.5 has a free license

    • @Sergatx  5 months ago

      @@MicahYaple I think regular SDXL has a free license too

    • @JavierGarcia-td8ut  5 months ago  +1

      @@MicahYaple Can you expand on that answer a bit?

    • @DavidSeguraIA  5 months ago  +1

      @@JavierGarcia-td8ut Hello, SDXL Turbo also doesn't allow commercial use; in other words, the license doesn't allow monetization by any means, and that includes YouTube video monetization

    • @zacharyshort384  2 months ago

      @@DavidSeguraIA So images/animations made with an SDXL model just can't be used if the YouTube channel is monetized? I mean, not selling the generative art itself... just using it in a video?

  • @pragmaticcrystal  5 months ago  +2

    Much love to you my favorite rodent 🫶