How To Use AnimateDiff for Video To Video in ComfyUI

  • Published 25 May 2024
  • Want to use AnimateDiff for changing a video? Video Restyler is a ComfyUI workflow for applying a new style to videos - or to just make them out of this world! Simply select an input video, pick a style or face image, and generate :) AnimateDiff Vid to Vid fun.
    Grab your AnimateDiff Video to Video workflow for FREE now!
    Workflows - github.com/nerdyrodent/AVeryC...
    Beginner? Start here! - • How to Install ComfyUI...
    ComfyUI Zero to Hero - • ComfyUI Tutorials and ...
    == More Stable Diffusion Stuff! ==
    * Installing Anaconda for MS Windows Beginners - • Anaconda - Python Inst...
    * How do I create an animated SD avatar? - • Create your own animat...
  • Howto & Style

Comments • 76

  • @NerdyRodent  7 months ago  +9

    How much fun is styling videos? 🎉😊

    • @tartwinkler1711  7 months ago  +5

      Styling videos is more fun than walking naked in a strange place, but not much.

    • @LouisGedo  7 months ago  +1

      👋

  • @kacperskyy5652  7 months ago  +17

    I just wanted to say that You are an absolute genius with these workflows, that AND the fact that You're sharing them for free is just amazing. YOU ARE A LEGEND!!!

  • @Andro-Meta  7 months ago  +4

    I've gone from barely understanding how to run ComfyUI to modifying and creating my own workflows, and creating my own custom nodes, and I am so grateful that you're so thorough with your guides and offer such great workflows! Thank you so much!

    • @NerdyRodent  7 months ago

      Great to hear! It's fun once you get used to it :)

  • @autonomousreviews2521  7 months ago  +1

    You get smoother and smoother - Great share :)

  • @deastman2  7 months ago  +1

    This is just the help I needed to get started processing my video. Thanks!

  • @banzai316  7 months ago  +1

    Thanks for the workflow! 👏

  • @aa-xn5hc  7 months ago  +1

    Thank you, brilliant!!

  • @ChameleonAI  7 months ago  +1

    Wow, I'm impressed with the temporal consistency displayed here. Thanks and well done.

  • @ShoreAllan  6 months ago  +1

    Hello Nerdy,
    many greetings from Berlin, Germany. Thank you very much for your great work, which helped me a lot with the realisation of my ideas. Do you see a possibility of creating two characters, for example in the "Reposer"? You would then have one pose, but with two people who are then replaced?

  • @michalgonda7301  6 months ago

    Thank you for what you are doing ;) ... it's great, keep it up :) ... I wonder, though, what is the name of the workflow that removes the background? I would love to try that but can't find it in the workflows :/

  • @Hooooodad  6 months ago  +1

    Amazing

  • @ImAlecPonce  7 months ago  +1

    I love ReActor… but it just doesn't work on my new 4060 computer… works great on my old 2060 though.
    Love your vids

  • @stan-zm3ep  6 months ago

    Dear Nerdy Rodent, are there any free tools similar to DeepMotion? Besides the face swap, it would be good to swap the entire 3D character... please advise if there are any.

  • @aivideos322  7 months ago  +5

    Upscaling with AnimateDiff uses much too much memory IMO. It's great for making an initial video, but upscaling with it... yeah, good luck. If you use tile/TemporalDiff/lineart control models, you can separate the frames and upscale each one individually with almost no change in consistency, and it allows unlimited upscale size and full 1.0 denoise, and it renders 3x faster because you are not doing the frames all together. I use the Impact Pack for the "Batch to List" node, which lets you separate batches for individual processing.

    • @NerdyRodent  7 months ago  +1

      I’ve not tried upscaling via AnimateDiff as yet, but just using a plain upscaling model would probably be fine on the base output too
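The per-frame approach described in the thread above can be sketched in plain Python. This is only an illustration of the memory argument, not the Impact Pack "Batch to List" node itself; upscale_frame is a stand-in nearest-neighbour upscaler where a real workflow would run an upscaling model:

```python
def upscale_frame(frame, factor=2):
    """Stand-in nearest-neighbour upscaler for one frame stored as a
    list of pixel rows (a real workflow would run an upscaling model
    here instead)."""
    out = []
    for row in frame:
        wide = [px for px in row for _ in range(factor)]    # widen the row
        out.extend([list(wide) for _ in range(factor)])     # repeat it vertically
    return out

def upscale_video(frames, factor=2):
    """Yield upscaled frames one at a time ("batch to list"): peak
    memory is a single upscaled frame, not the whole upscaled batch."""
    for frame in frames:
        yield upscale_frame(frame, factor)

frames = [[[1, 2], [3, 4]]]            # one 2x2 "frame"
up = list(upscale_video(frames))
print(up[0])  # [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

Because each frame is processed independently, the upscale factor is limited only by per-frame memory, which matches the commenter's point about unlimited upscale size.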

  • @user-ph1ir8mb7w  5 months ago

    Hey Nerdy, nice work... just a question: is it all limited to just 3 seconds?

    • @NerdyRodent  5 months ago

      Nope, you can do much longer videos!

  • @FortniteJams  6 months ago

    Here once again for the cutting edge.

  • @tron77777  7 months ago

    Is there a reason you're using random seeds and not fixed ones? In other AnimateDiff projects I see fixed seeds.

  • @T3MEDIACOM  7 months ago

    How can I just do images with this? I would like the faceswap only for SDXL... just curious.

  • @JustGimmesomeShelter  3 months ago

    Hey, great tutorial, one question: I'm missing the Load IPAdapter node, and it's not in the missing nodes list. I have IPAdapter Plus installed. Thanks!

    • @NerdyRodent  3 months ago

      You can drop me a dm on patreon for support!

  • @throttlekitty1  5 months ago

    What are you using to show the labels for the custom node origins?

    • @throttlekitty1  5 months ago

      Turns out it's a feature in the Manager, but I had to do a git reset --hard despite having pulled the latest commit.

  • @Smashachu  7 months ago  +1

    Any idea when/if there's going to be TensorRT for XL? I'm enjoying the doubled generation speed, but I feel like it would be most useful on longer-to-generate images, like XL 1024x1024 images that just pound my poor 3080 into a puddle of its own excrement and tears. The tears are mine.

    • @NerdyRodent  7 months ago

      Maybe a few months? *rubs crystal ball*

    • @Smashachu  6 months ago

      @@NerdyRodent We can't rub our balls in public like that. I learned that the hard way.

  • @tripitakai  2 months ago

    Hi, I'm getting an error: SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5). Could you tell me how to fix it, please?
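For what it's worth, that error is what a browser's JSON.parse raises when a valid JSON value is followed by stray text, which usually means the workflow .json file picked up extra characters (for example, it was saved from an HTML page instead of the raw file). A quick illustration in Python, whose json module reports the same condition as "Extra data":

```python
import json

# "null" parses as a complete JSON value; the stray "x" at index 4 is
# the same situation as "Unexpected non-whitespace character after
# JSON at position 4" in the browser.
try:
    json.loads("nullx")
except json.JSONDecodeError as e:
    print(e)  # Extra data: line 1 column 5 (char 4)
```

A common fix is to re-download the workflow file via GitHub's "Raw" button, so the saved file contains only the JSON itself.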

  • @MexicanWawix  6 months ago

    So just to be clear, is 12 GB of VRAM enough to run this workflow, or is 18 GB needed?

  • @TBjunk25  7 months ago

    Could you speed up videos to make rendering faster?

  • @spenzakwsx4430  6 months ago

    Great video, but where can I find the "Video Restyler" workflow? I have checked on your website, but nothing.

    • @NerdyRodent  6 months ago

      Currently it's the next-to-last one in the list, as I added the SDXL Reposer after this.

  • @pragmaticcrystal  7 months ago  +1

    💛

  • @IntiArtDesigns  7 months ago  +1

    I've installed IPAdapter and run 'install missing custom nodes', but I still seem to be missing some requirements for your workflow.
    PrepImageForClipVision
    IPAdapterModelLoader
    IPAdapterApply
    Where can i get these and how do i install them? Thanks.

    • @vtchiew5937  7 months ago  +2

      I got the same problem, using ComfyUI manager and performing an "update all" does the trick for me

    • @IntiArtDesigns  6 months ago

      Well, that added the ones that were missing, but now new ones are missing that weren't before. wtf?
      CheckpointLoaderSimpleWithNoiseSelect
      ADE_AnimateDiffUniformContextOptions
      ADE_AnimateDiffLoRALoader
      ADE_AnimateDiffLoaderWithContext
      I don't understand. @@vtchiew5937

    • @RonnieMirands  6 months ago

      I am missing some nodes and can't find a solution.

  • @d1agram4  6 months ago

    I wish ComfyUI had a way to swap the spline links for straight/angled ones so I could see where things are plugging in more easily.

    • @NerdyRodent  6 months ago

      You can… just change your settings. However, during an in-depth and incredibly scientific study I did, 75% of people considered Spline to be superior to the other 3 options…

  • @galaxyvulture6649  6 months ago

    Is there a way to use Stable Diffusion without using my GPU? It just takes too long to generate, but I like the workspaces.

    • @NerdyRodent  6 months ago

      Yup, a Huggingface Space won't use your GPU :)

  • @bwheldale  7 months ago

    Although I have 'ReActor Node 0.1.0 for ComfyUI' installed, I'm still getting a 'ReActorFaceSwap node missing' error! It works without ReActor, but how do I fix this error? I NEED to try all those nodes!

    • @NerdyRodent  6 months ago

      Did you restart after the node install?

    • @bwheldale  6 months ago

      @@NerdyRodent Yes, I did, but it was the prebuilt Insightface package that was missing; installing it solved the problem. I'm not sure why having Visual Studio 2022 didn't suffice in my case. PS: My previous reply was deleted; I guess the link to the 'troubleshooting' section for the 'comfy-reactor-node' is the reason. PPS: I love the workflow content. I'm still fiddling with it all and have been for the last few days. Fixing faces afterwards so it can see them is what I'm working on learning now.

  • @ooiirraa  6 months ago

    I have tried a lot of workflows, but the video always changes drastically every 2 seconds (every 16 frames). Why might that be?

    • @NerdyRodent  6 months ago  +2

      This one doesn’t do that, have you tried it? 😀

  • @ehsankholghi  3 months ago

    What GPU are you using?

  • @twilightfilms9436  7 months ago

    Is it possible to do the same with A1111?

    • @NerdyRodent  7 months ago

      More than likely! Just do each step manually along the way

  • @ltcshow6175  4 months ago

    Thanks! It took me a while to figure out where to get your workflow on your Git, lol, but once I did... well, it is almost 2 am, the wife went to bed hours ago, and I usually join her, so yeah. I have an issue, though, and it is driving me crazy:
    A) If it can work the way that I think it can, then, well, damn, I've got the best process to make an animated video.
    B) Same as A), it must be possible, because I've had 16 frames of pure awesome. Then I went to the whole video and, wow, still awesome, but those first 16 frames were completely different. I thought it was a different seed, so I went back and redid it with the seed I figured it was, and nope. I did this twice on different seeds, so I am using the right seed on one of them. Then I changed the frame cap back to 16 and bam, the same 16 frames of pure awesome. But if I change the frame cap, I get a different generation.
    C) Is there a solution to this, and if so, how can I implement it in the workflow? If you don't know but think you have enough knowledge for a workaround, that would be amazing, because I feel like I'm on the edge of making something kick-ass and pure awesome. I could also do a screen-sharing session just to show you what I'm getting.

  • @luclaura1308  7 months ago

    Can we add LORAs to this workflow?

  • @pablocastillopalomino3536  6 months ago

    I DON'T UNDERSTAND WHY IT SAYS THIS WAS MADE IN STABLE DIFFUSION WHEN THE SOFTWARE I SEE IS DIFFERENT. CAN ANYONE EXPLAIN?

    • @ltcshow6175  4 months ago

      You can use Stable Diffusion in any software, which explains it, and SD = Stable Diffusion.

  • @NorsemanAIArt  7 months ago  +1

    I wish I could get over the ComfyUI barrier......I am stuck in a1111 :/// LOVE your videos though 😍😍

    • @NerdyRodent  7 months ago  +5

      I thought the same, now I’m addicted and A1111 feels clunky 😆

    • @velvetjones8634  6 months ago

      I was loving Comfy until I bashed my head against a wall every day for a week trying to get Reactor to work.
      I’ve since gone back to A1111.

  • @studioGZ  7 months ago  +1

    😂❤🎉 WOW!

  • @Democratese  7 months ago

    Has anyone tested this workflow in Google colab?

    • @NerdyRodent  7 months ago  +1

      I haven’t, but don’t see why it wouldn’t work 😀

    • @Democratese  7 months ago

      @@NerdyRodent I've had some trouble with dependencies in colab. Will give it a try though.

  • @keisaboru1155  7 months ago

    I did a video like this just recently on A1111, and it was fine, without awful flicker and stuff. But, I don't know, it seems like no one wants a simple solution.

  • @dkontey6421  6 months ago

    This is not working; the Load IPAdapter and CLIP Vision nodes are erroring, so the Video Combine isn't working!

    • @NerdyRodent  6 months ago  +1

      You can work through these steps to fix your ComfyUI setup - github.com/nerdyrodent/AVeryComfyNerd#troubleshooting

  • @Iancreed8592  7 months ago  +3

    We are just a skip and a hop away from Hollywood becoming irrelevant. Finally we'll get decent shows and movies without political bs.

    • @NerdyRodent  7 months ago

      Home videos are making a comeback 😉

  • @WanderlustWithT  7 months ago

    Still looks god awful but let's allow this technology to improve, it's going to be amazing someday.

  • @reggaemarley4617  4 months ago

    Would 8 gigs of VRAM be okay? 🥹