ComfyUI: Master Morphing Videos with Plug-and-Play AnimateDiff Workflow (Tutorial)

  • Published 16 Jun 2024
  • Push your creative boundaries with ComfyUI using a free plug-and-play workflow! Generate captivating loops, eye-catching intros, and more! This free and powerful tool is perfect for creators of all levels.
    Chapters:
    00:00 Sample Morphing Videos
    01:15 Downloads
    02:09 Folder locations
    02:14 Workflow Overview
    04:10 Generating first Morph
    04:40 Running the Workflow
    04:47 Quick bonus tips
    06:35 Supercharge the Workflow
    08:58 Getting more variation in batches
    10:31 Scaling up
    10:59 Scaling up with model
    11:35 This is pretty cool
    I'll show you how to make morphing videos and use images to create stunning animations and videos.
    You'll also learn how to use text prompts to morph between anything you can imagine!
    Plus, there are some valuable tips and tricks to streamline the ComfyUI morphing video workflow and save time while creating your own mind-bending visuals.
    #########
    Links:
    ########
    Workflow: Morpheus Modified workflow for text to image to video
    openart.ai/workflows/abeatech...
    Tutorial for Batch Generating Text to Image using external text file:
    • ComfyUI: Batch Generat...
    Workflow: ipiv's Morph - img2vid AnimateDiff LCM:
    civitai.com/models/372584?mod...
    Note: See 02:09 of the video for Model folder locations
    AnimateDiff:
    huggingface.co/wangfuyun/Anim...
    VAE:
    huggingface.co/stabilityai/sd...
    AnimateLCM LORA:
    huggingface.co/wangfuyun/Anim...
    Clip Vision Model ViT-H:
    CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors download and rename:
    huggingface.co/h94/IP-Adapter...
    Clip Vision Model ViT-G:
    CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors download and rename:
    huggingface.co/h94/IP-Adapter...
    IPADAPTER MODEL:
    huggingface.co/h94/IP-Adapter...
    Control Net (QRCode):
    huggingface.co/monster-labs/c...
    Motions Animations for AnimateDiff: civitai.com/posts/2011230
    ################
    Music: Bensound.com/royalty-free-music
    License code: LU8J6ZAOXHXNOAI4
  • Science & Technology

Comments • 96

  • @ted328 · a month ago +1

    Literally the answer to my prayers, have been looking for exactly this for MONTHS

  • @alessandrogiusti1949 · a month ago

    After following many tutorials, you are the only one getting me the results in a very clear way. Thank you so much!

  • @SylvainSangla · a month ago

    Thanks a lot for sharing this, a very precise and complete guide! 🥰
    Cheers from France!

  • @AlvaroFCelis · a month ago +1

    Thank you so much! Very clear, and organized. Subbed..

  • @MSigh · a month ago

    Excellent! 👍👍👍

  • @mcqx4 · a month ago +1

    Nice tutorial, thanks!

    • @abeatech · a month ago +1

      Glad it was helpful!

  • @popo-fd3fr · a month ago

    Thanks man. I just subscribed

  • @TechWithHabbz · a month ago +1

    You're about to blow up, bro. Keep it going. Btw, I was subscriber #48 😁

  • @SF8008 · a month ago +1

    Amazing! Thanks a lot for this!!!
    Btw, which nodes do I need to disable in order to get back to the original flow (the one based only on input images and not on prompts)?

  • @velvetjones8634 · 2 months ago

    Very helpful, thanks!

  • @MariusBLid · a month ago +1

    Great stuff man! Thank you 😀 What are your specs, btw? I only have 8GB of VRAM.

  • @zarone9270 · a month ago

    thx Abe!

  • @hoptoad · 2 days ago

    This is great!
    Do you know if there is a way to "batch" many variations, where you give each of the four guidance images a folder and it runs through and does a new animation with different source images multiple times?

  • @paluruba · a month ago +2

    Thank you for this video! Any idea what to do when the videos are blurry?

  • @user-yo8pw8wd3z · a month ago

    Good video. Where can I find the link to the additional video masks? I don't see it in the description.

  • @petertucker455 · 11 days ago

    Hi Abe, I found the final animation output is wildly different in style and aesthetic from the initial input images. Any tips for retaining the overall style? Also, have you gotten this workflow to work with SDXL?

  • @gorkemtekdal · 2 months ago +1

    Great video!
    I want to ask: can we use an init image for this workflow like we do in Deforum?
    I need the video to start with a specific image on the first frame, then change through the prompts.
    Do you know how that's possible in ComfyUI / AnimateDiff?
    Thank you!

    • @abeatech · a month ago

      I haven't personally used Deforum, but it sounds like it's the same concept. This workflow uses 4 init images at different points during the 96 frames to guide the animation. The IPAdapter and ControlNet nodes do most of the heavy lifting, so prompts aren't really needed, but I've used them to fine-tune outputs. I'd encourage you to try it out and see if it gives you the results you're looking for.
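For intuition, the scheduling described in the reply above can be sketched in plain Python. This is a simplified illustration, not the actual workflow node code: it just evenly spaces 4 guidance images over 96 frames and crossfades each segment linearly (the function names are hypothetical).

```python
# Simplified sketch (not the actual node code) of how four guidance
# images could be spread over a 96-frame animation: each image anchors
# an evenly spaced keyframe and fades linearly into the next segment.

def keyframe_positions(num_images: int, total_frames: int) -> list:
    """Evenly spaced anchor frames, one per guidance image."""
    step = total_frames // num_images
    return [i * step for i in range(num_images)]

def fade_weight(frame: int, start: int, end: int) -> float:
    """Linear weight (1.0 -> 0.0) of an image between two anchors."""
    if end <= start:
        return 1.0
    t = (frame - start) / (end - start)
    return max(0.0, min(1.0, 1.0 - t))

positions = keyframe_positions(4, 96)  # four anchors across 96 frames
```

This also explains why simply raising 96 to a larger number may not visibly slow the transitions: the fade masks in the workflow are built for a fixed frame count, so the anchors and fades have to be rescheduled to match the new length.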

  • @Injaznito1 · a month ago

    NICE! I tried it and it works great. Thanx for the tut! Question though: I tried changing the 96 to a larger number so the changes between pictures take a bit longer, but I don't see any difference. Is there something I'm missing? Thanx!

  • @BrianDressel · a month ago

    Excellent walkthrough of this, thanks.

  • @TheNexusRealm · a month ago

    cool, how long did it take you?

  • @cabb_ · a month ago

    ipiv did an incredible job with this workflow! Thanks for the tutorial.

  • @MACH_SDQ · 28 days ago

    Goooooood

  • @ComfyCott · a month ago

    Dude, I loved this video! You explain things very well, and I love how you explain in detail as you build out strings of nodes! Subbed!

  • @aslgg8114 · a month ago +1

    What should I do to make the reference image persistent?

  • @amunlevy2721 · a month ago +6

    Getting errors that nodes are missing even after installing IP Adapter Plus... missing nodes: IPAdapterBatch and IPAdapterUnifiedLoader

    • @white_friend · 17 hours ago

      Try 'Update All' in the Manager menu.

  • @rowanwhile · a month ago

    Brilliant video. thanks so much for sharing your knowledge.

  • @evgenika2013 · 11 days ago

    Everything is great, but I get a blurry result on my horizontal artwork. Any suggestion on what to check?

  • @Halfgawd_Halfdevil · a month ago

    Managed to get this running. It does okay, but I am not seeing much influence from the ControlNet motion video input. Any way to make that more apparent? I've also noticed a Shutterstock overlay near the bottom of the clip; it is translucent but noticeable, and kind of ruins everything. Any way to eliminate that artifact?

  • @SapiensVirtus · 8 days ago

    Hi! Beginner's question: if I run software like ComfyUI locally, does that mean that all the AI art, music, and works I generate are free to use for commercial purposes? Or am I violating copyright terms? I am searching for more info about this but I get confused. Thanks in advance!

  • @Caret-ws1wo · 25 days ago +1

    Hey, my animations come out super blurry and are nowhere near as clear as yours. I can barely make out the monkey; it's just a bunch of moving brown lol. Is there a reason for this?

  • @wagmi614 · a month ago

    Could one add some kind of IP adapter to add your own face to the transform?

  • @MichaelL-mq4uw · a month ago

    Why do you need ControlNet at all? Can it be skipped to morph without any mask?

  • @unemployed9665 · 9 days ago

    How can I get a progress bar at the top of the screen like yours? Must I reinstall ComfyUI for this workflow? I installed crystools but the progress bar doesn't appear at the top :/ Thank you for your video, you are a god!

  • @GiancarloBombardieri · 7 days ago

    It worked fine, but now it throws an error at the Load Video Path node. Is there an update?

  • @chinyewcomics · 16 days ago

    Hi, does anybody know how to add more images to create a longer video?

  • @pro_rock1910 · a month ago

    ❤‍🔥❤‍🔥❤‍🔥

  • @produccionesvoid · 19 days ago

    When I use Manager → Install Missing Nodes, it fails and says: "To apply the installed/updated/disabled/enabled custom node, please RESTART ComfyUI. And refresh browser." What should I do?

  • @tetianaf5172 · a month ago

    Hi! I get this error every time: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm). Even though I use a 1.5 checkpoint. Please help!

  • @saundersnp · a month ago

    I've encountered this error: Error occurred when executing RIFE VFI:
    Tensor type unknown to einops

  • @Ai_Gen_mayyit · a month ago

    Error occurred when executing VHS_LoadVideoPath:
    module 'cv2' has no attribute 'VideoCapture'

  • @frankiematassa1689 · a month ago

    Error occurred when executing IPAdapterBatch:
    Error(s) in loading state_dict for ImageProjModel:
    size mismatch for proj.weight: copying a param with shape torch.Size([3072, 1280]) from checkpoint, the shape in current model is torch.Size([3072, 1024]).
    I followed this video exactly and am only using SD 1.5 checkpoints. I cannot find anywhere how to fix this.
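A size mismatch like the one quoted above (1280 vs 1024) typically means the CLIP Vision encoder and the IPAdapter weights disagree on embedding width: ViT-H models produce 1024-dim image embeddings while ViT-bigG produces 1280-dim ones. The sketch below is a hypothetical sanity check, not ComfyUI code; the file-name-to-encoder pairings are assumptions inferred from the models linked in the description.

```python
# Hypothetical sanity check for CLIP Vision / IP-Adapter pairing.
# ViT-H embeds images in 1024 dims, ViT-bigG in 1280 dims; loading
# adapter weights built for one encoder while the other is selected
# produces the proj.weight size mismatch quoted in the comment.
# The pairings below are assumptions based on the video's model links.

EMBED_DIM = {"ViT-H": 1024, "ViT-bigG": 1280}

ADAPTER_ENCODER = {
    "ip-adapter-plus_sd15.safetensors": "ViT-H",
    "ip-adapter_sd15_vit-G.safetensors": "ViT-bigG",
}

def check_pairing(adapter: str, encoder: str) -> bool:
    """True when the adapter's expected CLIP Vision encoder is the one loaded."""
    return ADAPTER_ENCODER.get(adapter) == encoder
```

In practice: if the error says the checkpoint has shape [..., 1280] but the model expects [..., 1024], switch the Clip Vision model (or the IPAdapter model) so the pair matches.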

  • @Ai_Gen_mayyit · a month ago

    Error occurred when executing VHS_LoadVideoPath:
    module 'cv2' has no attribute 'VideoCapture'
    your video timestamp: 04:20

  • @TinyLLMDemos · a month ago

    Where do I get your input images?

  • @kwondiddy · a month ago

    I'm getting errors when trying to run: a few items that say "value not in list: ckpt_name", "value not in list: lora_name", and "value not in list: vae_name".
    I'm certain I put all the downloads in the correct folders and named everything appropriately... Any thoughts?

  • @ImTheMan725 · a month ago +1

    Why can't you morph 20 or 50 pictures?

  • @CoqueTornado · 2 months ago +1

    Great tutorial. I am wondering... how much VRAM does this setup need?

    • @abeatech · a month ago +1

      I've heard of people running this successfully on as little as 8GB of VRAM, but you'll probably need to turn off the frame interpolation. You can also try running this in the cloud at OpenArt (but your checkpoint options might be limited): openart.ai/workflows/abeatech/tutorial-morpheus---morphing-videos-using-text-or-images-txt2img2vid/fOrrmsUtKEcBfopPrMXi

    • @CoqueTornado · a month ago

      @@abeatech Thank you!! Will try the two suggestions! Congrats on the channel!

  • @brockpenner1 · a month ago

    ComfyUI threw an error in the VRAM Debug node of Frame Interpolation:
    Error occurred when executing VRAM_Debug:
    VRAM_Debug.VRAMdebug() got an unexpected keyword argument 'image_passthrough'
    Any help would be appreciated!

  • @AI-Efast · a month ago

    Why is my generated animation very different from the reference images?

  • @user-vm1ul3ck6f · a month ago +2

    Help! I encountered this error while running it:
    Error occurred when executing IPAdapterUnifiedLoader:
    module 'comfy.model_base' has no attribute 'SDXL_instructpix2pix'

    • @abeatech · a month ago

      Sounds like it could be a couple of things:
      a) you might be trying to use an SDXL checkpoint, in which case try an SD1.5 one. The AnimateDiff model in the workflow only works with SD1.5.
      or
      b) an issue with your IPAdapter node. You can try making sure the ipadapter model is downloaded and in the right folder, or reinstalling the ComfyUI_IPAdapter_plus node (delete the custom node folder and reinstall from Manager or GitHub).

  • @TinyLLMDemos · a month ago

    How do I kick it off?

  • @AlexDisciple · 20 days ago

    Thanks for this. Do you know what could be causing this error: Error occurred when executing KSampler:
    Given groups=1, weight of size [320, 5, 3, 3], expected input[16, 4, 64, 36] to have 5 channels, but got 4 channels instead

    • @AlexDisciple · 20 days ago

      I figured out the problem: I was using the wrong ControlNet. I am having a different issue though, where my initial output is very "noisy", as if there was latent noise all over it. Is it important for the source images to be in the same aspect ratio as the output?

    • @AlexDisciple · 20 days ago

      Ok, found the solution here too: I was using a photorealistic model, which somehow the workflow doesn't seem to like. Switching to Juggernaut fixed it.

  • @cohlsendk · a month ago

    Is there a way to increase the frames/batch size for the FadeMask? Everything over 96 messes up the FadeMask -.-''

  • @axxslr8862 · a month ago +1

    In my ComfyUI there is no Manager option... help please

  • @yakiryyy · 2 months ago

    Hey! I've managed to get this working, but I was under the impression this workflow would animate between the given reference images.
    The results I get are pretty different from the reference images.
    Am I wrong in my assumption?

    • @abeatech · 2 months ago

      You're right - it uses the reference images (4 frames vs 96 total frames) as a starting point and generates additional frames, but the results should still be in the same ballpark. If you're getting drastically different results, it might be a mix of your subject + SD1.5 model. I've had the best results by using a similar type of model (photograph, realism, anime, etc.) for both the image generation and the animation.

    • @AI-Efast · a month ago

      @@abeatech Is there any way to make the result more like the reference images?

  • @WalkerW2O · a month ago

    Hi Abe aTech, very informative, and I like your work very much.

  • @devoiddesign · a month ago

    Hi! Any suggestion for a missing IPAdapter? I am confused because I didn't get an error to install or update, and I have all of the IPAdapter nodes installed... the process stopped on the "IPAdapter Unified Loader" node.
    !!! Exception during processing!!! IPAdapter model not found.
    Traceback (most recent call last):
    File "/workspace/ComfyUI/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    File "/workspace/ComfyUI/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "/workspace/ComfyUI/execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "/workspace/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/IPAdapterPlus.py", line 453, in load_models
    raise Exception("IPAdapter model not found.")
    Exception: IPAdapter model not found.

    • @tilkitilkitam · a month ago

      same problem

    • @tilkitilkitam · a month ago +1

      ip-adapter_sd15_vit-G.safetensors - install this from the Manager

    • @devoiddesign · a month ago

      @@tilkitilkitam Thank you for responding.
      I already had the model installed, but it was not seeing it. I ended up restarting ComfyUI completely after updating everything from the Manager, instead of only doing a hard refresh, and that fixed it.

  • @Adrianvideoedits · a month ago

    You didn't explain the most important part, which is how to run the same batch with and without upscale. It generates new batches every time you queue the prompt, so the preview batch is a waste of time. I like the idea though.

    • @7xIkm · 10 days ago

      Idk, maybe a fixed seed? Efficiency nodes?

  • @rooqueen6259 · a month ago

    Has anyone else run into model loading stalling? For me, "loading 2 new models" stops at 0%, and in another case "loading 3 new models" reached 9% and no longer continues. What is the problem? :c

  • @creed4788 · a month ago

    VRAM required?

    • @Adrianvideoedits · a month ago

      16GB for upscaled

    • @creed4788 · a month ago

      @@Adrianvideoedits Could you make the videos first, and then close and load the upscaler to improve the quality? Or does it have to be all together, so it can't be done in 2 different workflows?

    • @Adrianvideoedits · 29 days ago

      @@creed4788 I don't see why not. But upscaling itself takes the most VRAM, so you would have to find an upscaler for lower-VRAM cards.

  • @ErysonRodriguez · a month ago

    Noob question: why are my results so different from my input?

    • @ErysonRodriguez · a month ago

      I mean, the images I loaded produce a different output instead of transitioning.

    • @abeatech · a month ago

      The results will not be exactly the same, but they should still be in the same ballpark. If you're getting drastically different results, it might be a mix of your subject + SD1.5 model. I've had the best results by using a similar type of model (photograph, realism, anime, etc.) for both the image generation and the animation. It's also worth double-checking that you have the VAE and LCM LoRA selected in the settings module.

  • @3djramiclone · a month ago

    This is not for beginners, put that in the description, mate.

    • @kaikaikikit · a month ago

      What are you crying about... go find a beginner class if it's too hard to understand...

  • @zems_bongo · 21 days ago

    I don't understand why it doesn't work for me; I get this type of message:
    Error occurred when executing CheckpointLoaderSimple:
    'NoneType' object has no attribute 'lower'
    File "/home/ubuntu/ComfyUI/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    File "/home/ubuntu/ComfyUI/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "/home/ubuntu/ComfyUI/execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "/home/ubuntu/ComfyUI/nodes.py", line 516, in load_checkpoint
    out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
    File "/home/ubuntu/ComfyUI/comfy/sd.py", line 446, in load_checkpoint_guess_config
    sd = comfy.utils.load_torch_file(ckpt_path)
    File "/home/ubuntu/ComfyUI/comfy/utils.py", line 13, in load_torch_file
    if ckpt.lower().endswith(".safetensors"):
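For anyone hitting the trace above: the 'NoneType' comes from the checkpoint path lookup returning None when the selected file is not actually on disk, so the later `ckpt.lower()` call fails. Below is a minimal sketch of that failure mode; the lookup function and the model file name are hypothetical stand-ins, not ComfyUI's actual code.

```python
# Simplified stand-in for ComfyUI's checkpoint path lookup: it returns
# None when the selected checkpoint name is not found, which is what
# makes the later ckpt.lower() call raise AttributeError.

def get_full_path(available: dict, name: str):
    """Return the full path for a model name, or None if missing."""
    return available.get(name)

def load_torch_file(ckpt):
    # Defensive check the stock code lacks here: fail with a clear
    # message instead of "'NoneType' object has no attribute 'lower'".
    if ckpt is None:
        raise FileNotFoundError(
            "Checkpoint not found - re-select ckpt_name in the loader node"
        )
    return ckpt.lower().endswith(".safetensors")

# Hypothetical example inventory of installed checkpoints.
models = {"example_sd15.safetensors": "/models/checkpoints/example_sd15.safetensors"}
```

In practice the fix is to open the CheckpointLoaderSimple node and re-select a checkpoint that actually exists in models/checkpoints, so the name in the workflow matches a real file.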

  • @miukatou · 23 days ago

    I'm sorry, I need help. I'm a complete beginner. I can't find any SD 1.5 model anywhere. Where do I download it? Also, I cannot find an 'ipadapter' folder in my models path. Do I need to create a folder named 'ipadapter' myself? 🥲

  • @user-vm1ul3ck6f · a month ago +1

    Help! I encountered this error while running it

    • @user-vm1ul3ck6f · a month ago +1

      Error occurred when executing IPAdapterUnifiedLoader:
      module 'comfy.model_base' has no attribute 'SDXL_instructpix2pix'

    • @abeatech · a month ago

      Sounds like it could be a couple of things:
      a) you might be trying to use an SDXL checkpoint, in which case try an SD1.5 one. The AnimateDiff model in the workflow only works with SD1.5.
      or
      b) an issue with your IPAdapter node. You can try making sure the ipadapter model is downloaded and in the right folder, or reinstalling the ComfyUI_IPAdapter_plus node (delete the custom node folder and reinstall from Manager or GitHub).

    • @Halfgawd_Halfdevil · a month ago

      @@abeatech The note says to install it in the clip_vision folder, but that can't be it: none of the preloaded models are there, and the newly installed one does not appear in the dropdown selector. If it's not that folder, where are you supposed to install it? And if the node is bad, why is it used in the workflow in the first place? Shouldn't it just use the IPAdapter Plus node?