CONSISTENT VID2VID WITH ANIMATEDIFF AND COMFYUI

  • Published Jun 3, 2024
  • Get 4 FREE MONTHS of NordVPN: nordvpn.com/enigmatic
    Topaz Labs Affiliate: topazlabs.com/ref/2377/
    ComfyUI and AnimateDiff Tutorial on consistency in VID2VID.
    HOW TO SUPPORT MY CHANNEL
    -Support me by joining my Patreon: / enigmatic_e
    _________________________________________________________________________
    SOCIAL MEDIA
    -Join my discord: / discord
    -Twitch: / 8bit_e
    -Instagram: / enigmatic_e
    -Tik Tok: / enigmatic_e
    -Twitter: / 8bit_e
    - Business Contact: esolomedia@gmail.com
    ________________________________________________________________________
    My PC Specs
    GPU: RTX 4090
    CPU: 13th Gen Intel(R) Core(TM) i9-13900KF
    MEMORY: CORSAIR VENGEANCE 64 GB
    Stabilized Models: huggingface.co/manshoety/AD_S...
    My Workflow:
    mega.nz/file/uFxXCDQJ#eN_laa1...
    IP-ADAPTER MODELS:
    huggingface.co/h94/IP-Adapter...
    CLIP VISION MODELS:
    huggingface.co/openai/clip-vi...
    huggingface.co/comfyanonymous...
    Folders
    The CLIP Vision model goes in the ComfyUI/models/clip_vision folder
    The IPAdapter model goes in the ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/models folder
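A quick way to verify the files landed where those nodes look for them; a minimal sketch, assuming an install rooted at ./ComfyUI and placeholder checkpoint file names:

```python
# Sanity-check that the downloaded models are in the folders named above.
# The root path and the file names are placeholders -- adjust them to your
# install and to whichever checkpoints you actually downloaded.
from pathlib import Path

comfy = Path("ComfyUI")
expected = [
    comfy / "models/clip_vision/model.safetensors",
    comfy / "custom_nodes/ComfyUI_IPAdapter_plus/models/ip-adapter_sd15.safetensors",
]
for path in expected:
    print(("found:   " if path.exists() else "MISSING: ") + str(path))
```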
    0:00 Intro
    0:36 Nord VPN
    1:16 Workflow Setup
    5:16 FreeU
    6:59 IPAdapter
    10:59 Face Restore
    11:45 Controlnets
    13:45 Generate
    17:21 Upscaling

Comments • 134

  • @enigmatic_e
    @enigmatic_e  6 months ago

    Get 4 FREE MONTHS of NordVPN: nordvpn.com/enigmatic

  • @FCCEO
    @FCCEO 5 months ago

    dude! this is exactly what I have been looking for! I love the way you explain and the stuff you are covering. Thank you so much for sharing this valuable info. Subscribed right away!

  • @BrandonFoy
    @BrandonFoy 6 months ago +1

    Thanks for making the time to explain and share your workflow, dude! Super appreciate it!

  • @reyniss
    @reyniss 4 months ago

    Great stuff, super helpful, finally got to where I wanted in ComfyUI and vid2vid thanks to you!

  • @RealitySlipTV
    @RealitySlipTV 6 months ago

    Results look great. So many softwares, so little time. Workflow looks nice. Looks like I'll need to deep dive on this at some point.

  • @user-nk4ov2xh4h
    @user-nk4ov2xh4h 6 months ago +1

    Dude, you’re the best! Glad to see you have more sponsors and advertisers :) All the best to you and your channel 💪

  • @conniehe2912
    @conniehe2912 3 months ago

    Wow, great workflow! Thanks for sharing!

  • @simonzapata1636
    @simonzapata1636 6 months ago +4

    Your videos are so helpful. Thank you for sharing your knowledge with us. Gracias!

  • @Lorentz_Factor
    @Lorentz_Factor 6 months ago +1

    You can also skip the LoRAs by selecting the node and pressing Ctrl+B; it sends the signal through without the LoRA Loader running its execution step.

  • @petersvideofile
    @petersvideofile 6 months ago

    Awesome video, thanks so much!

  • @ysy69
    @ysy69 4 months ago

    Great tutorial, thank you

  • @mynameisChesto
    @mynameisChesto 6 months ago

    I cannot wait to start playing around with this. Putting together a PC build for this reason.

  • @StillnessMoving
    @StillnessMoving 6 months ago

    Hell yeah, this is amazing!

  • @GoodArt
    @GoodArt 5 months ago

    Thanks dude, you rule.
    Best tute out.

  • @Distop-IA
    @Distop-IA 6 months ago

    amazing stuff!

  • @FullOfHabits
    @FullOfHabits 2 months ago +1

    i love you. thank you so much

  • @graylife_
    @graylife_ 6 months ago

    great work man, thank you

  • @digital_magic
    @digital_magic 6 months ago

    Awesome, great video... learned a lot 🙂

  • @VairalKE
    @VairalKE 1 month ago

    I liked MDMZ ... till I found this channel. Lovely work. Keep it up.

  • @JoeMultimedia
    @JoeMultimedia 6 months ago +1

    Amazing, thanks a lot.

  • @risasgrabadas3663
    @risasgrabadas3663 6 months ago

    What is the folder where the FaceRestoreModelLoader node models are placed?

  • @andyguo554
    @andyguo554 6 months ago

    Great video! Could you also share the input video and images? Thanks a lot.

  • @mauriciogianelli1573
    @mauriciogianelli1573 6 months ago

    Is there a way to see a frame of the KSampler's progress before it finishes? I mean, in A1111 you could open the output folder and see the batch progress before it ended. Thanks!

  • @user-uv4qe2zs1z
    @user-uv4qe2zs1z 4 months ago

    thank you

  • @keepitshort4208
    @keepitshort4208 5 months ago

    What's the learning curve for ComfyUI? Or is there someone you recommend who teaches ComfyUI?

  • @esferitasoy
    @esferitasoy 6 months ago

    thx

  • @nicocro00
    @nicocro00 5 months ago

    Where do you run your workflows? Do you use your own desktop, and with what GPU? Or services like RunPod etc.?

  • @abovethevoid653
    @abovethevoid653 6 months ago

    In the video, your OpenPose preprocessor (titled "DWPose Estimation") has more options than the one in the workflow, which is called "OpenPose Pose Recognition" and doesn't have the bbox_detector and pose_estimator options. Did you get that preprocessor from a custom node?

    • @enigmatic_e
      @enigmatic_e  6 months ago

      Does the workflow not have DWPose?

    • @abovethevoid653
      @abovethevoid653 6 months ago

      @enigmatic_e It does, but it's not the same node, I think. The one in the video has more options.

    • @enigmatic_e
      @enigmatic_e  6 months ago

      @abovethevoid653 Mmm, not sure why it's different. I wonder if it's an updated or outdated version, maybe.

  • @zensack7310
    @zensack7310 6 months ago

    By the way, I see that everyone places the width and height in external nodes. I change the values directly in the upscale node; does that alter anything?

    • @enigmatic_e
      @enigmatic_e  6 months ago

      I would just try it the way you have it set up and see how it looks.

  • @wpahp
    @wpahp 6 months ago

    When I try to open your workflow I get a bunch of missing nodes. How do I install/add those on a Mac? :/ Thanks
    ControlNetLoaderAdvanced
    CheckpointLoaderSimpleWithNoiseSelect
    OpenposePreprocessor
    VHS_VideoCombine
    LeReS-DepthMapPreprocessor
    ADE_AnimateDiffLoaderV1Advanced
    IPAdapterModelLoader
    IPAdapterApply
    PrepImageForClipVision
    HEDPreprocessor
    FaceRestoreModelLoader
    FaceRestoreCFWithModel
    VHS_LoadVideo
    Integer

  • @user-jl4ps7qw4p
    @user-jl4ps7qw4p 6 months ago

    amazing

  • @lei.1.6
    @lei.1.6 5 months ago

    Hey, I get this error and I've been trying my best to troubleshoot, to no avail:
    Error occurred when executing KSampler:
    Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
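This error is PyTorch's generic complaint that one tensor in a matrix multiply lives on the CPU while another lives on the GPU (commonly a model some node left on the CPU). A minimal reproduction and the usual fix, as a sketch:

```python
import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

a = torch.randn(4, 4, device=device)  # e.g. weights already moved to the GPU
b = torch.randn(4, 4)                 # e.g. an input tensor left on the CPU

# With device == cuda:0, mixing the two in one op raises exactly:
#   "Expected all tensors to be on the same device, but found at least two
#    devices, cpu and cuda:0! (... argument mat1 ...)"
# out = torch.addmm(a, b, a)

# Fix: move every tensor (or the whole model) to one device first.
b = b.to(device)
out = torch.addmm(a, b, a)  # bias=a, mat1=b, mat2=a -- all on one device now
print(out.device)
```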

  • @fanyang2492
    @fanyang2492 18 days ago

    Could you explain why you set the resolution to 704 in the HED Soft-Edge Lines node?

    • @enigmatic_e
      @enigmatic_e  17 days ago

      I might have been playing around with parameters. It makes the HED resolution higher, but I can't remember if it makes much of a difference.

  • @zensack7310
    @zensack7310 6 months ago

    Hello, thanks for the video. I have been fighting with this for several days: I removed the background of the character, leaving a black background, then I created the HED and OpenPose passes, both perfectly backgroundless, and I also added ip2p. When creating the video the character appears perfect, but the background is dark with stripes and lights that have nothing to do with the prompt. I want it outdoors with sunlight, but it is dark like a dark room. (If I bypass AnimateDiff, it makes exactly the image in the prompt I'm writing.)

    • @enigmatic_e
      @enigmatic_e  6 months ago

      You will always have stuff generated in the background. You need to use external software to remove the background, like with a mask or rotoscoping.

  • @SadPanda449
    @SadPanda449 6 months ago

    Where do you get the safetensors model for CLIP Vision on your IPAdapter? I can't seem to get it to work. Thanks for this video! It's helped a ton.

    • @enigmatic_e
      @enigmatic_e  6 months ago

      I just included it in the description. Sorry about that.

    • @SadPanda449
      @SadPanda449 6 months ago

      Ahhh! You're the best. Thank you! @enigmatic_e

    • @SadPanda449
      @SadPanda449 6 months ago

      @enigmatic_e Have you gotten this before with IPAdapter? I'm thinking my issue isn't CLIP Vision related now, but thank you so much for adding the file to the description!
      Error occurred when executing IPAdapterApply:
      Error(s) in loading state_dict for ImageProjModel:
      size mismatch for proj.weight: copying a param with shape torch.Size([3072, 1024]) from checkpoint, the shape in current model is torch.Size([3072, 1280]).

    • @louprgb5711
      @louprgb5711 5 months ago

      @SadPanda449 Hey, got the same problem, did you find the solution? Thanks

    • @Kontaktfilms
      @Kontaktfilms 4 months ago

      @enigmatic_e I'm getting the same error as SadPanda... Error(s) in loading state_dict for ImageProjModel:
      size mismatch for proj.weight: copying a param with shape torch.Size([3072, 1024]) from checkpoint, the shape in current model is torch.Size([3072, 768]).
      Any way to fix this EniE?
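On the size-mismatch errors in this thread: the second dimension of proj.weight is the width of the CLIP image embedding the IPAdapter checkpoint was trained against (1024 for OpenCLIP ViT-H, 1280 for ViT-bigG, 768 for OpenAI ViT-L), so the error means the loaded CLIP Vision model and the IPAdapter model come from different families. A sketch for checking what a checkpoint expects, assuming the safetensors package and a placeholder file name (the exact key varies between checkpoints):

```python
# Inspect which CLIP Vision embedding width an IPAdapter checkpoint expects,
# so it can be paired with the matching CLIP Vision model.
from safetensors.torch import load_file

state = load_file("ip-adapter_sd15.safetensors")  # placeholder file name
key = next(k for k in state if k.endswith("proj.weight"))  # key name varies
dim = state[key].shape[1]  # in_features of the projection layer
print(f"{key}: expects CLIP image embeddings of width {dim}")
# 1024 -> OpenCLIP ViT-H, 1280 -> ViT-bigG, 768 -> OpenAI ViT-L
```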

  • @sassy-penguin
    @sassy-penguin 6 months ago +1

    Quick note - I installed the IPAdapter models directly from Comfy, and it put them in the wrong folder. I found the folder and moved the contents over, then it worked.
    Overall - phenomenal work, I am running the flow right now. Does anyone know what CRF does? It doesn't seem to be affecting anything.
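On the CRF question: CRF is ffmpeg's Constant Rate Factor, the quality dial the Video Combine node passes to the h264 encoder; lower values mean higher quality and larger files, and it only applies when rendering to a compressed format (not image sequences or GIF), which is why it can seem to do nothing. A sketch of its effect outside ComfyUI, assuming ffmpeg on PATH and placeholder frame names:

```python
# Encode the same frames at two CRF values and compare file sizes.
# Assumes ffmpeg on PATH and frames named frame_0001.png, frame_0002.png, ...
import subprocess
from pathlib import Path

for crf in (18, 28):  # lower CRF = higher quality, larger file
    out = f"test_crf{crf}.mp4"
    subprocess.run(
        ["ffmpeg", "-y", "-framerate", "24", "-i", "frame_%04d.png",
         "-c:v", "libx264", "-pix_fmt", "yuv420p", "-crf", str(crf), out],
        check=True,
    )
    print(out, Path(out).stat().st_size, "bytes")
```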

  • @danielo9827
    @danielo9827 6 months ago +1

    I've found that when you're trying to replace a subject with a specific style (vid2vid), using the ip2p ControlNet helps with the style transfer, whether it comes from IPAdapter or a LoRA.
    I have a question about something I haven't tried yet: it would seem that you can use IPAdapter to influence an image's background. Would that be possible in vid2vid?

  • @user-sd1yy7rn7g
    @user-sd1yy7rn7g 5 months ago

    Is there a way to process 2-3 minute videos? Anything more than 150-200 frames crashes my ComfyUI. Is there a way to do it in batches, maybe? I'm already using a low aspect ratio and everything.

    • @enigmatic_e
      @enigmatic_e  5 months ago

      Have you tried using LCM?

    • @the_one_and_carpool
      @the_one_and_carpool 4 months ago

      Set the load image cap to 150 on the first run; on the second run set it to 300 and skip the first 150, then skip the first 300, and so on. Or break your images up into multiple folders and run each folder.
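The batching idea above, spelled out as a sketch; the numbers are illustrative, and the parameter names follow the VHS Load Video node (plug each window into the node's cap/skip settings by hand):

```python
# Process a long clip in fixed-size windows by increasing the number of
# skipped frames on each run while keeping the per-run frame cap constant.
TOTAL_FRAMES = 3600  # e.g. a 2-minute clip at 30 fps
BATCH = 150          # frame cap that fits in VRAM

for start in range(0, TOTAL_FRAMES, BATCH):
    print(f"run with skip_first_frames={start}, frame_load_cap={BATCH}")
```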

  • @Sergatx
    @Sergatx 6 months ago +1

    Chingón. I'm noticing more and more people diving into ComfyUI.

    • @enigmatic_e
      @enigmatic_e  6 months ago

      Yeah I feel it’s the best thing at the moment.

    • @maxpaynestory
      @maxpaynestory 6 months ago +1

      Only rich people with expensive GPUs are diving into it.

  • @themightyflog
    @themightyflog 4 months ago

    Part 2 please.

  • @MultiBraner
    @MultiBraner 6 months ago

    subscribed

  • @batuhansardas3651
    @batuhansardas3651 1 month ago +1

    Thanks for the tutorial, but how can I find "Apply IPAdapter"? I tried to load it but couldn't find it.

    • @What-If12
      @What-If12 20 days ago +1

      You can replace the "Apply IPAdapter" node with the "IPAdapter Advanced" node.

  • @Freezasama
    @Freezasama 6 months ago

    Which model is that in the Load CLIP model node? "model.safetensors"!? Where do you get it?

    • @enigmatic_e
      @enigmatic_e  6 months ago

      You could probably Google it. I think that's how I found a few, by searching "load clip models for comfyui" or something like that.

  • @Disco_Tek
    @Disco_Tek 6 months ago

    Anyone know a ControlNet to prevent color shift on something like clothing with vid2vid?

  • @williambal9392
    @williambal9392 3 months ago

    I have this error:
    Error occurred when executing IPAdapterApplyFaceID:
    InsightFace: No face detected.
    Any solutions please ? :)

  • @MisterCozyMelodies
    @MisterCozyMelodies 11 days ago

    There is a problem these days: when you update IPAdapter, this workflow doesn't work anymore. Do you know how to fix it, or where to get a new workflow with the updated IPAdapter?

    • @MisterCozyMelodies
      @MisterCozyMelodies 11 days ago

      Never mind!! I followed some of the comments here and found the answer. Works great. Nice tutorial, thanks a lot!

    • @enigmatic_e
      @enigmatic_e  10 days ago +1

      I’ll try to upload an updated version today

  • @kleber1983
    @kleber1983 6 months ago

    My AnimateDiff loader is not working; it doesn't recognize the mm-Stabilized_high.pth that is in the proper folder...

    • @enigmatic_e
      @enigmatic_e  6 months ago

      There might be another folder you have to put it in. Do you have more than one AnimateDiff folder in your custom_nodes folder?

  • @Gardener7
    @Gardener7 2 months ago

    Does AnimateDiff work with SDXL sizes?

    • @enigmatic_e
      @enigmatic_e  2 months ago +1

      It does, but SDXL doesn't give the best results at the moment.

  • @wagmi614
    @wagmi614 29 days ago

    Any new workflow?

  • @joselitogonzalezgeraldo3286
    @joselitogonzalezgeraldo3286 6 months ago

  • @theindiephotographs
    @theindiephotographs 6 months ago

    Best in Bizz

    • @enigmatic_e
      @enigmatic_e  6 months ago

      I appreciate that. 🙏🏽🙏🏽

  • @leo.leon__
    @leo.leon__ 5 months ago

    How was the 3D video you are uploading created?

  • @shyvanatop4777
    @shyvanatop4777 6 months ago +1

    I am so confused, I am getting this error: Error occurred when executing IPAdapterApply:
    Error(s) in loading state_dict for ImageProjModel:
    size mismatch for proj.weight: copying a param with shape torch.Size([3072, 1024]) from checkpoint, the shape in current model is torch.Size([3072, 768]). Any idea on how to fix this?

    • @enigmatic_e
      @enigmatic_e  6 months ago

      Hmm, are you missing the model for IPAdapter?

    • @shyvanatop4777
      @shyvanatop4777 6 months ago

      @enigmatic_e Had the wrong model! It's solved now, ty.

    • @RickyMarchant
      @RickyMarchant 6 months ago

      I have the same issue, and I have the model shown in the video. Do you think the CLIP model is causing this? I can't find that one, so I am using CLIP-G.

    • @risasgrabadas3663
      @risasgrabadas3663 6 months ago

      I have the same problem for both models, investigating...

    • @luciogiolli
      @luciogiolli 5 months ago

      same here

  • @macadonards1100
    @macadonards1100 3 months ago

    Will this work with 11 GB of VRAM?

  • @knicement
    @knicement 6 months ago

    What PC Specs do you use?

    • @enigmatic_e
      @enigmatic_e  6 months ago +1

      I have an RTX 4090. Sorry I will make sure to put that information on my videos from now on. Thank you!

    • @knicement
      @knicement 6 months ago

      @enigmatic_e thank you

  • @hartdr8074
    @hartdr8074 6 months ago +1

    Could you adjust your videos' volume to be even lower so that only ants can hear it? Thanks

    • @enigmatic_e
      @enigmatic_e  6 months ago

      I'm recording at a standard volume for video. I try to peak at -6 dB, averaging -12 dB.
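For reference, those targets are in dBFS (decibels relative to full scale, 20·log10 of the amplitude, so an amplitude of 0.5 is about -6 dB). A quick sketch measuring peak and RMS levels of a synthetic signal with NumPy:

```python
# Peak and RMS levels in dBFS for a float audio buffer in [-1, 1].
import numpy as np

t = np.linspace(0, 1, 48000, endpoint=False)
signal = 0.5 * np.sin(2 * np.pi * 440 * t)  # amplitude 0.5 ~= -6 dBFS peak

peak_db = 20 * np.log10(np.max(np.abs(signal)))
rms_db = 20 * np.log10(np.sqrt(np.mean(signal**2)))
print(f"peak {peak_db:.1f} dBFS, RMS {rms_db:.1f} dBFS")  # ~ -6.0 and -9.0
```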

  • @choboruin
    @choboruin 3 months ago

    Swear u gotta be a genius to understand this stuff lol

  • @aarvndh5419
    @aarvndh5419 6 months ago

    Can I do this in Stable Diffusion?

    • @enigmatic_e
      @enigmatic_e  6 months ago

      Do you mean in Automatic1111, the webui? If so, I would say it's more limited than ComfyUI. ComfyUI allows for way more customization.

    • @aarvndh5419
      @aarvndh5419 6 months ago

      @enigmatic_e Okay, I'll try ComfyUI.

  • @zorilov_ai
    @zorilov_ai 6 months ago +1

    nice, thanks.

  • @theairchitect
    @theairchitect 6 months ago +1

    as young people say... first! _o/
    😅

  • @attentiondeficitdisorder
    @attentiondeficitdisorder 6 months ago +4

    That UI isn't looking so comfy anymore. How the hell are people keeping track of all these nodes 0.0

    • @enigmatic_e
      @enigmatic_e  6 months ago +2

      😂😂 I guess it takes some getting used to.

    • @attentiondeficitdisorder
      @attentiondeficitdisorder 6 months ago

      Node editor spaghetti is my kryptonite; I commend anyone able to keep track. You also can't argue with the results. Probably the best consistency I have seen yet. Good stuff! @enigmatic_e

    • @AIPixelFusion
      @AIPixelFusion 6 months ago +1

      The workflow sharing and reuse is comfy AF tho!!

    • @sebaccimaster
      @sebaccimaster 6 months ago +2

      It's called continuous learning. If it sounds like hard work, that's because it is 😅…

    • @blender_wiki
      @blender_wiki 6 months ago

      It's not even that big a workflow 🤷🏿‍♀️

  • @portl3582
    @portl3582 2 months ago

    When it gets to ControlNet, it seems the DWPose Estimation node is not available. I also get this message:
    Error occurred when executing LeReS-DepthMapPreprocessor:
    LeresDetector.from_pretrained() missing 1 required positional argument: 'pretrained_model_or_path'
    File "C:\----COMFY-UI-APPS+FILES\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\----COMFY-UI-APPS+FILES\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\----COMFY-UI-APPS+FILES\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\----COMFY-UI-APPS+FILES\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux
    ode_wrappers\leres.py", line 21, in execute
    model = LeresDetector.from_pretrained().to(model_management.get_torch_device())
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  • @musyc1009
    @musyc1009 3 months ago

    Anyone got an error at the KSampler part?
    Error occurred when executing KSampler:
    Unknown context_schedule 'uniform'.
    File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI
    odes.py", line 1375, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI
    odes.py", line 1345, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 346, in motion_sample
    latents = wrap_function_to_inject_xformers_bug_info(orig_comfy_sample)(model, noise, *args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\utils_model.py", line 360, in wrapped_function
    return function_to_wrap(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 100, in sample
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 713, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 618, in sample
    samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 557, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 154, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch
    n\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch
    n\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 281, in forward
    out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch
    n\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch
    n\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 271, in forward
    return self.apply_model(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 268, in apply_model
    out = sampling_function(self.inner_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 385, in evolved_sampling_function
    cond_pred, uncond_pred = sliding_calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 461, in sliding_calc_cond_uncond_batch
    context_windows = get_context_windows(ADGS.params.full_length, ADGS.params.context_options)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\context.py", line 296, in get_context_windows
    raise ValueError(f"Unknown context_schedule '{opts.context_schedule}'.")

    • @tasticad58
      @tasticad58 3 months ago

      I've got the same error (both on macOS and Windows).
      Have you found out how to solve it by any chance?

    • @jonathanbeaton6984
      @jonathanbeaton6984 2 months ago

      Same here! Any luck figuring it out?

    • @enigmatic_e
      @enigmatic_e  2 months ago +1

      fixed it and updated link, check description

    • @musyc1009
      @musyc1009 2 months ago

      Thanks for fixing it bro! @enigmatic_e