STABLE VIDEO DIFFUSION | COMFYUI
- Added 24 Nov 2023
- Get 4 FREE MONTHS of NordVPN: nordvpn.com/enigmatic
Topaz Labs BLACK FRIDAY DEAL: topazlabs.com/ref/2377/
Stable Video Diffusion is finally compatible with ComfyUI
HOW TO SUPPORT MY CHANNEL
-Support me by joining my Patreon: / enigmatic_e
_________________________________________________________________________
SOCIAL MEDIA
-Join my discord: / discord
-Twitch: / 8bit_e
-Instagram: / enigmatic_e
-Tik Tok: / enigmatic_e
-Twitter: / 8bit_e
- Business Contact: esolomedia@gmail.com
________________________________________________________________________
My PC Specs
GPU: RTX 4090
CPU: 13th Gen Intel(R) Core(TM) i9-13900KF
MEMORY: CORSAIR VENGEANCE 64 GB
Stable Video Diffusion Models:
comfyanonymous.github.io/Comf...
Workflow:
mega.nz/file/aQAzkA4L#n43K8o6...
Water Lora by joachim:
civitai.com/models/210754/aet...
If you’re new to ComfyUI, watch my beginner's tutorial here: czcams.com/video/WHxIrY2wLQE/video.htmlsi=HV61VB9nt4wxn18L
👏MASSIVE thanks. Soooooooo glad I found your channel! You've got a new sub.👊
Yooo! Thank you!! 🙏🏽 🙏🏽
Super cool. Can't wait to see what this does in a month. Longer videos would be amazing.
Exciting times!
Omg the advancements in video/animation are crazy lately 🤯. I think I still like having control over the animation with motion LoRAs more, but hey, this setup is so easy. I can also imagine it being used in conjunction with other methods.
Its insane how fast we're moving with this! I can't wait till we get even more control.
this thanksgiving I'm grateful for enigmatic_e's tutorials! 🙌🙌
shouts outs to Kijai as well!
🎉
Thanks. Just starting to fiddle around with this.
I didn't know this was a thing. Thank you!
I'm getting boring animations! It always happens to me when I use AnimateDiff with Comfy or Automatic... but I LOVE CLI with AnimateDiff! I'll keep at this and keep tweaking my settings! THANK YOU FOR YOUR HELP!!!
Wow, this is a game changer my brother! Is there a frame cap similar to AnimateDiff? Thank you so much for putting this awesome tutorial out there!
You can add frames, but I think the quality degrades after a while.
It’s insane how much progress ai art has made in the last 6 months alone…
facts
Great video as always! Keep it up!
Thanks 🙏🏽
So great that you set this up. Thank you so much! It's working great, but I do see this error after the job runs: "Exception in callback _ProactorBasePipeTransport._call_connection_lost" at asyncio\events.py, line 80, in _run... and then "an existing connection was forcibly closed by the remote host"... I think there are processes hanging around after the run that need to be cleaned up.
I have a problem finding where and how to install the RIFE VFI node. Could you tell us where we can find it? Otherwise thanks for the video!
this is beautiful
Wow, looks great man. Thx!
No problem!
Interesting. Curious to see if you can combine this with text prompt conditioning to guide the video output. I'll certainly be doing some mad science experiments. Thanks!
Remove the if. Of course we can do that already, we always could. Prompts influencing the generation of pixels is the whole purpose of Stable Diffusion...
Thanks, haha! Yeah, I've gotten there. Love the freedom & flexibility ComfyUI offers.
You can connect the output from the VAE decoder to SVD and do prompt-to-video.
Have you gotten good results?
sweet, thanx for sharing :-) Cool stuff!!!
I am getting a lot of help from your video. I have one question. Among the RIFE VFI ckpts, which repo contains files such as rife40.pth, ..., sudo_rife4_269.662_testV1_scale1.pth? No matter how much I searched, I couldn't find it. I'm looking forward to your smart answer to my stupid question.
Strangely, it never works for me: conflicts, unable to install the missing nodes, unable to install missing libraries, etc.
Do you know which nodes are missing?
thanks for sharing, man! 😘
🙏🏽🙏🏽🙏🏽
Thanks for the workflow! Can you tell me where the VHS node sends your exports? Can't figure it out.
It saves here: ComfyUI_windows_portable\ComfyUI\output
Always getting:
Error occurred when executing KSampler:
Conv3D is not supported on MPS
on M2 :/
Thank you. Your videos are really great and helpful. But is it possible to do something similar if we have an AMD card?
Hello!, unfortunately I don't know. I don't own an AMD card to test this. Technically you could run ComfyUI through CPU but its very slow.
@@enigmatic_e thank you. It always scared me. That's why I haven't tried it yet; I just watch videos about it.
Really cool, Thanks
I followed your steps to install Comfy, but whenever I run it, it says it's requesting to download models and I have to wait for a while. How can I solve that?
Thanks
Is it possible to add controlnets like openpose then have this animate an image using the controlnet information?
Not at the moment, at least not anything that looks good.
how do you get smoother natural skin?
Great tutorial as always! Can we increase the output video duration?
You can but it starts to degrade over time.
@@enigmatic_e Yes I have realized that. Hopefully there will be a workaround like there was for Animatediff with prompt travel + IP Adapter. Do keep us updated
@@tdfilmstudioyea I’m sure we will get new tools soon! Can’t wait!!
Need some help. I'm getting "Error occurred when executing KSampler: input must be 4-dimensional" when trying to run the Stable Video Diffusion animation in ComfyUI. I have an AMD 7900 XT.
What is the specific file to download from the huggingface site?
It seems not to work on Mac. I have ComfyUI and all custom nodes installed, but I still get tons of errors when the calculation reaches KSampler.
Is it able to work with any image or does it need to be a stable diffusion generated image?
No, it can be any image, but I've noticed that some images work better than others.
so sick
My GTX 1080 does not like this. Very, very cool though! Maybe when Nvidia GPUs don't cost the same as a half-decent used car, I can get in on it too. I actually just built a new PC days ago, but there wasn't enough money for a new GPU.
Did you try it with both models?
I've only tried the XT one (25fps).
It does actually work, it's just slow :) Using an external model, "BB95", and getting slightly weird results, but it's not like AnimateDiff, which just generated noise. This actually works! Can't find the saved clips though. Thanks! @@enigmatic_e
When I import your workflow I get: "When loading the graph, the following node types were not found: RIFE VFI, Seed (rgthree). Nodes that have failed to load will show as red on the graph." The Seed and RIFE VFI panels are errored out. Any advice?
Never mind, the fix for this is installing the missing nodes ^
Glad you found a solution!
is it possible for batch process?
Where is it saved though? Just cannot find the file!
Can confirm it works on a GTX 1080 with 8 GB VRAM. But it takes a while. I've successfully generated 25 frames of 1024x768 video.
Where is what saved? The video?
the best maan
🔥 🔥 🔥
Is there a way to extend a video after it has been created?
Do you mean like add frames to allow slow motion?
@@enigmatic_e good idea
cheers mate, hitting that ai spot once again! ;)
😎👍🏽
I get this error in the RIFE node: "Prompt outputs failed validation
RIFE VFI:
- Value not in list: ckpt_name: 'sudo_rife4_269.662_testV1_scale1.pth' not in ['rife40.pth', 'rife41.pth', 'rife42.pth', 'rife43.pth', 'rife44.pth', 'rife45.pth', 'rife46.pth', 'rife47.pth', 'rife48.pth', 'rife49.pth']"
Click refresh and pick the model again.
@@vlada9740 Please elaborate. Same error. No missing custom nodes. Refreshed. Model picked again and server restarted. Thanks.
nice 😍
How many frames can I generate? I mean, can I create a video 2 to 4 minutes long?
Not really. I think some people find workarounds, like taking the last frame of a 25-frame video, rerunning it, and then editing the clips together.
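The workaround above can be sketched as a tiny helper. This is just an illustration, assuming each run is exported as an ordered list of frames and each new run is seeded from the previous run's last frame (`stitch_runs` and the frame labels are hypothetical names, not part of any ComfyUI node):

```python
from typing import List


def stitch_runs(runs: List[list]) -> list:
    """Concatenate several short SVD runs into one longer clip.

    Assumes run n+1 was seeded from the LAST frame of run n, so that
    frame appears twice at each seam; the duplicate is dropped.
    """
    if not runs:
        return []
    out = list(runs[0])
    for run in runs[1:]:
        out.extend(run[1:])  # skip the duplicated seam frame
    return out


# Illustrative usage with frame labels instead of real images:
clip = stitch_runs([["f1", "f2", "f3"], ["f3", "f4", "f5"]])
# → ["f1", "f2", "f3", "f4", "f5"]
```

Keep in mind the quality drift mentioned earlier: each reseeded run inherits any artifacts from the previous run's last frame, so the clip still degrades over long chains.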
Where do I install the sudo_rife4_269.662_testV1_scale1.pth in ComfyUI ??
Bumping this - running into the same issue @enigmatic_e
Hi, how can I install Automatic1111 on ComfyUI?
Where do we have to put the SVD files?
In your checkpoint folder
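For reference, the reply above refers to the stock ComfyUI layout. In a default portable install the SVD checkpoints end up here (base folder names may differ on your machine):

```text
ComfyUI_windows_portable/
└── ComfyUI/
    └── models/
        └── checkpoints/
            ├── svd.safetensors
            └── svd_xt.safetensors
```

After copying the files, hit Refresh in ComfyUI so the checkpoint loader's dropdown picks them up.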
On a 4090, how long for each video?
It takes about 1 minute or so for a 24 frame video.
What is the minimum requirement of VRAM?
I’ve heard some say 10-12 GB of VRAM works, but I haven’t tested that.
At least with my 3060 12 GB, it runs at around 8 GB of VRAM.
Excellent, now it's time to throw L2d into the trashcan.
Error occurred when executing SVD_img2vid_Conditioning:
'NoneType' object has no attribute 'encode_image'
File "C:\comf 2\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\comf 2\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\comf 2\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\comf 2\ComfyUI_windows_portable\ComfyUI\comfy_extras\nodes_video_model.py", line 45, in encode
output = clip_vision.encode_image(init_image)
It won't let me generate. I updated and restarted, downloaded the models, and put them into the checkpoints folder in the models folder.
Do you think you can help me out further?
running the run_gpu_updates.bat fixed everything for me, i was having same issue, idk if its same for you
@@Paracast where is this found?
What folder is that in?
@@Paracast
I loaded everything and clicked Update All, but I don't get the new video nodes :(
Did you restart everything?
@@enigmatic_e yes, restarted everything
@@Zippo08 Then I would just check whether downloading the missing nodes through the Manager works.
running the run_gpu_updates.bat fixed everything for me, i was having same issue, idk if its same for you@@Zippo08
@@enigmatic_e Hmm, it says all updated ;( But thank you anyway.
Can't get it to work. Keep getting errors :(
At what point does the process stop? Which nodes have a red outline?
Can't figure out why, but my video is oversaturated every time...
How do I get VHS?
I mean VHS_VideoCombine? I could not find it in the Manager.
Sometimes I lose track of where I get the nodes from; either the Manager, or I just google it and install it into the custom nodes folder.
Comfy Ui? nope. I'll wait for Auto 1111
Comfy is not so bad 😂
@@enigmatic_e I just don't enjoy having to create a whole workflow for something Auto1111 does with a single switch; it's slower than Auto1111 in most cases too.
Bro, you need to go into better detail about how to install the required nodes. I literally just installed basic ComfyUI; not everyone has the same nodes as you do.
Sorry about that. Are you new to ComfyUI? If so, I just pinned a comment with a link to my beginner's tutorial. You need to install the Manager, and there's an option to install missing nodes automatically.
Thx for the awesome Video! I made a shoutout to you in my video. Hope you get a bunch of additional subs from this :)
Hey Olivio! Big fan of your channel! Thank you for the shout out! 🙏🏽🙏🏽🙏🏽
@@enigmatic_e 🥰
I'm getting the following errors and don't know where to start:
Prompt outputs failed validation
ImageOnlyCheckpointLoader:
- Value not in list: ckpt_name: 'svd_xt_image_decoder.safetensors' not in ['shendan v2.safetensors', 'svd.safetensors', 'svd_xt.safetensors', 'v1-5-pruned-emaonly.safetensors']
RIFE VFI:
- Value not in list: ckpt_name: 'sudo_rife4_269.662_testV1_scale1.pth' not in ['rife40.pth', 'rife41.pth', 'rife42.pth', 'rife43.pth', 'rife44.pth', 'rife45.pth', 'rife46.pth', 'rife47.pth', 'rife48.pth', 'rife49.pth']