Stable Diffusion ComfyUI And Diffutoon Create AI Videos - Domo AI Alternative?
- Published 25. 06. 2024
- Stable Diffusion Animation - ComfyUI And Diffutoon Create Deflickering Videos - Domo AI Alternative?
We're diving into the world of Stable Diffusion's animation and exploring a fascinating new project called Diffutoon. This project takes video-to-video transformations to a whole new level by turning dance videos into anime or cartoon-style videos.
Related Video:
Stable Diffusion Video To Anime Style • How To Make Stable Dif...
Resources :
For Patreon Supporters: / 106877649
Diffutoon Page: ecnu-cilab.github.io/Diffutoo...
Github: github.com/modelscope/DiffSyn...
For ComfyUI: github.com/AInseven/ComfyUI-f...
Throughout this video, I'll explain how Diffutoon functions as a pipeline, linking up various diffusion models like ControlNet and the AnimateDiff motion model. We'll also discuss the integration of ComfyUI extensions, which allow for smooth video generation and restyling of images.
I'll guide you through the process of setting up and running Diffutoon in Google Colab, highlighting the required diffusion models, line art, depth maps, and soft edges. We'll also explore the benefits of using ComfyUI's custom nodes like "smooth videos" and "My OpenPose" for creating stunning animations with low VRAM consumption.
Moreover, I'll showcase the DiffSynth pipeline, which transforms original videos into alternative styles using diffusion models. We'll delve into the concept of image-to-image techniques and how they can be utilized in the Diffutoon project.
To provide you with a comprehensive understanding, I'll break down the workflow steps and demonstrate the differences between using the smooth video JSON file and the smooth video with batch size JSON file. We'll also touch upon the significance of ControlNet and KSampler in achieving desired results.
So, if you're interested in exploring the world of video-to-video transformations and creating captivating anime or cartoon-style videos, this video is a must-watch for you! Don't forget to hit that subscribe button and turn on the notification bell, so you never miss an update on all things AI-related.
If you like tutorials like this, you can support our work on Patreon:
/ aifuturetech
Discord : / discord - Science & Technology
Imma try it tomorrow. Cheers
I get an error at realistic line art for some reason? Do you know how to resolve this? Thanks
Looks fun Diffutoon, will try it
yup it is. :)
Thank you! Inspiring.
Welcome
Cool! Tiktok trending video style.
Yes, and I know agencies are doing it on TikTok.
This is incredible! What is the input-video resolution size limit and length? Is there any point in capturing my videos in 4K?
I don't think you can generate 4K; it's SD 1.5 he's using. You can upscale your video after processing.
@@TheEconoVision Ok, so down-res my 4K footage to 768x512 before running it in ComfyUI. Is there a limit on the length of the footage?
@@SFzip Actually, you don't need to down-res your video file.
In ComfyUI, use Load Video and connect the images to an Image Resize node, so you can set the dimensions. Way faster.
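The resize-inside-the-graph idea above can be sketched outside ComfyUI too. This is a minimal illustration (hypothetical helper name, using Pillow) of scaling each decoded frame down to the SD 1.5-friendly 768x512 instead of re-encoding the 4K source file:

```python
from PIL import Image

def resize_frame(frame: Image.Image, width: int = 768, height: int = 512) -> Image.Image:
    """Scale one decoded video frame, like an Image Resize node placed
    between Load Video and the sampler. LANCZOS keeps downscaled detail."""
    return frame.resize((width, height), Image.LANCZOS)

# Example: a 4K frame becomes a 768x512 frame ready for an SD 1.5 pipeline.
frame_4k = Image.new("RGB", (3840, 2160))
frame_sd = resize_frame(frame_4k)
```

Resizing per frame this way avoids a separate transcode pass on the source footage, which is why the reply calls it faster.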
This is wonderful. I assume this is not SDXL ready yet, right?
02:10 - they have SDXL listed. 😉 Img2img in ComfyUI and the Diffutoon script, good to go.
@@TheFutureThinker really!!!! I will check it out!!! thank you!!!!
This is fun, one script can do anime video. I will try it.
Yup 👍
Cool stuff. Would have been good for dance videos but sadly will get a copyright strike on YouTube.
That's why it's better to change the character and background, and only use the movement.
Fantastic work, like always
✌️✌️
thanks! where was the comfyui node/workflow?
In the example folder
@@TheFutureThinker ok, is it paywalled?
@@digitalflick In the GitHub repo, for freebie geeks.
I watched the video and tried to follow along.
I saw this error message on the video combine node side.
Error occurred when executing VHS_VideoCombine:
Cannot handle this data type: (1, 1, 512, 3), |u1
The only settings I changed were the checkpoint (to an SD 1.5 version I have) and the size, to roughly match my sample video.
The rest of the settings were the same, but the error occurred.
Try changing the size back to what it was in the original example file.
How can I fix this error?
Fortunately, after switching back to the example file's settings, it worked.
Great, you can do troubleshooting 👍
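For anyone hitting the same VHS_VideoCombine error: the `(1, 1, 512, 3), |u1` in the message is the shape/dtype key Pillow builds when `Image.fromarray` receives an array it can't map to an image mode; here it points at a 4-D uint8 batch reaching code that expects a single (height, width, 3) frame. A minimal reproduction under that assumption (plain NumPy + Pillow; the node's internals may differ):

```python
import numpy as np
from PIL import Image

# A batch of frames: (batch, height, width, channels), dtype uint8 ("|u1").
# With height 1 and width 512, Pillow's type key becomes (1, 1, 512, 3).
batch = np.zeros((2, 1, 512, 3), dtype=np.uint8)

try:
    Image.fromarray(batch)       # 4-D array: no image mode matches this shape
except TypeError as e:
    print(e)                     # Cannot handle this data type: (1, 1, 512, 3), |u1

# Indexing out a single (height, width, 3) frame is a shape Pillow accepts.
frame = Image.fromarray(batch[0])
```

This is why fixing the size settings resolved it: with sane dimensions, each item handed to the combine step is a normal RGB frame again.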
You'll get a blurry result if the stylized image's pattern (edges, contrast) differs from the original image, due to the blending algorithm; extra steps are needed to deblur.
Good test👍
@@huichan5140 thanks for letting me know
Is it possible to make it up to 30 seconds?
Depends how much VRAM you have. Usually, we can set a frame cap, then skip the already generated frames to continue generating the rest.
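The frame-cap-then-skip approach in the reply above can be sketched as a simple chunking helper (hypothetical names, plain Python): each pass generates at most `cap` frames, and the next pass skips everything already done.

```python
def frame_chunks(total_frames: int, cap: int):
    """Yield (start, end) frame ranges so a long clip is generated
    cap frames at a time; `start` is how many frames to skip."""
    for start in range(0, total_frames, cap):
        yield start, min(start + cap, total_frames)

# Example: a 30-second clip at 30 fps (900 frames) with a 60-frame cap
# splits into 15 passes that fit in limited VRAM.
passes = list(frame_chunks(900, 60))
```

The trade-off is that consistency across chunk boundaries depends on the pipeline; the cap only bounds how many frames are in memory at once.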
Does Google Colab allow usage of Stable Diffusion notebooks?
Why not? It's only limited VRAM on the free account, and the Gradio library for the WebUI public link.
@@TheFutureThinker what do u mean?
Will a GTX 1650 Super work? 😢😢
2 hours and the smooth video node has not moved an inch, with vid2vid, a 60-frame cap, LCM checkpoint and LoRA; tried again and again.
Recent ComfyUI needs an update; I experienced that 2 days ago.
Tried Diffutoon, it is not that good and very slow.
You mean the Python code itself, Comfy or the Colab Notebook?
Can 8gb vram handle this😅
I have a 4090 24GB, it doesn't handle it properly.
That‘s a waste for the 4090 😅
@@TheFutureThinker 😂😂
@@TheFutureThinker What do you mean? I tried to use it; that plugin uses all my VRAM and is very slow!!
I am not sure... From my 4090 setup it works; evidence shown in the video. 😉