Jerry Davos AI
India
Joined 11 Sep 2023
AI Animation, Tutorials, ComfyUI, AnimateDiff, Stable Diffusion
● AI Generated Art | Tutorials
● Portraits | Concepts | Fun Stuff | Animation
● Stable Diffusion | Automatic 1111 | ComfyUI
I'm Jerry Davos, 22 years old,
● 2D and 3D Artist
My Discord Server : discord.gg/z9rgJyfPWJ
#ai #aiart #aiartwork #aiartworks #artificialintelligence #stablediffusion #midjourney #fantasyartwork #aiartistry #aiartists #airartist #portraits #portraitpainting
IC Light Changer For Videos With AnimateDiff and ComfyUI
8,542 views
Tutorial - AnimateDiff Animation v5.0 [ComfyUI]
9K views · 2 months ago
AnimateDiff Legacy Animation v5.0 [ComfyUI]
6K views · 2 months ago
Yad (Sped up) - Looping AI Animation [4K] - FanArt - ComfyUI AnimateDiff
1.4K views · 3 months ago
How to AI Upscale Video using Stable Diffusion inside ComfyUI
6K views · 4 months ago
AnimateDiff ControlNet Animation v2.1 [ComfyUI]
62K views · 7 months ago
[Part 2] Tips and Tricks - AnimateDiff ControlNet Animation in ComfyUI
7K views · 8 months ago
AnimateDiff ControlNet Animation v1.0 [ComfyUI]
188K views · 9 months ago
PROMPT MISTAKES that DESTROY YOUR GROWTH !!
2.3K views · 10 months ago
🥰❤️🥰❤️
omg it works! Such a complex process, but very well organized, and it actually works! Thank you!
Great to hear! <3
Detailed tutorial please
Tutorial please
Help!!! I'm getting an "Exception during processing!!! Allocation on device" error message.
💗💗💗🌷🌷✨✨✨🖐️🖐️🖐️🖐️
Need more interaction between the blending layers at the feet.
Yes, I am trying to improve it. Thank you <3
It's so cool. However, IC Raw Ksampler is experiencing an error. "KSamplerAdvanced: The size of tensor a (20) must match the size of tensor b (10) at non-singleton dimension 0" How can I solve it?
The light map should have the same number of frames as the source video, or more.
Example 1: Source video = 5 seconds, light map video = 1 second. Result: Error - "The size of tensor a (20) must match the size of tensor b (10) at non-singleton dimension 0".
Example 2: Source video = 5 seconds, light map video = 5 seconds. Result: successful render.
Hope this makes it clear.
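The frame-count rule in that reply can be illustrated with a minimal Python sketch. This is a toy stand-in for the real batched tensor math, not ComfyUI's actual code; the counts of 20 and 10 frames are the hypothetical values from the example above.

```python
def blend(src_frames, light_frames):
    # Toy stand-in for the batched multiply inside the KSampler:
    # every source frame needs a matching light-map frame on dimension 0.
    if len(src_frames) != len(light_frames):
        raise ValueError(
            f"The size of tensor a ({len(src_frames)}) must match "
            f"the size of tensor b ({len(light_frames)}) at non-singleton dimension 0"
        )
    return list(zip(src_frames, light_frames))

blend(range(20), range(20))      # 5 s source + 5 s light map: renders fine
try:
    blend(range(20), range(10))  # 1 s light map is too short
except ValueError as e:
    print(e)                     # reproduces the error message from the comment
```

Extending or looping the light-map video so it covers at least the source's frame count avoids the mismatch.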
@@jerrydavos I am using the source files you provided, helenpeng.mp4 and LightMap.mp4. Both are 20 seconds long. Do I need to set frame_load_cap to zero?
How decent ❤
Nice ❤🎉 😊
❤❤❤❤❤❤❤❤
Green
Thanks bro, it actually worked out. Can I ask why you are using the 4xFaceUpDAT model? I tried another model, 4xUltraSharp, and it creates strange patterns on clothes and shows inconsistency across video frames.
I downloaded your "Version 4.0 - May 24" files. I took "1_0) ControlNet_Passes_Export_v4.4.json" and dragged it into Comfy, and it immediately says "Error: Set node input undefined. Most likely you're missing custom nodes". I press OK, but it won't let me get past that; it just keeps repeating the error and won't let me reach the Manager to even install the missing nodes.
Yes, it's a sort of bug; I also face it sometimes. What you can do is first reload the page, then spam the OK button of the error dialog until it disappears. Reload the page and spam again; usually it works after 2-3 tries.
Otherwise, you can install the nodes manually from here:
1. github.com/daxcay/ComfyUI-JDCN.git
2. github.com/bronkula/comfyui-fitsize.git
3. github.com/ltdrdata/ComfyUI-Impact-Pack.git
4. github.com/kijai/ComfyUI-KJNodes.git
5. github.com/mcmonkeyprojects/sd-dynamic-thresholding.git
6. github.com/cubiq/ComfyUI_essentials.git
7. github.com/giriss/comfy-image-saver.git
8. github.com/M1kep/ComfyLiterals.git
9. github.com/theUpsider/ComfyUI-Logic.git
10. github.com/Kosinkadink/ComfyUI-VideoHelperSuite.git
11. github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved.git
12. github.com/Kosinkadink/ComfyUI-Advanced-ControlNet.git
13. github.com/shiimizu/ComfyUI_smZNodes.git
14. github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes.git
15. github.com/Nourepide/ComfyUI-Allor.git
16. github.com/WASasquatch/was-node-suite-comfyui.git
17. github.com/Fannovel16/ComfyUI-Video-Matting.git
18. github.com/Fannovel16/comfyui_controlnet_aux.git
19. github.com/comfyanonymous/ComfyUI_experiments.git
The music is so distracting!
why devil horns in dance?
excellent works
Thank you!
front and back flips randomly ;(
is dancing noodle video made using comfyui?
Yes, OpenPose is the name of the Colorful Noodles
@@jerrydavos Thanks for your response. Have you created any video tutorial on that?
@@MOTIvaTIoN-wh8qy First 2.5 mins of the above video is the steps on how to extract OpenPose.
Does it have a frame cap?
Yes, it is capped to 12 frames by default, so users don't mistakenly render all frames and freeze their computer.
@@jerrydavos I guess it still isn't possible to do a remastered 1-2 hour full-length video with one click? I've been trying to find such a workflow for a while.
@@kilikilio5321 Unfortunately no, the consumer GPUs we are using are not yet capable of holding 1-2 hours of data in RAM and running it in just one click. It is still possible, but very tedious, with the above workflow: use the Vid2Img or Img2Img workflow, render in short batches, and combine all the frames when done. But it's a huge headache for a 1-2 hour video. You could look into Topaz Video AI Enhancer in that case; maybe it can do long renders.
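The short-batch approach described in that reply can be sketched as a small helper. The batch size of 24 frames is a hypothetical value; in practice each (start, end) range would be fed to the workflow as one render.

```python
def render_batches(n_frames, batch_size):
    """Yield (start, end) frame ranges for rendering a long video in chunks."""
    for start in range(0, n_frames, batch_size):
        yield (start, min(start + batch_size, n_frames))

# e.g. a 100-frame clip rendered 24 frames at a time:
print(list(render_batches(100, 24)))
# [(0, 24), (24, 48), (48, 72), (72, 96), (96, 100)]
```

Each chunk stays small enough for consumer VRAM; the rendered frame sequences are then concatenated in order to rebuild the full video.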
Dude, how do you keep the background so stable???
I used my Background changer workflow and added a snow landscape image in the background. Workflow Reference: www.patreon.com/posts/v4-0-background-104651162
How much space will it consume after adding all the elements like the AnimateDiff models and ControlNet models?
Around 8 GB of GPU VRAM minimum is needed for 50-60 frames. The disk space required is about the same, 6-8 GB.
Wow
Good result 😍
This is unbearable to watch
I want to ask: where are the light-source materials from?
They can be made using simple shapes animated in After Effects. Or you can search for "contrasting" geometric pattern animation videos on stock websites like Shutterstock, Getty Images, Pexels, Pixabay, etc. I've also included some sample light maps in the workflow link folder here: drive.google.com/drive/folders/1bFfBs8mkN1HLtT1Xy6wsuOV4jl2WqiO4
Why don't you consider completely changing the clothing style and the background? It's similar to the original video, but it's still a cool video though. 👍
Yes, I am experimenting with changing clothes while still preserving the motion of the character. Thanks!
I can't do 5 or 10 second videos; it only allows under 1 second. Why?
Set the frame_load_cap from 10 to 0 in the load source video node to render all frames.
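The frame_load_cap behaviour mentioned in these replies can be sketched in a few lines of Python. This is my reading of the setting's semantics, not the node's actual source; the frame counts are hypothetical.

```python
def frames_to_load(total_frames, frame_load_cap):
    # frame_load_cap == 0 means "no cap": load every frame of the source video.
    # Any positive value limits the load to that many frames.
    if frame_load_cap == 0:
        return total_frames
    return min(total_frames, frame_load_cap)

print(frames_to_load(150, 10))  # default-style cap: only 10 frames load
print(frames_to_load(150, 0))   # cap removed: all 150 frames load
```

The low default cap exists so that a test run doesn't accidentally queue an entire long video and exhaust memory.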
💞🌹💗
💗🌹✨️✋️
thanks, helpful for beginners
A set of movements
The horror-show undead/ghost head turn at 0:16 😱
😂😂
Clearly a different checkpoint needs to be found ))
SD1.5? 👍🏼
Yes, it's made in ComfyUI
@@jerrydavos I have your workflow, nice! Good job 👍🏼 Coming soon with SDXL?
Hello!! Does anyone know what this error means? Thanks!!!!
Prompt outputs failed validation:
Failed to convert an input value to a INT value: quality, false, invalid literal for int() with base 10: 'false' (repeated 7 times)
Image Save: - Failed to convert an input value to a INT value: quality, false, invalid literal for int() with base 10: 'false' (repeated 7 times)
Yes, please recreate the Save Image node at the end and connect the lines as before.
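For context, the failing conversion in that validation log can be reproduced in plain Python: the node's "quality" widget is expected to hold an integer, but the saved workflow carries the string 'false' instead (recreating the node resets the widget to a valid value).

```python
# A Save Image node's "quality" widget should hold an int (e.g. 100),
# but the corrupted workflow serialized the string 'false' instead.
try:
    int("false")
except ValueError as e:
    print(e)       # invalid literal for int() with base 10: 'false'

print(int("100"))  # a sane widget value converts cleanly
```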
Morning warm-up before heading to work
❤😂❤❤❤❤❤❤❤❤❤❤❤❤❤❤❤
"Rebatch" doesn't work when loading long videos: "Load Video VHS" still loads all frames into RAM and then runs out of memory. I have tried "Meta Batch Manager" with "Load Video VHS" and "Video Combine VHS", which only generated discontinuous scenes. By the way, I have 32 GB of RAM, which can only load 20-24 frames to process. I'm still figuring out how to generate long videos.
Hey, you have to follow the video from 7:43 to extract frames. If you are still facing RAM issues while extracting the passes, you can use the passes exporter workflow from here: drive.google.com/drive/folders/1hLU5MhikUe6SnEnEPQc3tKTaNGmFT6p2 and how it works is explained here: www.patreon.com/posts/v4-0-controlnet-98846295 Extract the passes you need for the IC Light batch workflow (depth, mask, and frames), then follow the video as normal from 11:00.
❤❤❤❤😊💝
😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊😊
This is a life-saving tutorial, thanks man!!! May I know whether it is possible to change OpenPose to the LineArt ControlNet, and how can I do it? Thanks a lot.
You can ignore the "OpenPose Groups" and:
1) Connect the LineArt preprocessor with the images you want as inputs (with a Load Image node or similar) to the ControlNet node's "image" input.
2) Change the Load ControlNet Model to LineArt as well.
Just play around with the strength and end percent until you get your results.
Real team comment, Anime team like
YAU
The background changes too much even when it's off. I am not using a girl but a tennis shoe (I bypassed the face-fix nodes); could that be the reason?
FaceFix doesn't change the scene much. You can try switching the "Depth" ControlNet model and its processing node to the LineArt ControlNet model and LineArt preprocessor, and play with the strength and end percent; maybe that helps your situation.
show
The Manager button is not shown for me; how do I install the missing nodes?
Hey, sorry if I left out the Manager. Download it from here: github.com/ltdrdata/ComfyUI-Manager and put it in ComfyUI > custom_nodes
@@jerrydavos It works, thanks!
I don't get why the file output node has a # symbol. Can I change it to a normal save path?
Yes, you can. Just copy and paste your folder path where you want to save the video or the images.