CG Pixel
  • 163
  • 163 437
Comfyui Tutorial: Tile Controlnet For Image Upscaling #comfyui #comfyuitutorial #controlnettile
In this tutorial I'm going to show you how to use Tile ControlNet for upscaling your images and obtain good, consistent results at 4K resolution. #comfyui #comfyuitutorial #controlnet #sdxllightning #controlnetunion
Chapters
00:00 Intro
00:37 Installation Part
02:00 Workflow Overview
05:25 Workflow Test
12:00 Fixing tile issue
12:33 Outro
My Upwork Profile
www.upwork.com/freelancers/~01047908de30a2c349
Reddit Profile
www.reddit.com/user/cgpixel23
1-Workflow
openart.ai/workflows/0KJbMVq0N7dMaCBB19Yg
2-ControlNet Tile:
huggingface.co/xinsir/controlnet-tile-sdxl-1.0
3-Ultrasharp model
huggingface.co/stabilityai/stable-diffusion-x4-upscaler/tree/main
4-AI Video tutorial playlist
czcams.com/video/hr77a6otZ_0/video.html
Views: 530
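
For anyone who wants the same idea outside the ComfyUI node graph, below is a minimal diffusers sketch of tile-ControlNet upscaling. It assumes the xinsir/controlnet-tile-sdxl-1.0 checkpoint linked above; the base model, prompt, and strength values are illustrative guesses, not the exact settings used in the video.

```python
# Hedged sketch of tile-controlnet img2img upscaling with diffusers.
# The controlnet checkpoint is the one linked above; everything else is an assumption.
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetImg2ImgPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "xinsir/controlnet-tile-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Pre-upscale with a pixel upscaler (e.g. 4x-UltraSharp), then refine.
image = load_image("input.png").resize((2048, 2048))
result = pipe(
    prompt="high quality, highly detailed",
    image=image,          # img2img source
    control_image=image,  # the tile controlnet conditions on the image itself
    strength=0.35,        # low denoise preserves the original composition
    controlnet_conditioning_scale=1.0,
    num_inference_steps=20,
).images[0]
result.save("upscaled_4k.png")
```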

Video

Comfyui Tutorial: SDXL Controlnet Union All In One #comfyui #comfyuitutorial #controlnet
2.8K views • 14 days ago
In this tutorial I'm going to show you how to use the new version of ControlNet Union for SDXL and also how to change the style of an image using the IPAdapter. #comfyui #comfyuitutorial #controlnet #sdxllightning #controlnetunion Chapters 00:00 Intro 00:27 Installation Part 01:42 Workflow Presentation 03:17 Workflow Test 06:05 Controlnet Union & IPadapter 07:15 Conclusion & Outro My Upwork Prof...
ComfyUI Tutorial: Depth Anything V2 & Controlnet #comfyui #comfyuitutorial #controlnet
2.9K views • a month ago
In this tutorial I'm going to show you how to change the style of an image using the new version of Depth Anything & ControlNet; this is a simple workflow which allows you to keep the consistency of the image. #comfyui #comfyuitutorial #controlnet #sdxllightning #depthanything Chapters 00:00 Intro 00:28 Installation Part 02:43 Workflow Presentation 04:41 Depth Anything Comparison 06:13 Img to Img u...
Comfyui Tutorial : Morphing Animation Using QRControlnet #comfyui #controlnet #comfyuitutorial
946 views • a month ago
In this tutorial I'm going to teach you how to create a morphing animation using AnimateDiff, ControlNet and IPAdapter on LOW VRAM graphics cards. We will also see how to upscale our video using ComfyUI and TopazAI. #comfyui #comfyuitutorial #stablediffusion #topaz Chapters 00:00 Intro 00:43 Workflow Overview 04:19 Installation Part 09:10 Upscale using ComfyUI 10:16 Upscale using TopazAI My Upwork...
ComfyUI Tutorial: Exploring Stable Diffusion 3 #comfyui #comfyuitutorial #stablediffusion3
634 views • a month ago
In this tutorial we will test out the new Stable Diffusion 3 (SD3) Medium model; you will learn how to install it locally and we will test it with some prompts to see its image generation efficiency. Chapters 00:00 Intro 00:47 Installation part 02:19 Workflow Overview 03:21 SD3 Test with my prompt 11:12 Stability AI prompt Test 14:12 Conclusion & Outro My Upwork Profile www.upwork.com/freelancers/~...
Comfyui Tutorial : Style Transfer using IPadapter #comfyui #comfyuitutorial #ipadapter #controlnet
1.6K views • a month ago
In this tutorial you are going to learn how to transfer the style of an image using ControlNet and IPAdapter while keeping the object details, which is very useful for architectural designers. #comfyui #comfyuitutorial #comfyui #painting #ipadapter #controlnet. Chapters 00:00 Intro 00:37 Workflow Overview 02:14 Installation part 03:52 Workflow Test 08:15 Inpainting process 12:30 Outro My Upwork Pro...
ComfyUI Tutorial: Preserving Details with IC-Light #comfyui #comfyuitutorial @risunobushi_ai
1.7K views • a month ago
In this tutorial you are going to learn how to change the background of a product image and control the light source with a mix of nodes such as IC-Light, ControlNet and IPAdapter, while keeping the object details using my new restore-details nodes. #comfyui #comfyuitutorial #comfyui #painting #ipadapter #iclight. @risunobushi_ai Chapters 00:00 Intro 01:10 Workflow Overview 04:41 Testing The Workflow 07...
ComfyUI Tutorial: Background and Light control using IPadapter #comfyui #comfyuitutorial #ipadapter
3.1K views • 2 months ago
In this tutorial I'm going to show you how to change the background and light of an image using a mix of nodes such as IC-Light and IPAdapter to obtain perfect results. #comfyui #comfyuitutorial #comfyui #painting #ipadapter #iclight Chapters 00:00 Intro 00:32 Workflow Presentation 02:03 IC-Light background change 04:23 Light Positions 07:10 Image Upscale 08:00 IP-Adapter Nodes 10:21 Fixing IP-A...
Comfyui Tutorial: Control Your Light with IC-Light Nodes #comfyui #comfyuitutorial
2K views • 2 months ago
In this tutorial I'm going to show you how to control the light source of an image or video using an IC-Light workflow which allows you to obtain unique results. #comfyui #comfyuitutorial #comfyui #painting Chapters 00:00 Intro 00:58 Necessary Files 02:42 Image Workflow Presentation 06:36 Video Workflow Presentation 11:26 Outro My Upwork Profile www.upwork.com/freelancers/~01047908de30a2c349 Reddit...
ComfyUI Tutorial: Background Replacement With Controlnet #comfyui #comfyuitutorial #controlnet
1.1K views • 2 months ago
In this tutorial I'm going to show you how to change the background of an image using a simple workflow which allows you to keep the consistency of the image with ControlNet and IPAdapter nodes. #comfyui #comfyuitutorial #sdxlturbo #sdxllightning #painting Chapters 00:00 Intro 00:40 Necessary Files 02:24 Workflow Presentation 06:44 IPadapter Role 09:22 Outro My Upwork Profile www.upwork.com/freelancers/~...
ComfyUI Tutorial: Painter Nodes for Image Generation #comfyui #sdxlturbo #stablediffusion
4.1K views • 4 months ago
In this tutorial I'm going to test painter nodes with the SDXL-Lightning LoRA model, which allows you to generate images with low CFG scale and few steps from a simple drawing sketch. #comfyui #comfyuitutorial #sdxlturbo #sdxllightning #painting Chapters 00:00 Intro 00:22 Necessary Files 01:17 Eiffel Tower Draw test 05:12 Profile Face Draw test 06:30 Car on The Road Draw test 05:12 Outro My Upwork Profile...
ComfyUI Tutorial: Juggernaut Lightning fastest model #comfyui #stablediffusion #sdxllightning
717 views • 4 months ago
ComfyUI Tutorial: Juggernaut Lightning fastest model #comfyui #stablediffusion #sdxllightning
ComfyUI Tutorial SDXL Lightning Test #comfyui #sdxlturbo #sdxllightning
1.6K views • 5 months ago
ComfyUI Tutorial SDXL Lightning Test #comfyui #sdxlturbo #sdxllightning
ComfyUI Tutorial : Background Removal and Replacement using BRIA A.I #comfyui #stablediffusion
3.8K views • 5 months ago
ComfyUI Tutorial : Background Removal and Replacement using BRIA A.I #comfyui #stablediffusion
Comfyui Tutorial: LoRA Inpainting Process with Turbo model #comfyui #stablediffusion #sdxlturbo
2.3K views • 5 months ago
Comfyui Tutorial: LoRA Inpainting Process with Turbo model #comfyui #stablediffusion #sdxlturbo
ComfyUI Tutorial: Dreamshaper Turbo model comparison #comfyui #stablediffusion #sdxlturbo
411 views • 5 months ago
ComfyUI Tutorial: Dreamshaper Turbo model comparison #comfyui #stablediffusion #sdxlturbo
ComfyUI Tutorial: Adding Details to Stable Video Diffusion Animation #comfyui #stablevideodiffusion
946 views • 6 months ago
ComfyUI Tutorial: Adding Details to Stable Video Diffusion Animation #comfyui #stablevideodiffusion
Comfyui Tutorial: Creating Animation with Animatediff and SDXL #comfyui #stablediffusion #sdxlturbo
6K views • 6 months ago
Comfyui Tutorial: Creating Animation with Animatediff and SDXL #comfyui #stablediffusion #sdxlturbo
ComfyUI : Stable Video Diffusion Tutorial #comfyui #stablediffusion #aianimationtutorial
803 views • 7 months ago
ComfyUI : Stable Video Diffusion Tutorial #comfyui #stablediffusion #aianimationtutorial
Comfyui Tutorial : SDXL-Turbo with Refiner tool #comfyui #comfyuitutorial #sdxlturbo
1.1K views • 7 months ago
Comfyui Tutorial : SDXL-Turbo with Refiner tool #comfyui #comfyuitutorial #sdxlturbo
Comfyui Tutorial : SDXL-Turbo Extension #comfyui #comfyuitutorial #sdxlturbo
1K views • 7 months ago
Comfyui Tutorial : SDXL-Turbo Extension #comfyui #comfyuitutorial #sdxlturbo
Comfyui Tutorial : How to combine multiple LoRAs #stablediffusion #comfyui #aianimationtutorial
2.9K views • 8 months ago
Comfyui Tutorial : How to combine multiple LoRAs #stablediffusion #comfyui #aianimationtutorial
ComfyUI Tutorial : ControlNet animation #stablediffusion #comfyui #aianimationtutorial
1.6K views • 8 months ago
ComfyUI Tutorial : ControlNet animation #stablediffusion #comfyui #aianimationtutorial
blender rain effect #blender #b3d #bagarain
353 views • a year ago
blender rain effect #blender #b3d #bagarain
blender day night sky #blender #truesky #b3d
252 views • a year ago
blender day night sky #blender #truesky #b3d
Blender animation inspired by Avatar: The Last Airbender
184 views • a year ago
Blender animation inspired by Avatar: The Last Airbender
ultra instinct goku animation
357 views • a year ago
ultra instinct goku animation
Blender fire simulation 3.2 #blender #godofwarragnarok
702 views • a year ago
Blender fire simulation 3.2 #blender #godofwarragnarok
blender 3.2 quick smoke rendering with AI upscaling #blender #blendertutorial
3.7K views • a year ago
blender 3.2 quick smoke rendering with AI upscaling #blender #blendertutorial
blender beach wave scene
2.7K views • a year ago
blender beach wave scene

Comments

  • @vincema4018
    @vincema4018 • 2 days ago

    Sorry to say it, but given the outcomes you got in the video, I don't feel an urge to make the change. Tile upscale in SDXL is still very primitive, and I'd rather use Ultimate SD instead, which normally gives me much more stable and better results under both A1111 and ComfyUI. Actually, for images generated with SDXL I always use hires fix with an upscale model like 4xUltraSharp; it works great in terms of keeping the original details. However, for upscaling a random image using SDXL I would prefer SUPIR, as it gives me cleaner and more creative results. One alternative way of refining images generated from SD 1.5, SDXL or MJ is using SD3 with a low denoise without a ControlNet; I find the results much better than the original version. But be aware it may introduce too much detail, some of which we may not want.
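
A minimal sketch of the low-denoise refinement pass described above, assuming a recent diffusers build and the SD3 Medium checkpoint; the strength value and prompt are illustrative, not the commenter's exact setup.

```python
# Hedged sketch of the low-denoise refine described in the comment above:
# run a finished image back through img2img at low strength, no controlnet.
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
).to("cuda")

src = load_image("generated.png")
refined = pipe(
    prompt="same prompt used for the original generation",
    image=src,
    strength=0.25,  # low denoise: adds micro-detail without repainting
).images[0]
refined.save("refined.png")
```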

    • @cgpixel6745
      @cgpixel6745 • a day ago

      Well, thanks for the tips. I also wanted to try SUPIR, but I have low VRAM; that's why I wanted to try other methods like Ultimate SD or tile upscale, because they consume less VRAM and the results are quite acceptable. But I will try your method and see its results; it seems like a quite promising tip.

  • @weirdscix
    @weirdscix • 3 days ago

    Nice video. I used to use SUPIR quite a lot, but then I moved on to the McBoaty upscaler/refiner. It uses tiling (you can even edit the prompt/denoise per tile) and can use ControlNet models as well; it's from the MaraScott nodes.

    • @leiyangalable
      @leiyangalable • 3 days ago

      Do you know how to calculate the tiles' width and height? I want to tile images myself, but I get stuck at the upscaling step, when I want to put the tiled images back into a whole one.
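
One way to do that tile math (an illustrative sketch, not from the video): split with a fixed overlap, process each tile, then paste the tiles back at scaled offsets. Here plain pasting lets later tiles overwrite the overlap; a feathered blend over the overlap would hide seams better.

```python
# Illustrative tile split/merge with overlap (not from the video).
# The diffusion upscale of each tile is replaced by a plain resize here.
from PIL import Image

def split_boxes(img, tile=512, overlap=64):
    """Return crop boxes covering the image with `overlap` px between tiles."""
    step = tile - overlap
    boxes = []
    for top in range(0, max(img.height - overlap, 1), step):
        for left in range(0, max(img.width - overlap, 1), step):
            boxes.append((left, top,
                          min(left + tile, img.width),
                          min(top + tile, img.height)))
    return boxes

def merge_tiles(boxes, tiles, size, scale):
    """Paste upscaled tiles at scaled offsets; later tiles overwrite the overlap."""
    out = Image.new("RGB", (size[0] * scale, size[1] * scale))
    for (left, top, _, _), t in zip(boxes, tiles):
        out.paste(t, (left * scale, top * scale))
    return out

img = Image.open("input.png").convert("RGB")
scale = 2
boxes = split_boxes(img)
# In practice each cropped tile would go through the SD upscale pass here.
tiles = [img.crop(b).resize(((b[2] - b[0]) * scale, (b[3] - b[1]) * scale))
         for b in boxes]
merge_tiles(boxes, tiles, (img.width, img.height), scale).save("merged.png")
```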

    • @leiyangalable
      @leiyangalable • 3 days ago

      I used McBoaty too, but I want to use a different ControlNet, so I tile myself.

  • @arjuneswarrajendran

    After the generation my storage decreased from 50 GB to 30 GB; do you know why?

    • @cgpixel6745
      @cgpixel6745 • a day ago

      It could be related to a ComfyUI update that downloaded other nodes' models.

  • @user-yx4yt5zq2y
    @user-yx4yt5zq2y • 4 days ago

    Error occurred when executing RIFE VFI:
    RIFE_VFI.vfi() missing 1 required positional argument: 'frames'
    File "H:\MachineLearning\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
      output_data, output_ui = get_output_data(obj, input_data_all)
    File "H:\MachineLearning\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
      return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "H:\MachineLearning\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
      results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))

  • @RoshanYadav-v2z
    @RoshanYadav-v2z • 5 days ago

    Error occurred when executing ImageScaleBy:
    ImageScaleBy.upscale() missing 1 required positional argument: 'image'
    File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in

    • @user-yx4yt5zq2y
      @user-yx4yt5zq2y • 4 days ago

      Same, but at line 152.

    • @RoshanYadav-v2z
      @RoshanYadav-v2z • 4 days ago

      @@user-yx4yt5zq2y I found a solution.

    • @cgpixel6745
      @cgpixel6745 • a day ago

      It seems the latest update created that issue; I am also facing the same problem, but you can use the Upscale Image node instead.

    • @RoshanYadav-v2z
      @RoshanYadav-v2z • a day ago

      @@cgpixel6745 I fixed the problem

  • @health_beaty
    @health_beaty • 7 days ago

    class

  • @MisterCozyMelodies
    @MisterCozyMelodies • 7 days ago

    Where can I download depth_sdxl.safetensors?

  • @health_beaty
    @health_beaty • 9 days ago

    OK

  • @MaghrabyANO
    @MaghrabyANO • 10 days ago

    Can you provide the workflow in the intro?

    • @cgpixel6745
      @cgpixel6745 • a day ago

      You can find this workflow under the IC-Light root folder.

  • @RoshanYadav-v2z
    @RoshanYadav-v2z • 10 days ago

    I got this error, what do I do?
    Error occurred when executing LoraLoader: 'NoneType' object has no attribute 'lower'
    File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
      output_data, output_ui = get_output_data(obj, input_data_all)
    File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
      return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
      results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\nodes.py", line 11, in load_lora

    • @cgpixel6745
      @cgpixel6745 • 10 days ago

      Make sure that you select the LoRA file in the LoRA loader, otherwise it will not work.

    • @RoshanYadav-v2z
      @RoshanYadav-v2z • 10 days ago

      @@cgpixel6745 OK, I will try now.

    • @RoshanYadav-v2z
      @RoshanYadav-v2z • 10 days ago

      @@cgpixel6745 I got another error:
      Error occurred when executing ADE_AnimateDiffLoaderWithContext: 'NoneType' object has no attribute 'lower'
      File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
        output_data, output_ui = get_output_data(obj, input_data_all)
      File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
        return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
      File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
        results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
      File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\nodes_gen1.py", line 138, in load_mm_and_inject_params
        motion_model = load_motion_module_gen1(model_name, model, motion_lora=motion_lora, motion_model_settings=motion_model_settings)
      File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_injection.py", line 1201, in load_motion_module_gen1
        mm_state_dict = comfy.utils.load_torch_file(model_path, safe_load=True)
      File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 14, in load_torch_file

  • @DBPlusAI
    @DBPlusAI • 10 days ago

    Where can I find the latest version? I saw the one in your channel, but it is not the latest version <3

    • @cgpixel6745
      @cgpixel6745 • 10 days ago

      The latest version of what, exactly?

    • @DBPlusAI
      @DBPlusAI • 9 days ago

      @@cgpixel6745 I'm very sorry; I hadn't watched the video carefully and misunderstood. I'm sincerely sorry :<<<

  • @RoshanYadav-v2z
    @RoshanYadav-v2z • 11 days ago

    Hi sir, I need help with ComfyUI 😊

    • @cgpixel6745
      @cgpixel6745 • 11 days ago

      Of course, how can I help you?

    • @RoshanYadav-v2z
      @RoshanYadav-v2z • 11 days ago

      @@cgpixel6745 Sir, when I run ComfyUI by clicking run_nvidia_gpu, this error shows. What do I do? Please guide:
      Traceback (most recent call last):
      File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\main.py", line 80, in <module>
        import execution
      File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 11, in <module>
        import nodes
      File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\nodes.py", line 21, in <module>
        import comfy.diffusers_load
      File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\comfy\diffusers_load.py", line 3, in <module>
        import comfy.sd
      File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 5, in <module>
        from comfy import model_management
      File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 119, in <module>
        total_vram = get_total_memory(get_torch_device()) / (1024*1024)
      File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 88, in get_torch_device
        return torch.device(torch.cuda.current_device())
      File "C:\Users\akash\Documents\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\cuda\__init__.py", line 778, in current_device
        _lazy_init()
      File "C:\Users\akash\Documents\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\cuda\__init__.py", line 293, in _lazy_init
        torch._C._cuda_init()
      RuntimeError: The NVIDIA driver on your system is too old (found version 11060). Please update your GPU driver by downloading and installing a new version from the URL: www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver.
      C:\Users\akash\Documents\ComfyUI_windows_portable>pause
      Press any key to continue
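
The traceback itself points at the fix: this PyTorch build targets a newer CUDA than the installed driver supports, so either update the NVIDIA driver or install a PyTorch wheel matching the older driver. A small diagnostic sketch (run with ComfyUI's embedded Python) to see what mismatches:

```python
# Diagnostic sketch for the driver/CUDA mismatch above: print what this
# PyTorch wheel was built for versus what the installed driver can run.
import torch

print("torch version:", torch.__version__)
print("built for CUDA:", torch.version.cuda)        # CUDA this wheel targets
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
else:
    # Matches the RuntimeError above: the driver is too old for this build.
    print("Update the NVIDIA driver, or install a torch wheel built "
          "for the CUDA version your driver supports.")
```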

  • @bgtubber
    @bgtubber • 14 days ago

    I've also had bad results with the old SDXL canny. I've always wondered why it's worse than the SD 1.5 canny. Good to know that the Canny in the Controlnet Union model doesn't have such problems. Thanks for the demonstration!

    • @cgpixel6745
      @cgpixel6745 • 13 days ago

      Thanks to you for the positive energy!

  • @wellshotproductions6541

    Thank you for this, another great video. I like how you go over the workflow, highlighting the different steps. And no background music to distract me! You sound like a wise professor.

    • @cgpixel6745
      @cgpixel6745 • 16 days ago

      @@wellshotproductions6541 Thanks for your comments. I am trying to improve the quality with every tutorial based on the community's advice; I am glad that you liked it.

  • @ken-cheenshang6829
    @ken-cheenshang6829 • 17 days ago

    thx!

  • @lance3301
    @lance3301 • 17 days ago

    Great content and great workflow. Thanks for sharing.

  • @jinxing-xv3py
    @jinxing-xv3py • 18 days ago

    It is amazing

  • @MrEnzohouang
    @MrEnzohouang • 18 days ago

    I have a question about the commercial use of ComfyUI workflows: is it possible to naturally place products on models? At present I only know of clothes and shoes as relatively large products, but what about jewelry such as earrings, bracelets, necklaces, etc.? Midjourney can be used to process the modified photos, but the appearance of the product cannot be controlled, and appearance control of very small objects in SD does not seem strong; at the least, a thin chain will be difficult. I wonder if you have a solution? Thank you very much.

  • @Gavinnnnnnnnnnnnnnn
    @Gavinnnnnnnnnnnnnnn • 19 days ago

    How do I get depth_sdxl.safetensors for Depth Anything?

  • @sinuva
    @sinuva • 20 days ago

    Quite a big difference, actually.

  • @kallamamran
    @kallamamran • 23 days ago

    I feel like V2 actually has LESS details 🤔

  • @Nonewedone
    @Nonewedone • 26 days ago

    Thank you. I used this workflow to generate a picture and everything seems good, but the uploaded image didn't affect the color in the area I masked.

    • @cgpixel6745
      @cgpixel6745 • 26 days ago

      Try playing with the weight value of the IPAdapter.

  • @govindmadan2353
    @govindmadan2353 • 29 days ago

    The SDXL depth ControlNet keeps giving an error: Error occurred when executing ACN_AdvancedControlNetApply: 'ControlNet' object has no attribute 'latent_format'. Do you know anything about this, or can you please give the link to the exact depth and scribble files for ControlNet that you are using?

    • @govindmadan2353
      @govindmadan2353 • 29 days ago

      Already using the one given in the link.

    • @cgpixel6745
      @cgpixel6745 • 27 days ago

      Use this link: huggingface.co/lllyasviel/sd-controlnet-scribble/tree/main. Also don't forget to rename your ControlNet model and click refresh in ComfyUI in order to add the model name; that should fix your error.

  • @KINGLIFERISM
    @KINGLIFERISM • 29 days ago

    Comfy is so annoying; the developer really needs to make it more stable. I could not install this, and I have installed LLMs and even the dependencies for faceswap and dlib, which anyone knows isn't straightforward. But this? No go... sigh. I give up and am not reinstalling again.

    • @cgpixel6745
      @cgpixel6745 • 29 days ago

      Yes, you are right, but for this DAV2 it is quite simple. Did you face any issues?

  • @pixelcounter506
    @pixelcounter506 • a month ago

    Thank you very much for the information. For me it's quite surprising to get a more detailed depth map with V2 but more or less the same results. I guess canny or scribble helps overcome the lack of precision of the V1 depth map.

  • @aarizmohamed17138
    @aarizmohamed17138 • a month ago

    Amazing work🙌🙌🥳🔥

  • @lonelytaigahotel
    @lonelytaigahotel • a month ago

    How do I increase the number of frames?

    • @cgpixel6745
      @cgpixel6745 • a month ago

      You change it with the number of frames in the Video Combine node.

    • @RoshanYadav-v2z
      @RoshanYadav-v2z • 11 days ago

      @@cgpixel6745 The IPAdapter folder is not found in the models folder; what do I do?

  • @MattOverDrive
    @MattOverDrive • a month ago

    Thank you very much for posting the workflow! For anybody curious, I ran CG Pixel's default workflow and prompt on an NVIDIA P40: image generation was 25 seconds and video generation was 9 minutes and 11 seconds. I have a 3090 on the way, lol.

    • @cgpixel6745
      @cgpixel6745 • a month ago

      I am glad that I helped you. I also have an RTX 3060; yours should perform better than mine, especially if you have more than 6 GB of VRAM.

    • @MattOverDrive
      @MattOverDrive • a month ago

      @@cgpixel6745 I put in an RTX 3070 Ti (8 GB) and it generated the image in 5 seconds and the video in 2 minutes and 13 seconds. Time to retire the P40, lol. I'll report back when the 3090 is here.

    • @MattOverDrive
      @MattOverDrive • a month ago

      It was delivered today, RTX 3090 image generation was 3 seconds and the video was 1 minute and 14 seconds. Huge improvement!

  • @weirdscix
    @weirdscix • a month ago

    Interesting video. Did you base this on the ipiv workflow? Only the upscaling seems to differ.

  • @senoharyo
    @senoharyo • a month ago

    Thanks a lot, brother! This is the workflow I was looking for; you are my superhero! XD

  • @runebinder
    @runebinder • a month ago

    Interesting comparison, but it's a bit apples-to-oranges, as the fine-tuned models have the benefit of a much greater data set and more development. I haven't seen anyone compare it to SDXL Base yet, which would be a more accurate check. SD3's main issue, as far as I can see, is that it appears to have quite a limited training data set, as the poses all look very similar, etc. Really looking forward to seeing what the community does with it.

    • @cgpixel6745
      @cgpixel6745 • a month ago

      Yeah, I also believe more amazing updates are going to come for this SD3 model; let's cross our fingers for it.

    • @Utoko
      @Utoko • a month ago

      If you disincentivize fine-tunes with your licensing, it is another story though.

  • @yesheng8779
    @yesheng8779 • a month ago

    thank you so much

  • @Davidgotbored
    @Davidgotbored • a month ago

    There is an annoying problem: when I zoom out, the fog on the moon disappears from view. How can I increase the view distance so the fog doesn't disappear? Please help me.

    • @cgpixel6745
      @cgpixel6745 • a month ago

      In the View tab, change the End value from 1000 to 10 000; then select the camera, go to the camera icon, and do the same from 100 to 10 000, and it should be fixed.
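
For anyone who prefers to script the same fix, here is a hedged bpy sketch of the steps described above; the 10 000 value mirrors the reply, and which viewports and cameras exist depends on your scene.

```python
# Hedged bpy sketch of the clipping fix described above: raise the viewport
# and camera clip-end distances so distant volumetrics (the fog) stay visible.
import bpy

# Viewport "End" value (View tab, default 1000)
for area in bpy.context.screen.areas:
    if area.type == 'VIEW_3D':
        area.spaces.active.clip_end = 10000

# Each camera's own clip end (default 100)
for cam in bpy.data.cameras:
    cam.clip_end = 10000
```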

  • @onezen
    @onezen • a month ago

    Can we do all the upscale stuff in ComfyUI directly?

    • @cgpixel6745
      @cgpixel6745 • a month ago

      Yes we can. I will upload a video on that soon, stay tuned.

    • @onezen
      @onezen • a month ago

      @@cgpixel6745

  • @user-kx5hd6fx3t
    @user-kx5hd6fx3t • a month ago

    so great, thank you so much

  • @pixelcounter506
    @pixelcounter506 • a month ago

    Thank you for presenting this tool. It seems really interesting and could be quite helpful for compositing!

  • @pixelcounter506
    @pixelcounter506 • a month ago

    Your comparison between IC-Light and IP-Adapter is a really good idea. I have the feeling that you have more control over the final result with IP-Adapter by selecting a base image; with IC-Light you always get a quite heavy color shift. Does the mask still play a role if you are using IP-Adapter?

    • @cgpixel6745
      @cgpixel6745 • a month ago

      Yes, it still plays a role, and you can check it by changing its position.

  • @vincema4018
    @vincema4018 • a month ago

    Is it possible to get your light-type images?

  • @zlwuzlwu
    @zlwuzlwu • a month ago

    Great job

  • @ismgroov4094
    @ismgroov4094 • a month ago

    Thx sir❤

    • @cgpixel6745
      @cgpixel6745 • a month ago

      You're welcome; I hope that was helpful.

  • @StudioOCOMATimelapse
    @StudioOCOMATimelapse • a month ago

    Thanks, it's spot on 👍

  • @ismgroov4094
    @ismgroov4094 • a month ago

    this is good.

  • @ismgroov4094
    @ismgroov4094 • a month ago

    thanks a lot. I respect you, sir!

    • @cgpixel6745
      @cgpixel6745 • a month ago

      Thanks, it helps me create more amazing videos.

  • @SoSpecters
    @SoSpecters • a month ago

    Hey, I really like this workflow and concept, but I can't seem to run it. I keep getting this error:
    Error occurred when executing KSampler: 'ModuleList' object has no attribute '1'
    And in the console I see:
    WARNING SHAPE MISMATCH diffusion_model.input_blocks.0.0.weight WEIGHT NOT MERGED torch.Size([320, 8, 3, 3]) != torch.Size([320, 4, 3, 3])
    IC-Light: Merged with diffusion_model.input_blocks.0.0.weight channel changed from torch.Size([320, 4, 3, 3]) to [320, 8, 3, 3]
    !!! Exception during processing!!! 'ModuleList' object has no attribute '1'
    I didn't touch anything, and I watched the IC-Light installation video beforehand. I completely re-installed ComfyUI and installed only the modules used in this workflow, and I still get this error... any ideas?

    • @cgpixel6745
      @cgpixel6745 • a month ago

      Check your checkpoint model; I personally used the Juggernaut version, not the SDXL one.

    • @SoSpecters
      @SoSpecters • a month ago

      @@cgpixel6745 I used 5 different SD 1.5 models, including the very first one that comes with Comfy, Emu 1.5 or whatever it's called... Right now my latest lead indicates that LayerDiffuse, a requirement for IC-Light, may not have installed correctly despite being installed. Further research once I get home.

    • @cgpixel6745
      @cgpixel6745 • a month ago

      @@SoSpecters In that case, try updating ComfyUI, or reduce the resolution of the image from 1024 to 512; maybe that will do it.

    • @SoSpecters
      @SoSpecters • a month ago

      @@cgpixel6745 Alright, did that, brother; it seems that was not the case. I opened a ticket on the IC-Light GitHub; I'm seeing a lot of KSampler errors like my own. Hoping to get some feedback there, and I will share with the community when I figure it out.

  • @MrEnzohouang
    @MrEnzohouang • a month ago

    Could you help me with this case? Please.
    An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.
    File "D:\AI\ComfyUI-aki-v1.3\execution.py", line 151, in recursive_execute
      output_data, output_ui = get_output_data(obj, input_data_all)
    File "D:\AI\ComfyUI-aki-v1.3\execution.py", line 81, in get_output_data
      return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "D:\AI\ComfyUI-aki-v1.3\execution.py", line 74, in map_node_over_list
      results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "D:\AI\ComfyUI-aki-v1.3\custom_nodes\comfyui_controlnet_aux\node_wrappers\depth_anything.py", line 19, in execute
      model = DepthAnythingDetector.from_pretrained(filename=ckpt_name).to(model_management.get_torch_device())
    File "D:\AI\ComfyUI-aki-v1.3\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\depth_anything\__init__.py", line 40, in from_pretrained
      model_path = custom_hf_download(pretrained_model_or_path, filename, subfolder="checkpoints", repo_type="space")
    File "D:\AI\ComfyUI-aki-v1.3\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\util.py", line 324, in custom_hf_download
      model_path = hf_hub_download(repo_id=pretrained_model_or_path,
    File "", line 52, in hf_hub_download_wrapper_inner
    File "D:\AI\ComfyUI-aki-v1.3\python\lib\site-packages\huggingface_hub\utils\_validators.py", line 118, in _inner_fn
      return fn(*args, **kwargs)
    File "D:\AI\ComfyUI-aki-v1.3\python\lib\site-packages\huggingface_hub\file_download.py", line 1371, in hf_hub_download
      raise LocalEntryNotFoundError(
    I put the checkpoint file in D:\AI\sd-webui-aki-v4.5\models\Depth-Anything and added the file link address in ComfyUI; then I put the 3 .pth files in D:\AI\sd-webui-aki-v4.5\extensions\sd-webui-controlnet\models and set the same address in the ComfyUI yaml file.

    • @MrEnzohouang
      @MrEnzohouang • a month ago

      I found the file address and fixed the problem myself, thanks for the edited workflow!

  • @NgocNguyen-ze5yj
    @NgocNguyen-ze5yj • 2 months ago

    Wonderful tutorials. Could you please make a video working with people as subjects? (IC-Light and IPAdapter error with face and body) Thanks.

    • @cgpixel6745
      @cgpixel6745 • a month ago

      Yeah, I will try to. I will upload another IC-Light video soon, so stay tuned.

  • @user-kx5hd6fx3t
    @user-kx5hd6fx3t • 2 months ago

    I can't find the 16:9 version of this video on your channel.

  • @IamalegalAlien
    @IamalegalAlien • 2 months ago

    Could you help me solve a Depth Anything error? I got:
    Error occurred when executing DepthAnythingPreprocessor: [Errno 2] No such file or directory: 'C:\\Users\\meee2\\Desktop\\SD\\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\comfyui_controlnet_aux\\ckpts\\LiheYoung\\Depth-Anything\\.huggingface\\download\\checkpoints\\depth_anything_vitl14.pth.6c6a383e33e51c5fdfbf31e7ebcda943973a9e6a1cbef1564afe58d7f2e8fe63.incomplete'
    and I don't know how to solve it.

    • @cgpixel6745
      @cgpixel6745 • a month ago

      You need to place the ckpts model into the right folder: comfyui\models\controlnet

    • @MrEnzohouang
      @MrEnzohouang • a month ago

      @@cgpixel6745 I found the file address and fixed the problem myself, thanks for the edited workflow!

  • @briz-vh9sm
    @briz-vh9sm • 2 months ago

    Bro, if I have a portrait and I want to change the background behind the person, the shadows and lighting look awkward. Can you make a workflow that satisfies these two requirements?

    • @cgpixel6745
      @cgpixel6745 • 2 months ago

      Yes bro, you can do it. Just watch this tutorial: czcams.com/video/6OpjPnim0a8/video.html. It should resolve everything.