ComfyUI Architectural design plan workflow
- added Feb 17, 2024
- ComfyUI interior design ControlNet IPAdapter workflow
From an architectural design plan to endless design possibilities:
you can upload an existing plan, a prompt, and example images, and receive endless variations, adapted to the dimensions of the furniture and spaces you have created.
#comfyui #stablediffusion #ipadapter #controlnet #depthanything #img2img
follow me @pixeleasel
Workflow:
*Updated workflow version (compatible with IP-Adapter V2)
drive.google.com/file/d/1SURg...
old version
drive.google.com/file/d/1Zkkz...
DepthAnything demo:
huggingface.co/spaces/LiheYou...
DepthAnything model:
huggingface.co/spaces/LiheYou...
LineArt (and other ControlNets) models:
huggingface.co/lllyasviel/Con...
WidStudio
www.wid.co.il/copy-of-work-2
Nice...Thank you...
more than welcome!
good job bro, I will follow your steps
good luck!!
good job, thank you very much.
thx!!!
very cool. So the first step is to use one of the available virtual decoration tools and then bring the basic decoration over to ComfyUI to generate multiple versions with different aesthetics.
yep! very useful for getting ideas
😍Thank you very much for your video! Very good content, honestly shared. I have a question: is the LoRA important? I don't have a LoRA like yours. Thank you very much, wish you all the best!
thanks!! it's just the LCM LoRA, so it will be a bit faster. you can use the same workflow without it, just adjust the KSampler accordingly
Hi, thanks for the video. What type of lora are you using? My results are blurry, I think it's because I'm using the wrong lora. Thanks
it's LCM, just to make it a bit faster
you can also bypass it (just change your KSampler settings to match; see the sketch below for typical values with and without LCM)
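To make "adjust the KSampler" concrete: with the LCM LoRA you typically sample at roughly 4-8 steps with CFG around 1-2, and without it you go back to normal SD 1.5 settings (roughly 20-30 steps, CFG around 7). Below is a minimal sketch of the same trade-off outside ComfyUI using the diffusers library; the checkpoint name, prompt, and exact numbers are illustrative assumptions, not the workflow from the video.

    # Minimal diffusers sketch of the LCM-LoRA speed/settings trade-off.
    # Assumption: illustrative only, NOT the ComfyUI workflow itself.
    import torch
    from diffusers import StableDiffusionPipeline, LCMScheduler

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    default_scheduler = pipe.scheduler  # keep the stock scheduler for the "bypass" case

    # With the LCM LoRA: LCM scheduler, few steps, very low CFG.
    pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
    pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")
    fast = pipe("scandinavian living room", num_inference_steps=6, guidance_scale=1.5).images[0]

    # Bypassed (no LCM): unload the LoRA, restore the scheduler, normal settings.
    pipe.unload_lora_weights()
    pipe.scheduler = default_scheduler
    slow = pipe("scandinavian living room", num_inference_steps=25, guidance_scale=7.0).images[0]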
This is awesome. After loading DepthAnything, I see it in Load Advanced ControlNet Model, but it doesn't show up in the AIO Aux Preprocessor. Is something missing?
thx. did you try to update comfy?
I did, thanks. After that it showed up. It's also worth noting that there was a custom node that needed to be updated and was causing an error. But I got it worked out and this is an amazing workflow. Thanks for sharing :)
I'm glad it's working
I downloaded the workflow you provided and tried to run it. I found and collected the various necessary parts, but I couldn't find the pytorch_lora_weights_SD.safetensors from the error message. Do you know where I can get this file?
huggingface.co/collections/latent-consistency/latent-consistency-models-loras-654cdd24e111e16f0865fba6 here you can find the models for SDXL and SD 1.5
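If you'd rather fetch it from a script, here is a small sketch using huggingface_hub. The repo id is the SD 1.5 LCM LoRA from that collection; the target folder assumes a default ComfyUI install, and you may need to rename the downloaded file to whatever name the workflow's loader expects.

    # Sketch: download the SD 1.5 LCM LoRA into ComfyUI's loras folder.
    from huggingface_hub import hf_hub_download

    hf_hub_download(
        repo_id="latent-consistency/lcm-lora-sdv1-5",  # SD 1.5 variant from the collection
        filename="pytorch_lora_weights.safetensors",
        local_dir="ComfyUI/models/loras",
    )
    # Rename the file afterwards if the workflow expects a different name
    # (e.g. the pytorch_lora_weights_SD.safetensors from the error message).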
I got it working after many attempts. In the saved workflow, only the AIO Aux Preprocessor part did not work as set, so I set both of its windows to none. The output image comes out well, matching the inserted image and the depth map, but the color does not apply. Is this related to the role of the AIO Aux Preprocessor?
if I understand correctly... that could be the problem. you can use any other preprocessor, but it's important to preprocess
@PixelEasel Thank you for your answer. I think I need to make an effort to create a more similar environment.
I am not sure where to download the DepthAnything preprocessor; I just downloaded the model.
I got the error report below:
Error occurred when executing AIO_Preprocessor:
An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.
the preprocessor model should be downloaded automatically the first time the node runs
in any case, you can use any other depth preprocessor node, not necessarily the AIO one; you can even create the depth map outside ComfyUI, as in the sketch below
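If the automatic download keeps failing, one workaround (my assumption, not part of the original workflow) is to build the depth map outside ComfyUI with the transformers port of Depth Anything and load the saved PNG into the workflow as a normal image input. The input filename here is hypothetical.

    # Sketch: generate a depth map outside ComfyUI with the transformers
    # port of Depth Anything, then feed the PNG into the workflow as an image.
    from transformers import pipeline
    from PIL import Image

    depth_estimator = pipeline("depth-estimation", model="LiheYoung/depth-anything-small-hf")
    result = depth_estimator(Image.open("interior_sketch.png"))  # hypothetical input file
    result["depth"].save("depth_map.png")  # grayscale depth image for ControlNet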
Thanks for the tut. Everything works if I bypass the IPAdapter group node. But if I activate it I get this:
Prompt outputs failed validation
IPAdapterModelLoader:
- Value not in list: ipadapter_file: 'ip-adapter-plus_sd15.safetensors' not in []
CLIPVisionLoader:
- Value not in list: clip_name: 'model.safetensors' not in []
By the way, I downloaded the 3 IP-Adapter files from Hugging Face and placed them into the "d-webui-controlnet\models" folder
Any guidance will be appreciated.
it seems that you need to download the CLIP vision model and put it in the right directory, as mentioned in the message you got
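For reference, a default ComfyUI install looks for these files in its own models folder, not in an A1111 extension folder (unless you map that folder via extra_model_paths.yaml). A typical layout looks like this; the ipadapter folder is created by the ComfyUI_IPAdapter_plus nodes:

    ComfyUI/
      models/
        checkpoints/   (SD 1.5 / SDXL checkpoints)
        loras/         (e.g. the LCM LoRA)
        controlnet/    (lineart, depth and other ControlNet models)
        clip_vision/   (CLIP vision encoder, e.g. model.safetensors)
        ipadapter/     (e.g. ip-adapter-plus_sd15.safetensors)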
"model.safetensors" is so very generic that it seems impossible to find the correct file. There are many projects that generate files with that name. Can you provide a link to the specific one you use? @@PixelEasel
@PixelEasel You were right, that was missing, but then I placed everything where it's supposed to be and it gave me this error:
Error occurred when executing IPAdapterApply:
Error(s) in loading state_dict for Resampler:
size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1664]).
@brianedgin After a lot of struggling I finally made it work, but not for the sd_xl_base 1.0 model, which was my goal. I've tried lots of combinations of VAE, IPAdapter, and CLIPVision, but I just can't make it work for that XL model. Do you happen to know the recommended (VAE, IPAdapter, CLIPVision) checkpoints to work with that model, or am I missing something else?? Thanks for any clue...
Warning: Ran out of memory when regular VAE decoding, retrying with tiled VAE decoding... I am getting this error, what do I do?
it seems you ran out of memory...
hi, after installing IPAdapter I have everything but the Apply IPAdapter node. what should I do? I reinstalled it but nothing happened, and I updated Comfy too
the name has changed, you can use the IPAdapter Advanced node
@PixelEasel in the weight_type of the new IPAdapter version there is no "channel penalty"... which other type would be the best?
Hello! Let me ask you: How to install Depth-Anything? Thanks!
I show how to install it at the end of this video. you also have the link to the model page in the description
@PixelEasel Thanks
Error occurred when executing KSampler:
list index out of range
Try reloading the workflow; this sounds like a strange message for this workflow
Hello, I downloaded the workflow, and I also have a problem when running IPAdapter.
Error occurred when executing IPAdapterApply:
Error(s) in loading state_dict for Resampler:
size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1024]).
File "E:\05-Software
ew_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\05-Software
ew_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\05-Software
ew_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\05-Software
ew_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 769, in apply_ipadapter
self.ipadapter = IPAdapter(
^^^^^^^^^^
File "E:\05-Software
ew_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 369, in __init__
self.image_proj_model.load_state_dict(ipadapter_model["image_proj"])
File "E:\05-Software
ew_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch
n\modules\module.py", line 2152, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:
\t{}'.format(
Have you ever encountered errors like this?
I fixed it, I think it was because I used the wrong CLIP vision model
good to know! thx
For some reason my KSampler kept throwing errors with this, even with the latest update:
mat1 and mat2 shapes cannot be multiplied (77x2048 and 768x320)
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI
odes.py", line 1344, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI
odes.py", line 1314, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 22, in informative_sample
raise e
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control_reference.py", line 47, in refcn_sample
return orig_comfy_sample(model, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
try to resize the sketch outside Comfy, and make sure you're up to date (a mat1/mat2 shape mismatch like this usually means SD 1.5 and SDXL components are mixed, e.g. an SDXL checkpoint with an SD 1.5 ControlNet, so check that everything matches the same base model)
@PixelEasel do both images need to be the exact same resolution?
I got this after resizing it down:
mat1 and mat2 shapes cannot be multiplied (154x2048 and 768x320)
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI
odes.py", line 1344, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI
odes.py", line 1314, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 22, in informative_sample
raise e
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control_reference.py", line 47, in refcn_sample
return orig_comfy_sample(model, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 37, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
no, the sketch goes through the Get Image Size node just so the latent is the same size; the reference image for the design can be different in size and proportion
how did you fix this error?
@JonBekk make sure to use the right checkpoint. I noticed that was the problem.
I have this error when running it. Can you assist? Thank you.
Error occurred when executing IPAdapterApply:
'NoneType' object has no attribute 'patcher'
File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 751, in apply_ipadapter
clip_embed = encode_image_masked(clip_vision, image, clip_vision_mask)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 270, in encode_image_masked
comfy.model_management.load_model_gpu(clip_vision.patcher)
^^^^^^^^^^^^^^^^^^^
if you can use IPAdapter with a different (adapter) model, then the problem is with the plus model and you need to install it... if not, try reinstalling the IPAdapter nodes
Thank you very much for your quick reply. When I loaded your workflow, I was missing the ipadapter plus model, so I got that one downloaded. I don't have the CLIP vision model. I believe it's the safetensors model that needs to be in the clip_vision folder. Do you have the link to that model?
here you have all the models you need
huggingface.co/h94/IP-Adapter/tree/main/models
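For completeness, here is a minimal download sketch with huggingface_hub, using the file paths from that repo. The local folders assume a default ComfyUI install; note that hf_hub_download keeps the repo's subfolder structure under local_dir, so move the files into the folder root afterwards (or just download them in the browser).

    # Sketch: fetch the SD 1.5 IP-Adapter plus model and the matching
    # ViT-H CLIP vision encoder from the h94/IP-Adapter repo.
    from huggingface_hub import hf_hub_download

    hf_hub_download(
        repo_id="h94/IP-Adapter",
        filename="models/ip-adapter-plus_sd15.safetensors",
        local_dir="ComfyUI/models/ipadapter",  # then move the file out of the models/ subfolder
    )
    hf_hub_download(
        repo_id="h94/IP-Adapter",
        filename="models/image_encoder/model.safetensors",
        local_dir="ComfyUI/models/clip_vision",  # then move/rename as needed
    )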