Complete Comfy UI Guide Part 1 | Beginner to Pro Series
- Published 14 Jun 2024
- This is the first part of a complete Comfy UI SDXL 1.0 guide. This guide is part of a series to take you from complete Comfy UI beginner to expert.
Workflow (you can drag the png onto your canvas just like the JSON): drive.google.com/file/d/1WhxG...
If you want to support this channel, I've setup a Patreon for those who wish to help.
/ endangeredai - Science & Technology
This is lowkey one of the best introductions into how SD works WHILE giving a good start for working with ComfyUI. Well done!
Thank you so much! Glad the videos are helpful!
That might be the first Comfy UI explanation that made it look easier. Good work! For the advanced videos it would be awesome to explore custom nodes and how to make them work together.
I’m so glad to hear that! I did my best to answer everything I had questions about when I started! I do plan to start introducing custom nodes down the line! But I thought working with out of the box stuff would be better first!
Thank you so much, I was scratching my head trying to find the workflow for the refiner... Now it makes sense! Great video!
Glad it helped!
I believe and hope your channel will grow, because it will. Sticking around to see it!
Thank you so much! I really appreciate your comment!
What an outstanding tutorial, detailed step by step and easy to follow.
Thank you very much for taking the time to make this video.
Subbed.
Thank you! ❤️❤️❤️, glad it was helpful!
Thank you very much for this. I can’t wait to see the next one.
Glad you enjoyed it!
Thank you for this tutorial! I was looking for a "how SD works" and this covered it.
Glad it was helpful!
best intro into ComfyUI i have seen, thank you for the help
Glad it helped!
Thank you so much for this workflow. I don't know if you just have that magic touch or what, but other workflows never produced great results for me. For some reason your workflow has allowed me to produce substantial results in SDXL for my business. Thank you!
I’m glad it’s been helpful for you!
Thanks a lot. This helped me to get started with SDXL and ComfyUI
Great to hear!
Thank you mate, this got me into the rabbit hole. Amazing!
The only thing I can add: if you are installing with Pinokio, it will not add the base model VAE to the models folder, and you will not be able to use the Load VAE node. It will just give you an error.
You can fix it either by connecting the Load Checkpoint VAE output directly to the VAE Decode input (which makes you use the base model VAE), or by downloading a VAE model from Hugging Face and copying it to the VAE folder, or by using the Manager function of Comfy (bottom right) to find other checkpoints.
But be careful, because old VAEs will produce just noisy images.
--- As mentioned, I started all this yesterday. The VAE thing took me a while to figure out, but this tutorial is so damn amazing. I would never have managed to make this first step without it.
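For anyone hitting the same missing-VAE error, the fix boils down to putting a VAE file where ComfyUI looks for it. A rough sketch (paths assume a default ComfyUI install; the filename is just an example, so grab the actual file from the VAE's Hugging Face model page):

```shell
# ComfyUI's Load VAE node lists files from this folder (relative to the install):
mkdir -p ComfyUI/models/vae

# After downloading a VAE (e.g. sdxl_vae.safetensors) from Hugging Face,
# move it into that folder, then restart/refresh ComfyUI so the node sees it:
# mv ~/Downloads/sdxl_vae.safetensors ComfyUI/models/vae/
ls ComfyUI/models/vae
```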
THANK YOU!
I wasn't sure if I would learn anything new, but I liked your approach of teaching by making deliberate mistakes rather than just giving step-by-step instructions. Thanks.
Glad it was helpful! I found that process to help understand better what’s going on!
Very clear instructions, thank you.
Glad it was helpful!
awesome! very well explained.
Glad it was helpful!
@EndangeredAI I like how you went step by step, even making mistakes and coming back to fix them in a logical manner. That's how I like to learn :)
I'm just starting and a bit overwhelmed. First I had a lot of issues with installing, but that's my own fault (it's never a good idea to just throw an installation together from different installation methods, I learned :D). Now that I've got that figured out, it's time to learn. I've checked numerous videos. A lot of folks try to do a cash grab; the real content is on their Patreon. That's their prerogative of course, but I just want to learn an OPEN SOURCE piece of software for FREE :D Your channel is great and I am sure I will learn a lot. Keep it up!
Thanks so much for your comment! It’s heartwarming to see how many people this video has helped 😁
excellent, new sup cheers and good luck on your channel pal
Thank you! And thanks for your support!
You explain very well. I subscribed.
Thank you! I very much appreciate that!
Very good video. The try, get the error, and improve approach is an excellent way of learning. Most videos just show the end result.
Thanks! I find it’s helpful to just show my mistakes
Nice tutorial!👍
Thanks!
Very good tutorial...I'll be watching more of your content. One question why does the final image have speckles on it?
Let me check, but most likely it’s leftover noise. I’m replying on my phone. I’ll check the video and image later when I get a chance.
Awesome
What are you typing, clicking, etc. in order to get the popup that lets you type in a word? An example is at the 15:00 mark when you somehow bring up 'Search' to find the advanced KSampler. How did you pop that window up?
Double click :)
Hi, can you upload a CSV format for the prompts in ComfyUI?
What are the CLIPTextEncodeSDXL nodes? They seem to have the positive and negative merged; there's only one conditioning output.
I cover this in the next video, part 2 :)
But why do I need the refiner if I can just up the steps in the first KSampler?
Valid question! The refiner (as the name implies) works on bringing out details from the image, to end up with a crisper, more refined result.
Essentially the base model gives you an outline, and the refiner goes in to place the details. Try it out! I’ll make a post about it here too comparing the two
There's not a speed difference between automatic and comfyui when using a basic generation work flow right?
I prefer using comfyui, but I didn't notice any speed difference between the two.
Some people do. I find if you’re doing the initial run it takes longer as everything has to load. But once you’re only making small changes it’s much faster
@@EndangeredAI As in, once everything is loaded, for the same tasks, you find comfyui to be faster?
I think I do, yeah
Any plans to cover controlnet?
Next video in fact! I’m writing the script as I type this!
Why am I getting two images on one page when I give prompts in ComfyUI?
Really? Two images? Are you using the workflow in the provided image? I'd love it if you could share your output with me; I'm curious as to what's happening.
Kindly create a video about how to generate consistent characters using ComfyUI.
I'd appreciate it 🥺
It's in the pipeline! Please subscribe and click the bell icon! I'm planning to make the video after next on ControlNet, and from there I'll introduce consistent characters. Hopefully within the month I'll have it!
You referenced a link "in the description" for a workflow. No link. Please provide! Thanks.
So sorry! I’ll get it up asap!
That's a good tutorial, but you clearly have misinterpreted what steps, start_step and end_step do.
Steps: the overall number of steps that BOTH nodes are expected to perform in total.
Start step: the node assumes that it starts its work at this stage of the denoise.
End step: a hard cap which tells the node to stop, EVEN THOUGH its work is unfinished.
So, to perform a generation with 42 steps in total, where the first 35 steps are done with the base model and the last 7 are done with the refiner, here are the values:
BASE:
steps: 42
start step: 0
end step: 35
REFINER:
steps: 42 (yes, the same value)
start step: 35
end step: anything higher than the total step count (the default 10k works fine)
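The split described above can be sketched in a few lines of Python. This is a minimal illustration of the start/end-step logic, not actual ComfyUI code, and the function name is made up:

```python
# Minimal sketch of how two chained KSampler (Advanced) nodes share one
# schedule: both are given the same total `steps`, and start/end step
# select which slice of that schedule each node actually denoises.
def sampler_slice(steps, start_at_step, end_at_step):
    """Return the step indices this sampler node actually runs."""
    return list(range(start_at_step, min(end_at_step, steps)))

total = 42
base = sampler_slice(total, 0, 35)         # base model denoises steps 0..34
refiner = sampler_slice(total, 35, 10000)  # refiner finishes steps 35..41

print(len(base), len(refiner))  # 35 7
```

The high default end step on the refiner is harmless because the slice is capped at the total step count anyway.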
You're absolutely right. However, I actually explain this more clearly in my KSampler Explained video. The purpose of this video is to very simply allow people to get working outputs with a basic understanding.
I'm making these videos to mirror my learning process, and at one point I didn't yet understand it the way you explained it.
There's a lot to absorb when working with Comfy, and I'm trying to strike the balance between providing enough understanding and simplifying other concepts to get people to a working output.
I do think you make a valid point, and I'll do better to try and include a disclaimer on this somehow.
Please make a video on pdinet. Thanks for the value!
Thanks!
5:03: your link didn't happen; it replaced the positive one instead.
Let me check it
Endangered? already?
Well, there is only one of me... hence I'm endangered :O
Did you forget to put in the link, bro? :D
So sorry! I’ll get it up asap!
No worries, just wanted to let you know @EndangeredAI
Recommended base/refiner ratio is 80/20. You've done some things wrong with your step settings which makes me doubt that your guide will be very helpful for more advanced topics.
Thanks for that, would you mind telling me more about what I'm doing wrong? I'm making these guides as I'm learning, to address the challenges I've had in getting a complete and well-explained guide on ComfyUI & SDXL, so any further input on what I'm doing wrong is extremely helpful.
I think he's referring to the step count in the refiner. Originally 16 -> 40 with steps 20; you later corrected it to steps 24 (19:42) without explaining it.
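For concreteness, the 80/20 guideline mentioned above just means setting the base sampler's end step at roughly 80% of the total step count. A quick sketch (a rule of thumb, not ComfyUI code; the function is made up for illustration):

```python
# Apply an 80/20 base/refiner split to a chosen total step count.
def split_steps(total, base_fraction=0.8):
    base_end = round(total * base_fraction)
    return base_end  # base runs steps 0..base_end, refiner runs base_end..total

print(split_steps(40))  # 32 -> base: 0-32, refiner: 32-40
print(split_steps(25))  # 20 -> base: 0-20, refiner: 20-25
```

The exact fraction is a starting point to experiment from, not a hard rule.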
Comfy UI K sampler Explained | How AI Image generation works | Simple explanation
czcams.com/video/RVwIz63bxN4/video.html
So I explain more thoroughly what happened in this video :)
@EndangeredAI Oops, I noticed that later. I wrote the comment right after I watched the part 1 video because it confused me.
Load VAE says "undefined". Nothing happens when I click it. Your tutorial is dead in the water at this point. You should explain what to do if something like this occurs.
Hey Scott! Shoot me an email (it's on my About page) with a screenshot and I'll try and help you out. I've honestly not encountered this before.
@EndangeredAI Same issue here. Were you able to get it sorted?
Awesome
Thanks!
Awesome
Thank you! 🙏