SDXL - BEST Build + Upscaler + Steps Guide
- Published Jul 9, 2023
- Here is the best way to get amazing results with the SDXL 0.9 model: how to use the prompts for Refiner, Base, and General with the new SDXL model, which step ratio is best for SDXL Base and SDXL Refiner rendering, and how to get the most out of your SDXL images with a 2-step upscale to 4096 at ultra high quality. Plus a special SDXL ComfyUI build that you can download and use right now for free!
What to watch next:
ComfyUI install guide: • ComfyUI - Node Based S...
ComfyUI Best Builds: • LATENT Tricks - Amazin...
#### Join and Support me ####
Buy me a Coffee: www.buymeacoffee.com/oliviotu...
Join my Facebook Group: / theairevolution
Join my Discord Group: / discord
#### Links from my Video ####
My SDXL ComfyUI Build: drive.google.com/file/d/1Y5jiWd3G3VNixNGPOD9QDjJ8B-FGOcuk/view?usp=sharing
(Drag this image into your ComfyUI Canvas to load the nodes!!!)
Stable Diffusion Discord: discord.gg/stablediffusion
SDXL Base Model huggingface.co/stabilityai/stable-diffusion-xl-base-0.9
SDXL Refiner Model huggingface.co/stabilityai/stable-diffusion-xl-refiner-0.9
1x Skin Detailer Upscaler (load pth and yaml into same folder) drive.google.com/drive/folders/1VkT6tpbCPn2gKZYPtawDJGMpLg6EyRpO
4x NMKD Upscaler huggingface.co/gemasai/4x_NMKD-Siax_200k/tree/main
4x Ultrasharp Upscaler mega.nz/folder/qZRBmaIY#nIG8KyWFcGNTuMX_XNbJ_g
Upscaler Database (Check HERE if other links are dead!!!!) upscale.wiki/wiki/Model_Database
ComfyUI Download: github.com/comfyanonymous/ComfyUI
I wonder if you could use a 1.5 model for the input and the SD_XL_refiner to refine them?
So your SDXL ComfyUI build is an output image of the guy in the forest? Could you help me understand how it works?
P/s: just finished the video, I get it. Thanks Olivio! You're the best!
You didn't share the ComfyUI build; the link is taking us to an image @Olivio Sarikas
@@elbistv Drag that image to ComfyUI, it will load everything.
Other upscaler files and such go into the "models\upscale_models" folder.
Great video! ComfyUI is fantastic. I have a question: how can I add face fixing, or some extensions?
Deep fried is a specific visual style. It comes from deep fried memes, where an image is shared so much on social media that it gets compressed multiple times, creating a pixelated, over-sharpened look. Having it in the negative prompt prevents low-quality renders.
really? lol, that's cool. never heard of that
I don't mean any offense by this, but sometimes when you say "CFG scale" I hear "sea of cheese" and it makes me chuckle.
you're right
Zat's cherman accent for ya 😂 (German)
Brilliant! This is what I’m calling it from now on
A Sensible chuckle, i hope
Amazing rundown on comfy and SDXL! Wanted to say thank you for expanding the flow of knowledge. Keep killing it🙌🏾🙌🏾🙌🏾
Thank you very much :)
Wow, amazing work. I was a little bit disappointed when I noticed that the same prompt from the official website returned pictures that were miles better. I can't wait to download these files and see how I can improve my images with the additional quality prompts.
Thank you my friend for the kindness of sharing so much knowledge, in such an easy way
looking great! How do you add Loras to this workflow?
Thanks, another great video. Never tried ComfyUI, really should :)
Very good video,.... and thank you for your "easy english" vocabulary -> Although your explanations are very precise, they remain totally comprehensible for those with a very average level of English.
Hi Olivio! Awesome tutorial thank you so much for holding my hand through the whole process. Your tutorials are always clear and thorough! ❤
I watched your LoRA training video, but my dataset is too large for my hardware to train within a reasonable timeframe. Can you please, please, please make a tutorial for training LoRAs using a cloud GPU like Colab, preferably focusing on training a concept with thousands of training images? Thanks again, friend!🎉
Always enjoy your videos and learn a lot of useful info. I saw in this video that you were putting the 4x upscaler into the comfyUI folder structure - I wondered whether you were aware of the extra_model_paths.yaml file in the root of the ComfyUI folder structure. This allows you to use your existing models from an A1111 install within ComfyUI by just putting the path to your A1111 installation into this yaml file - then ComfyUI adds the paths to its local paths when ComfyUI loads. This means you can keep putting all of your various models into a single folder structure but use them from both A1111 and ComfyUI. Forgive me if you have already included this in one of your videos, but if not, it is a handy thing to know and you could pass it on in an upcoming video.
thanx for such good teaching!
Thank U for this video 🙏
Thanks a lot, might try to set it up...the nodes seem to be a bit hard to use, but its probably just a matter of spending some time to try how it works! 😊
You explained that really well 👍🏼 sub’d
Thanks, this is awesome.
Appreciate the shoutout but even better was seeing my image of an eye as the thumbnail :)
This is great! Thank you for this. I found it very easy to follow your instructions and to set up the workflow in ComfyUI - and this is my first time using ComfyUI! I ran into an interesting issue though. I notice that if I do 60 steps overall, with the refiner starting at 32 steps, it works. But if I move the refiner start up to step 40, for example, all I get is a black image from the output of the refiner. Any idea what could be causing that to happen?
Thank you so much! ComfyUI is amazing. It is soooo much faster than Automatic1111 and Vlad! Anything over 768x768 on either of those gives me an out of memory error. ComfyUI has never given me one! This thing is amazing.
Hi! How much vram do you have? Do you think I can run this in my 6gb rtx 2060?
@@funnerisawordYes you can, I've been trying on 1050 TI 4gb vram
Put --medvram in webui-user.bat, on the set COMMANDLINE_ARGS= line. (Automatic1111)
THANK YOU!!! 🫡
Awesome again
Hi Olivio. Recently Automatic1111 updated the way model training works, and I was following your tutorial but lost my place, as the UI has been totally revamped.
This is great 👍 would be nice to see how it makes a "rug made of souls" 😂
Sorry had to do this as I kept seeing this request by chat and had no idea what it meant 😅
thank you @OlivioSarikas, do you have any idea how to use the SDXL Styles?
Just looking at all those nodes makes me think of Houdini and Nuke and my head just wants to explode 🤯💢
It's noodle time!
Thanks.
Hope to learn more about how to add LoRAs and many other things, such as ControlNet, into ComfyUI. I know you already have a tutorial describing each, but I haven't seen a build (for SD 1.5 models) that works almost like Auto1111.
Can you do the upscaling in latent space as well? In A1111 I'm basically only using this method, as it upscales the image where it's created, and that always results in better quality than any upscaler working in pixel space.
How do I set the upscale size to something like 2x or 1.5x instead of 4x?
Is it possible to add roop in this setup for changing faces after generation?
After the refiner stage, encode the image back to SD 1.5, run it through your favorite 1.5 model again with a LoRA like "more details" (low denoise, 0.2 works well), then run it through your upscalers.
do you have a workflow I could see?
@@forgottenwisdoms It's just rerouting the finished SDXL output back through a normal 1.5 gen process.
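For anyone who wants to see roughly what that low-denoise detail pass looks like outside ComfyUI, here is a minimal sketch using the Hugging Face diffusers library. The model ID, file names, prompt, and step count are stand-ins for illustration, not settings from the video or this comment:

```python
# Minimal sketch: take a finished SDXL image and give it one low-denoise
# img2img pass through an SD 1.5 checkpoint before upscaling.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # stand-in for "your favorite 1.5 model"
    torch_dtype=torch.float16,
).to("cuda")

# A detail LoRA could be merged here, e.g. pipe.load_lora_weights("path/to/detail_lora")

init_image = Image.open("sdxl_refined_output.png").convert("RGB")  # hypothetical file

result = pipe(
    prompt="portrait of a man in a forest, highly detailed skin",
    image=init_image,
    strength=0.2,               # the low denoise value suggested in the comment
    num_inference_steps=30,
).images[0]

result.save("sdxl_plus_sd15_detail_pass.png")
```

The low strength keeps the SDXL composition intact while the 1.5 model only repaints fine texture, which is the same idea as the ComfyUI rerouting described above.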
Hello brother, what is the function of the "clip space" option in the menu on the right?
fantastic video as always. Thank you!
A couple of questions about ComfyUI:
Does it support ControlNet? How about inpaint/outpaint?
is it possible to only re-run the process of specific parts?
for example, if I generate an image with the base and refiner, and want to test 10 different upscalers. can I only re-run the upscaler and not generate a new image from scratch?
Yes
@@ebrahimchalhoub9313 Thank you. I'm going to switch from A1111 to ComfyUI since it looks a lot more flexible.
Have you found any limitations that force you back to A1111 yet? Any workflows that you think are not properly supported yet?
@@alborzdesign I would like to know this as well. ComfyUI just seems like the most economic choice. I have many extensions in Automatic that I use daily (ADetailer, multi-ControlNet, Latent Couple, Composable LoRA, Inpaint Anything). If I can use these in Comfy, then there is no reason for me to return to Automatic1111.
As far as I know, no. ComfyUI can be an upgrade to A1111: you have much finer control over what you do and can go very crazy with stuff. It is almost like programming SD, but in a visual way.
Is there a node I can connect to the subject prompt so that I can put a list of prompts in?
Thank you very much for the video. But I must say that (even though some comments say your workflow is suboptimal) the JSON file you gave us produces the same or even better results than this one. This one only has the advantage of an upscaler. Can you add the upscaler part to it please? Having 3 inputs is really weird...
you can simply copy my upscaler nodes from one to the other. Just open up comfyui in two different tabs and load the different builds and then select the nodes and copy them over
I downloaded a checkpoint model for SDXL and I can see it in the checkpoint list, but when I go to select it to use it, it doesn't get selected.
All SDXL-related checkpoints have the same problem: I see them, but I can't select them, so I can't use the SDXL checkpoint. How do I fix it? It's the same on both my laptop and my PC at home. Thanks.
Hi I have 2 questions:
1- My ComfyUI doesn't have some of these nodes... I installed it yesterday
2- Where do I download the refiner and upscalers from?
Can you tell me what the ascore in the CLIP conditioning nodes does and why you set different values for Pos and Neg? I like to take the output from SDXL, upscale, and then resample using a SD 1.5 model with low noise settings - I'm using one of the iterative WAS nodes for this. That way I can also use the LoRAs etc. that are already available. So it's kind of like img2img, starting with SDXL and finishing with SD 1.5. I tested about 20 upscaler models and also liked the 1xSkinLite, but several other models are nice too. As you note, each one alters the images - so you can just use the 1xSkin and then do a bicubic or Lanczos upscale afterwards before resampling, for possibly better results. There's no real all-in-one upscaler model though.
After some hours of testing, this node does absolutely nothing. It's not in the official workflow, by the way. The other similar node made for the base model has some importance though. I've found that a width and a height of 1024 is better than 4096, but a target_height and target_width of 4096 seem better than 1024.
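For context, the ascore corresponds to the aesthetic-score conditioning the SDXL refiner was trained with: a high value is normally attached to the positive prompt and a low value to the negative prompt. The diffusers refiner pipeline exposes roughly the same knobs; here is a hypothetical call for illustration (the model ID, file names, and values are assumptions, not the settings from the video):

```python
# Hypothetical sketch of the SDXL refiner's micro-conditioning in diffusers.
# ascore in ComfyUI maps to aesthetic_score / negative_aesthetic_score here.
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from PIL import Image

refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-0.9",  # assumed model id (weights are gated)
    torch_dtype=torch.float16, variant="fp16",
).to("cuda")

image = Image.open("base_output.png").convert("RGB")  # hypothetical base-model output

refined = refiner(
    prompt="portrait, highly detailed",
    negative_prompt="deep fried, blurry, low quality",
    image=image,
    strength=0.25,
    aesthetic_score=6.0,            # high score on the positive conditioning
    negative_aesthetic_score=2.5,   # low score on the negative conditioning
    target_size=(1024, 1024),       # size conditioning, akin to target_width/height
).images[0]

refined.save("refined.png")
```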
The first image is generating, but after that I get an error that says "with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: InvalidHeaderDeserialization." Anybody know what that means?
Never mind - figured it out. One of my upscale models was corrupted.
Is there a ComfyUI SDXL Colab notebook?
How can I get the "SDXL Refiner Model" then? I see lots of "downloads" there, but no download link or button exists anywhere...
Is there a way to only upscale 2 times with this JSON?
Let me know if you know of a plugin to change the node cables from curves to perpendicular lines? Thank you :D
I never managed to drag and drop, it doesn't work for some reason and I can't find a solution anywhere. Does anyone have any idea?
The image in the link (My SDXL ComfyUI Build) is a man's face. I can't seem to find the workflow to drag in. Help please :)
hi! what kind of GPU do you have?
Hi olivio,I did everything you've told,but I get just weird shapes
Hey does anyone know how to update xformers in ComfyUI?
Hi Olivio, I tried loading the flow into my ComfyUI, but it did not recognize the two nodes "CLIPTextEncodeSDXL" and "CLIPTextEncodeSDXLRefiner". Any tips on how to get these node types?
You probably need to update your ComfyUI. I think running update.py in the update folder should do the trick.
What a PITA. The pain of early adoption. I think I'll wait for this to be less 'complicated.' But thanks for this video!
Hmm i still using Leonardo.. 😢😢
Tried out the updated node setup... but your old "wrong" setup gives me more of the results I want. It may not be the intentional way to use it, but it's far more successful at generating the targeted result, at least for me at the moment =)
It's ok to use the other setup. Whatever gets you the results you want is the right way for you.
@@OlivioSarikas I'm kind of glad that Auto1111 is so slow with the implementation. Otherwise I would have never tried ComfyUI and its possibilities to use the same model in different ways. I'm feeling kinda like Bob Ross, using a spatula instead of a pencil to make my paintings =D
So... I'm curious now how A1111 is going to completely rework its UI for the double-model layout...?
By updating and breaking itself for days/weeks like usual. This is why a lot of people are not bothering updating it anymore.
Incoming all the comments of "I will just wait for A1111"
Thats me...
What is A1111
Alternatively, "I will just wait for the SDXL 1.0 model"
I‘m using Vlad with SDXL right now. 😜😅
What is the minimum requirement to run this locally ?
The "MY SDXL Comfui Build" link goes to a photo, a very nice photo but I was expecting a JSON file perhaps? Or am I misunderstanding?
me too
watch my video ;) You need to drag it into the canvas to load the build
@@OlivioSarikas Oh OK, that's weird that works.:-) Ta.
What happened to Automatic1111??? I'm catching up on AI art generation 2 months later, and it feels like a lot has changed quickly. Can you please make an update video, Olivio?
A1111 is still very much present - with oobabooga_a1111 now even in the LLM department. It just so happens that the two events of 1) a leaked Stable Diffusion XL (which is only running on ComfyUI and Vlad (an A1111 fork) for now) coincided with 2) your return to the community. But you should definitely update your SD webui :)
I just want to know how to run the refiner without my PC going crazy, slow, and glitchy... GTX 2060 Super, 8 GB. Any config or tricks please?
in A1111 use the SD upscale in scripts, because that will render it in smaller tiles
How much VRAM does it need?
The SDXL model can't be used on A1111?
Not this version...but this is not the full final release version of SDXL. That releases next week.
@olivio kudos on promoting Comfy; it's definitely the power-user's tool :)
So SDXL is a better version of Automatic1111 for better images? I don't understand what it is X)
SDXL is the latest update of the Stable Diffusion model itself that actually creates images from text. The expectation is that the images generated by SDXL will improve upon those that are currently produced by Stable Diffusion 1 and 2. Automatic1111 is a web app that allows you to easily use the Stable Diffusion models and gives you the controls you need to make the settings that determine how the model works - A1111 doesn't generate the images, the Stable Diffusion model that you select in A1111 does that. Unfortunately, SDXL works differently from the previous Stable Diffusion models, in that SDXL generates an initial result in the Base model and then passes this to a second model called the Refiner, which adds more details to the image created by the Base. As this 2 part generation is different from the previous versions, A1111 is not currently set up to handle this process and so we are having to use other methods to use SDXL - such as ComfyUI - this is because you are able to manufacture a method within ComfyUI that can handle the 2 step process, as shown by Olivio in this video. I am sure that A1111 will be updated at some point to handle SDXL, but for now we have to use the alternatives. Hope this makes sense and answers your question.
@@dcpuzzles2990 Yes, thank you ! I hope SDXL will be available to automatic1111 one day :3
@@dcpuzzles2990
There are actually some SDXL models that don't need a refiner - ProtoVision, for example, I think.
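For anyone who wants to see the base-to-refiner handoff described above outside ComfyUI, here is a minimal sketch using the Hugging Face diffusers library. The model IDs, prompt, step count, and the 0.8 switch-over point are assumptions for illustration, not the exact settings from the video:

```python
# Minimal sketch: the SDXL base model handles the first ~80% of the denoising
# steps, then the refiner finishes the remaining ~20% on the latent output.
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-0.9",      # assumed model id (weights are gated)
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")

refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-0.9",   # assumed model id
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")

prompt = "portrait of a man in a forest, ultra detailed"

# Base model: run 25 steps, but stop denoising at 80% and hand over the latents.
latents = base(
    prompt=prompt,
    num_inference_steps=25,
    denoising_end=0.8,            # roughly "start the refiner at step 20 of 25"
    output_type="latent",
).images

# Refiner: picks up at the same 80% point and finishes the last steps.
image = refiner(
    prompt=prompt,
    num_inference_steps=25,
    denoising_start=0.8,
    image=latents,
).images[0]

image.save("sdxl_base_plus_refiner.png")
```

This is the same two-stage idea the ComfyUI build wires up with nodes; A1111 would need equivalent plumbing to support it.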
Ok, but how much better is it than SD 1.5? It's for sure better, but it would be great to see some SD battles side by side. ;)
It's better than base 1.5, but I don't believe it's anywhere near the quality of the custom models made FOR 1.5. If we come back in a couple of weeks, we'll have some amazing SDXL models.
If I'm not mistaken SDXL is a 1024 model, no? A lot of quality comes just from that too.
@@x1c3x Yes but you can already get to 1024x1024 easily in 1.5 with upscaling. Especially Ultimate SD Upscaler
@@ozerune It's not just about the final res; a base res of 1024 is high.
Specifically, when I prompt at 1024x1024 in normal SD, double heads and double everything are very common and need negative prompts. A base of 1024 does not.
Idk... I would prefer a simpler, more user-friendly interface like Automatic1111 for this.
Still, If I see ComfyUI, I see Chaos
Looking forward to when this becomes available for Automatic1111. Unfortunately, ComfyUI is just too complicated for me.
LOL
A1111 is too complicated too
@@mirek190 For me it's all the node based stuff with the hundreds of connections. Can't seem to wrap my head around it. Might be a generational thing though, not sure (older Millennial here).
@@adventuresinportland3032 i dont like comfyUI either
But is it possible to work in a way other than with the nodes? These lines bother me. Is there a way more like the usual SD interface?
you can also run it in vlad diffusion
I don't want Comfy... as long as A1111 is not updated, I won't switch.
SDXL is not worth it if you're a casual user (yet). It's slow even on a 4080. Literally 5 minutes for the image above, and it ended up with 4 arms.
How did you add a text input to the CLIPTextEncodeSDXL node? I can't wrap my head around it; I'm trying to do CLIP area composition with no luck. Thanks for the video.
If you right click on any of the nodes with parameters the right click menu gives you the ability to change any of those parameters into inputs - I struggled with this to start with. Hope this helps.
There's zero point in using the base model right now vs SD 1.5.
Trash all those upscalers. Connect the resulting image to the SwinIR upscaler and run the upscaled result through the SkinDiffDetail 1x upscaler.
This is macrame.
almost, yes 😂
@OlivioSarikas, it seems that your goal of posting a video every day is not benefiting you. This particular video exemplifies a focus on quantity over quality, as it lacks any meaningful content. It appears that you are simply reading comments from the ComfyUI nods, and it's doubtful whether you prepared for this or found inspiration from the Discord community. Please understand that my intention is not to offend you, but rather to provide an observation based on your channel over the past few months.
Olivio, I luv ya but you are really >>>TERRIBLE
Bro, I even put the calculation into the video: 25 - 20% = 20. If you are as good at math as you think, you should probably know that a minus percentage is always lower than a plus percentage, because the total that the percentage is applied to is larger... duh
well those examples at the beginning aint amazing at all tbo))
only the best for you, my friend
Don't forget that a) this isn't a final release build and b) it's the bog-standard base model. You need to base your quality comparison with it to what the base SD1.5 model can produce. (v1-5-pruned-emaonly.ckpt or w/e it's called)
@@vallejomach6721 yes yes, i will patiently wait until that shiet drops))
You should try out the upscaler "4x_RealisticRescaler_100000_G"
It doesn't load the upscaler nodes in my UI, just red boxes. How do I get them? I'm using a Mac M2 Ultra.