bye midjourney! SDXL 1.0 - How to install Stable Diffusion XL 1.0 (Automatic1111 & ComfyUI Tutorial)
- uploaded 25. 07. 2023
- SDXL 1.0 - Stable Diffusion XL 1.0 is here. Learn how to download and install Stable Diffusion XL 1.0 for free, in both Automatic1111 and ComfyUI. This tutorial will show you how to install Stable Diffusion, download the SDXL models for free, and use them. How does Stable Diffusion XL 1.0, released by Stability AI, compare to Midjourney? Is SDXL better than Midjourney? Say bye to Midjourney and download Stable Diffusion for free in 2023.
SDXL 1.0, which supersedes SDXL 0.9, is the latest Stable Diffusion model. You can download it and install it locally.
After downloading SDXL, learn how to use it by installing both Automatic1111 and ComfyUI. SDXL 1.0, following SDXL 0.9 (Stable Diffusion XL 0.9), is the top-of-the-line free art-generation AI released by Stability AI, and a free alternative to Midjourney.
Although the SDXL 1.0 refiner is not fully supported in Automatic1111 yet, you can use the refiner in ComfyUI. The Stable Diffusion XL refiner model adds more detail, such as better faces, hands and outfits, to SDXL 1.0 generations.
Latest artificial intelligence news: the biggest AI news this week is the release of SDXL 1.0 (Stable Diffusion XL 1.0), a free AI art generator that is just as good as, if not better than, Midjourney.
huggingface.co - Science & Technology
thanks for watching! might be live on twitch for debugging, questions and chat: www.twitch.tv/coderx
huggingface sdxl1.0 base model: huggingface.co/stabilityai/stable-diffusion-xl-base-1.0
huggingface sdxl1.0 refiner model: huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0
automatic1111: github.com/AUTOMATIC1111/stable-diffusion-webui
comfyui: github.com/comfyanonymous/ComfyUI
refiner.json (now updated to refiner_v1.0.json) by camenduru: github.com/camenduru/sdxl-colab/blob/main/refiner_v1.0.json
you should probably pin this
@@dynoko3295 I thought this was always pinned, this explains a lot of comments :(
Looks like they took down the SDXL model...
looks like they updated the base+refiner models; there were some issues with the VAE so they are probably (hopefully) better now
@@CoderXAI I do not see the tensor model in that link. Is it somewhere else?
Appreciate a guide that is not over-explained or under-explained. Was curious about comfy after finding that it seems like it avoids out-of-memory errors, while a1111 crashes with this model. I guess I shoulda got more vram.
Was able to install and run with both interfaces, thank you
Excited about 1.0. Going to try today.
Btw the background music is wonderful
And your explanation is clean, clear and to the point.
thank you, you're too kind! I had been planning this video for over 2 weeks, since SDXL 1.0 was supposed to launch on the 18th, so it feels good that it has been helpful to others :D
@@CoderXAI I agree, you have done a great job making this accessible and easy to understand. Thank you so much. May I know what the music is please?
@@CoderXAI hello, can I batch-modify 10 frames with img2img like we used to do in the old Stable Diffusion?
Just came from another video trying to sell a one click download LOL. Man, thanks for the quick and concise tutorial!!!
yep SEcourse's video was suggested right after this lmaoo
Great explanation and guide. Thank you.
Hey CoderX,
I have been trying to generate some ummm spicy images but I cant seem to. Im using ComfyUI coz i cant run Automatic1111 or Vladmandic.
Is it ComfyUI's problem?
Also I used absolute reality and its generating what i want but it is again censoring my images if i try to send it to refiner.
Can you please help me?
Thanking you,
Yours faithfully,
Harold
Clipdrop correction: Stable Diffusion XL gives free users 400 watermarked images per day, not per month.
Very helpful. Subbed.
much helpful. Ty CoderX
When using the refiner, do both models occupy VRAM simultaneously, or does the base unload to offer more space to the refiner?
Base unloads and then the refiner loads in ;)
How do I do the options that were in Automatic1111 inside ComfyUI, like inpainting, img2img, etc.?
i keep getting this message "Creating model from config: C:\Users\Admin\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Failed to create model quickly; will retry using slow method." do you know why?
XL models won't load on my A1111 UI; it's not the GPU and I've tried reinstalling, updating, etc.
where do you put the refiner file in the 1111 webui folder?
don't know why/how, but my ComfyUI works way slower than Auto1111; does Comfy need more VRAM to generate images or something like that?
I have this error: Stable diffusion model failed to load
Loading weights [31e35c80fc] from C:\Users\Documents\Stable Diffusion\Webui2\webui\models\Stable-diffusion\sd_xl_base_1.0.safetensors
I can not get the refiner to work... I keep getting ERR reconnecting... With both 0.9 and 1.0. I have tried to update but that didn't fix the issue.
when installing via run.bat I ran out of space and had to remove folders to free up disk space. Now I've finished the installation, but when I launch SD it says "error" when I press the generate button :( solutions??
what folder does the refiner go into?
It was very useful, thank you.
This video was very helpful. Thank you so much.
thx a lot, get well soon!
ComfyUI works much better for me. With the same prompts it took Automatic1111 almost 30 minutes to generate a 1024x1024 image, but took only 8 seconds in ComfyUI !
Was your image gen sped up? I have a 3060 12gb vram and it takes about a minute for me with base+refiner.
same graphics card, around similar speed (~40-50s)
I wish there were a full explanation of how to install it from the very beginning, with that other app you had before.
(EDIT: I can confirm this issue is fixed with the new update of automatic 1111)
I followed your previous tutorial and every time I launched Automatic1111 it would redownload all the pytorch.bin files every single time and take like 10 minutes to launch the web UI. I really hope that doesn't happen this time, but is there a way to prevent this from happening?
I'm still downloading the files so this issue might be fixed, but I'm yet to see. I'll update you
Noob question. Why do you make a new installation of automatic1111 from a previous build instead of simply adding the SDXL model to the automatic1111 that you were already using?
if you already have an existing Automatic1111 install, you can update it using the update.bat script or manually; you don't need a fresh install. You do need to update, since A1111 only recently added SDXL support and older versions won't work with it
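For reference, a manual update of an existing Automatic1111 install is just a git pull in the webui folder. This is a sketch that assumes the install was originally cloned with git; if it wasn't, use the bundled update.bat instead:

```shell
# update an existing Automatic1111 install to a release with SDXL support
# (assumes the folder was created with "git clone"; otherwise run update.bat)
cd stable-diffusion-webui
git pull
```

After pulling, relaunch with webui-user.bat as usual; the first start after an update may reinstall some Python dependencies.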
I am trying to use automatic1111 and sdxl-refiner-1.0 and have memory issue, is there a way to set it up to use cpu since most of my gpu memory is reserved by pytorch. This is the error I get, it loads up but can not run a prompt "Tried to allocate 64.00 MiB (GPU 0; 8.00 GiB total capacity; 7.20 GiB already allocated; 0 bytes free; 7.33 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation."
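Not a guaranteed fix, but before dropping to CPU it is worth trying A1111's low-VRAM flags together with the allocator setting that the error message itself suggests. A sketch of a webui-user.bat fragment; treat the 128 MB split size as a starting guess to tune, not a known-good value:

```shell
rem webui-user.bat (fragment) -- reduce VRAM pressure on an 8 GB card
rem --medvram trades speed for memory; --lowvram is more aggressive still
set COMMANDLINE_ARGS=--medvram
rem the fragmentation hint taken straight from the CUDA OOM message
set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
```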
I use A1111 with CPU only, and prompts that take 4-5 minutes with v1.5, now run 5 hours with SDXL (and just 512x512, have not tried higher resolutions yet!)
You can increase the memory as much as you want (physical is better but swap/virtual should be ok), but memory *is not* your real problem if you use only CPU for SDXL.
I'm using Vlad Diffusion; the SDXL model was loading, then stopped at 70 percent
Can you install it on the existing SD?
when I try to load the refiner json nothing happens. I downloaded the updated version btw
for those who have trouble with python dependencies, use this as a last resort to send all those dependencies to the void of darkness (fixed my torch, cuda, xformers etc lol)
for /F %P in ('py -3.10 -m pip freeze') do py -3.10 -m pip uninstall -y %P
(that's for an interactive cmd prompt; inside a .bat file, double the percent signs: %%P)
I don't really get why comfyui seems to generate images, with refiner, in like a minute on an 8gb card, but it takes like 6 minutes in A1111
I won't be using this; it takes like 5 minutes on my card to generate one 512x512 image. What the hell, I can make like 12 images like that on Rev Animated in 1 minute
anyone getting the error: RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
could be some sort of wrong torch error bug, here's a relevant link: github.com/AUTOMATIC1111/stable-diffusion-webui/issues/9402
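For what it's worth, the error text already names the workaround: put the flag into COMMANDLINE_ARGS in webui-user.bat. A fragment sketch, assuming a default install; note that if torch genuinely cannot see the GPU, generation falls back to CPU and is very slow, so this is a diagnostic step rather than a fix:

```shell
rem webui-user.bat (fragment) -- skip the startup CUDA check;
rem generation will run on CPU if the GPU really is unusable
set COMMANDLINE_ARGS=--skip-torch-cuda-test
```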
Been getting that on 1.4 update and even after a clean install of 1.5
i run into this problem when using refiner
i have 6gb vram, but there is an error while running update bat. How can i fix it? it says:
"return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 6.00 GiB total capacity; 4.06 GiB already allocated; 14.71 MiB free; 4.14 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Stable diffusion model failed to load
Applying attention optimization: Doggettx... done."
I have 8gb and same problem
Thanks! :)
Where do you put the Refiner Model in Auto1111?
same place (path) as the base model: the models\Stable-diffusion folder inside the webui folder.
I only downloaded the refiner and not the base; should I be downloading both? Is there documentation or a tutorial on how to use ComfyUI? Thanks
Thank you!
@CoderXAI Great video, well done! And can we use ComfyUI and Automatic1111 on the same PC without problems?
yep, if one works the other should run as well. You'll run out of graphics card memory if you run both at the same time, though! Also, ComfyUI is currently faster for most people
@@CoderXAI tks for the infos
Any advice for getting this working with AMD GPU?
I’d like to know as well
I have 2070 RTX and 16GB of ram but I keep getting OutOfMemoryError: CUDA out of memory. I have xformers installed and I turned Token Merging ratio up but I still get the error. Any idea how I can resolve this. Using Automatic1111
2080 super here, same
I have a 2070 too; XL and A1111 don't work, but Comfy works fine
Does Deforum support it?
Where did you get the refiner.json from?
yeh, this was definitely unclear
Thanks to your instructions I got it running, but so far results are disappointing (havent tried the Refiner yet). It feels like starting all over again...
Right, 0.9 was better in generating better quality images
What do you think about Happy Diffusion
What do YOU think about Happy Diffusion?
so it's basically a checkpoint, I suppose?
is it impossible to use the refiner in Automatic1111?
You can use img2img and select the refiner model.
hi, when I upload the model, give a prompt and generate, the webui doesnt even move. Am I doing something wrong?
does it say anything on the terminal? it'll either throw an error or show what it's loading/doing
@@CoderXAI That is the thing, no error message. It just stays at the text that comes right before the image starts rendering, and stays there forever.
can we use all Automatic1111 ckpts in ComfyUI??
You ever find out?
What's the difference between the refiner and the base model, please?
base model is what generates the image/the main model
refiner is an additional model that takes the generated image and adds more details to it(so kind of optional)
@@CoderXAI Thank you for your explanation 🐬
No links in the description. (looks for another tutorial)
it's the pinned comment, have some restrictions on adding links to the description for now
@@CoderXAI Oh ok, thank you for letting me know.
Pfff, bye Midjourney? I don't think so. Everyone's calling it an MJ killer. It's not. Both have pros and cons. MJ still usually looks better. SDXL is an improvement, sure, and you have more control and can do NSFW, but it still can't compare to how many MJ images look.
Something's definitely amiss with the Clipdrop version, which you would think would be setting a good example of how good SDXL is supposed to be. At present, it can't even render a spoon on a white background, in fact, nothing with a white background. MJ can do objects with white backgrounds in its sleep. According to SDXL, a spoon is a DSLR camera, a dessert spoon is a dessert with a camera in the middle of it. I also added 'vector style' yesterday and SDXL wouldn't render anything. MJ does all that with no problems.
Try to generate Xi Jinping in Midjourney (you can't)
can i use my SD1.5 lora on SDXL ?
Nope
👋
Can SDXL 1.0 make NSFW models or images just like SD 1.5 ?
I do not think so. Probably it needs a lot of finetuning like 1.5 did.
It isn't censored, like 2.1 is, so yes you can do NSFW but I don't know how well they'll compare to 1.5. Loras etc will help with that too once they start releasing.
@@Elwaves2925 Uncensored doesnt mean trained. You cant do the same nsfw stuff as 1.5. Some nudes and thats it.
@@danielhernanalonso7219 Obviously they don't mean the same thing but I also don't see the OP asking anything about training. Their comment is vague and can be read multiple ways, we both read it differently. Seeing as 2.1 was heavily censored, it made sense they were asking about that. 🙂
@@Elwaves2925 thats true. I guess the answer is "yes, it can", but at the same time it can mislead him because he cant do the same nsfw stuff right now.
refiner by camenduru > 404 - page not found
thanks for letting me know. they've updated the file to refiner_v1.0.json and I've updated the link in my comment as well
@@CoderXAI 🫡
Is it gonna work on rtx 3050 laptop?
No. Too little vram. Or maybe you can run it on CPU only and then you can generate pictures, even though it would be slow.
all good except Keanu has only 4 fingers ....
"Load this refiner.json file"?
i've updated the link in the pinned comment to refiner_v1.0.json, please load that
ComfyUI is faster 😀
I'll never buy anything with AMD on it...
LULW, I think it might work on chonky AMD cards. Comfy has instructions for AMD+linux and auto1111 has some unofficial support but not sure if that works with SDXL as well.
github.com/comfyanonymous/ComfyUI#amd-gpus-linux-only
github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs
Indeed
I am using an RX 6800 on Linux; it works pretty well actually.
@@kano326 Does it support xformers?
@@Cutieplus of course not , xformers uses CUDA
I know this is old. Nothing happens when I load the json file.
In comfyui it took only 33 minutes using onboard intel uhd 630 graphics lol