bye midjourney! SDXL 1.0 - How to install Stable Diffusion XL 1.0 (Automatic1111 & ComfyUI Tutorial)

  • Published 25 Jul 2023
  • SDXL 1.0 - Stable Diffusion XL 1.0 is here. Learn how to download and install Stable Diffusion XL 1.0 for free in both Automatic1111 and ComfyUI, and see how SDXL 1.0, released by Stability AI, compares to Midjourney AI.
    SDXL 1.0 supersedes SDXL 0.9 and is the latest Stable Diffusion model. You can install it locally: this tutorial walks through the SDXL download and install, then shows how to use SDXL in both the Automatic1111 web UI and ComfyUI.
    Although the SDXL 1.0 refiner is not yet properly supported in Automatic1111, you can use it in ComfyUI. The refiner model adds more detail, such as better faces, hands and outfits, to SDXL 1.0 generations.
    The biggest AI news this week is the release of SDXL 1.0, a free AI art generator that is just as good as, if not better than, Midjourney AI, and a free alternative to it.
    huggingface.co
  • Science & Technology

Comments • 128

  • @CoderXAI
    @CoderXAI  10 months ago +24

    thanks for watching! might be live on twitch for debugging, questions and chat: www.twitch.tv/coderx
    huggingface sdxl1.0 base model: huggingface.co/stabilityai/stable-diffusion-xl-base-1.0
    huggingface sdxl1.0 refiner model: huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0
    automatic1111: github.com/AUTOMATIC1111/stable-diffusion-webui
    comfyui: github.com/comfyanonymous/ComfyUI
    refiner.json (now updated to refiner_v1.0.json) by camenduru: github.com/camenduru/sdxl-colab/blob/main/refiner_v1.0.json
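
    For anyone who would rather script the checkpoint downloads, here is a minimal Python sketch using the huggingface_hub package (assumes pip install huggingface_hub and that your webui lives at stable-diffusion-webui/; adjust the path to your own install):

    from pathlib import Path
    from huggingface_hub import hf_hub_download

    # Assumed default A1111 checkpoint folder; change this to your install path.
    models_dir = Path("stable-diffusion-webui/models/Stable-diffusion")
    models_dir.mkdir(parents=True, exist_ok=True)

    for repo_id, filename in [
        ("stabilityai/stable-diffusion-xl-base-1.0", "sd_xl_base_1.0.safetensors"),
        ("stabilityai/stable-diffusion-xl-refiner-1.0", "sd_xl_refiner_1.0.safetensors"),
    ]:
        # Each file is several GB; it is saved straight into the models folder.
        hf_hub_download(repo_id=repo_id, filename=filename, local_dir=models_dir)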

    • @dynoko3295
      @dynoko3295 10 months ago

      you should probably pin this

    • @CoderXAI
      @CoderXAI  10 months ago +1

      @@dynoko3295 I thought this was always pinned, this explains a lot of comments :(

    • @DT-GDS
      @DT-GDS 10 months ago

      Looks like they took down the SDXL model...

    • @CoderXAI
      @CoderXAI  10 months ago

      looks like they updated the base+refiner models, there were some issues with the VAE so they are probably (hopefully) better now

    • @DT-GDS
      @DT-GDS 10 months ago +1

      @@CoderXAI I do not see the tensor model in that link. Is it somewhere else?

  • @creedolala6918
    @creedolala6918 8 months ago +1

    Appreciate a guide that is not over-explained or under-explained. Was curious about comfy after finding that it seems like it avoids out-of-memory errors, while a1111 crashes with this model. I guess I shoulda got more vram.

  • @aiwithoutsecrets
    @aiwithoutsecrets 10 months ago +2

    Was able to install and run with both interfaces, thank you

  • @EuphoricDreamSequence
    @EuphoricDreamSequence 10 months ago +5

    Excited about 1.0. Going to try today.
    Btw the background music is wonderful
    And your explanation is clean, clear and to the point.

    • @CoderXAI
      @CoderXAI  10 months ago +1

      thank you, you're too kind! I had been planning this video for over 2 weeks, since SDXL 1.0 was supposed to launch on the 18th, so it feels good that it has been helpful to others :D

    • @DanVogt
      @DanVogt 10 months ago

      @@CoderXAI I agree, you have done a great job making this accessible and easy to understand. Thank you so much. May I know what the music is please?

    • @FirstLast-tx3yj
      @FirstLast-tx3yj 10 months ago

      @@CoderXAI Hello, can I batch-modify 10 frames with img2img like we used to do in the old Stable Diffusion?

  • @YEAHSURETHINGMAN
    @YEAHSURETHINGMAN 10 months ago +6

    Just came from another video trying to sell a one click download LOL. Man, thanks for the quick and concise tutorial!!!

    • @cl4911
      @cl4911 11 days ago

      yep SEcourse's video was suggested right after this lmaoo

  • @TeamLiftMedia
    @TeamLiftMedia 10 months ago +4

    Great explanation and guide. Thank you.

  • @haroldrandolf2981
    @haroldrandolf2981 8 months ago +1

    Hey CoderX,
    I have been trying to generate some ummm spicy images but I can't seem to. I'm using ComfyUI because I can't run Automatic1111 or Vladmandic.
    Is it ComfyUI's problem?
    Also, I used Absolute Reality and it generates what I want, but it censors my images again if I try to send them to the refiner.
    Can you please help me?
    Thanking you,
    Yours faithfully,
    Harold

  • @globalmodelsdotbiz8965
    @globalmodelsdotbiz8965 10 months ago +6

    Clipdrop correction: Stable Diffusion XL is 400 images per day (with watermark) for free users, not per month.

  • @CJ-ur3fx
    @CJ-ur3fx 10 months ago

    Very helpful. Subbed.

  • @sxs8905
    @sxs8905 20 days ago

    much helpful. Ty CoderX

  • @JSwanson547
    @JSwanson547 10 months ago +2

    When using the refiner, do both models occupy VRAM simultaneously, or does the base unload to offer more space to the refiner?

    • @HolidayAtHome
      @HolidayAtHome 10 months ago

      Base unloads and then the refiner loads in ;)
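
      If it helps to see that idea in code, here is a rough diffusers sketch (not what A1111 or ComfyUI do internally, just an illustration of loading the base, freeing it, then loading the refiner so only one model holds VRAM at a time; model names as in the pinned comment):

      import torch
      from diffusers import DiffusionPipeline

      # Stage 1: the base model generates the image.
      base = DiffusionPipeline.from_pretrained(
          "stabilityai/stable-diffusion-xl-base-1.0",
          torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
      ).to("cuda")
      image = base(prompt="a photo of an astronaut riding a horse").images[0]

      # Free the base model's VRAM before the refiner comes in.
      del base
      torch.cuda.empty_cache()

      # Stage 2: the refiner (an img2img pipeline) adds detail to the same image.
      refiner = DiffusionPipeline.from_pretrained(
          "stabilityai/stable-diffusion-xl-refiner-1.0",
          torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
      ).to("cuda")
      image = refiner(prompt="a photo of an astronaut riding a horse", image=image).images[0]
      image.save("refined.png")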

  • @thenextbigthing7268
    @thenextbigthing7268 10 months ago

    How do I do the options that were in Automatic1111 inside ComfyUI, like inpainting, img2img, etc.?

  • @wrillywonka1320
    @wrillywonka1320 10 months ago

    i keep getting this message "Creating model from config: C:\Users\Admin\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
    Failed to create model quickly; will retry using slow method." do you know why?

  • @Hugh_Mungus
    @Hugh_Mungus 9 months ago

    XL models won't load on my A1111 UI. It's not the GPU, and I've tried reinstalling, updating, etc.

  • @wasted828
    @wasted828 10 months ago +1

    where do you put the refiner file in the 1111 webui folder?

  • @rafa-lk6lf
    @rafa-lk6lf 2 months ago

    don't know why/how, but my ComfyUI works way slower than auto1111. Does Comfy need more VRAM to generate images or something like that?

  • @aurum.graphics
    @aurum.graphics 10 months ago

    I have this error: Stable diffusion model failed to load
    Loading weights [31e35c80fc] from C:\Users\Documents\Stable Diffusion\Webui2\webui\models\Stable-diffusion\sd_xl_base_1.0.safetensors

  • @Oismyurl
    @Oismyurl 5 months ago

    I can not get the refiner to work... I keep getting ERR reconnecting... With both 0.9 and 1.0. I have tried to update but that didn't fix the issue.

  • @aurum.graphics
    @aurum.graphics 10 months ago

    when installing via run.bat I ran out of space, so I had to remove folders to free up disk space. Now I've finished the installation, but when I launch SD it says "error" when I press the generate button :( solutions??

  • @christoff124
    @christoff124 9 months ago +1

    what folder does the refiner go into?

  • @chodon6868
    @chodon6868 1 month ago

    It was very useful, thank you.

  • @goombagrenade
    @goombagrenade 1 month ago

    This video was very helpful. Thank you so much.

  • @thebonuslvl7181
    @thebonuslvl7181 10 months ago

    thanks a lot, get well soon!

  • @rickysanchyz7083
    @rickysanchyz7083 9 months ago

    ComfyUI works much better for me. With the same prompts it took Automatic1111 almost 30 minutes to generate a 1024x1024 image, but took only 8 seconds in ComfyUI !

  • @ippibean
    @ippibean 10 months ago

    Was your image gen sped up? I have a 3060 12gb vram and it takes about a minute for me with base+refiner.

    • @CoderXAI
      @CoderXAI  10 months ago +1

      same graphics card, around similar speed (~40-50s)

  • @vancrash666
    @vancrash666 9 months ago

    I wish there were a full explanation of how to install it from the very beginning, with that other app that you had before.

  • @RomboDawg
    @RomboDawg 10 months ago +1

    (EDIT: I can confirm this issue is fixed with the new update of Automatic1111)
    I followed your previous tutorial, and every time I launched Automatic1111 it would redownload all the pytorch.bin files every single time and take like 10 minutes to launch the web UI. I really hope that doesn't happen this time, but is there a way to prevent this from happening?

    • @RomboDawg
      @RomboDawg 10 months ago

      I'm still downloading the files, so this issue might be fixed, but I'm yet to see. I'll update you

  • @danielhernanalonso7219
    @danielhernanalonso7219 10 months ago

    Noob question. Why do you make a new installation of automatic1111 from a previous build instead of simply adding the SDXL model to the automatic1111 that you were already using?

    • @CoderXAI
      @CoderXAI  10 months ago +1

      if you already have an existing Automatic1111 install, you can update it using the update.bat script or manually; you don't need to do a fresh install. you do need to update, though, since A1111 was recently updated to support SDXL and older versions won't work with it

  • @kevinehsani3358
    @kevinehsani3358 10 months ago

    I am trying to use automatic1111 and sdxl-refiner-1.0 and have a memory issue. Is there a way to set it up to use the CPU, since most of my GPU memory is reserved by PyTorch? This is the error I get; it loads up but cannot run a prompt: "Tried to allocate 64.00 MiB (GPU 0; 8.00 GiB total capacity; 7.20 GiB already allocated; 0 bytes free; 7.33 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation."

    • @hernanpulgar7237
      @hernanpulgar7237 10 months ago

      I use A1111 with CPU only, and prompts that take 4-5 minutes with v1.5, now run 5 hours with SDXL (and just 512x512, have not tried higher resolutions yet!)
      You can increase the memory as much as you want (physical is better but swap/virtual should be ok), but memory *is not* your real problem if you use only CPU for SDXL.
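
      A side note on the allocator hint in that error message: PYTORCH_CUDA_ALLOC_CONF has to be in the environment before PyTorch initialises CUDA, which for the webui means setting it before launch (e.g. in the shell, or for A1111 typically in webui-user.bat). A minimal standalone Python sketch, with 128 MiB as an example split size to tune:

      import os

      # Allocator hint from the error message; must be set before torch touches CUDA.
      os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

      import torch

      # Report what the GPU looks like from this process.
      props = torch.cuda.get_device_properties(0)
      print(f"{props.name}: {props.total_memory / 2**30:.1f} GiB total")
      print(f"reserved by this process: {torch.cuda.memory_reserved() / 2**20:.0f} MiB")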

  • @DARKNESSMANZ
    @DARKNESSMANZ 10 months ago

    I'm using Vlad Diffusion; the SDXL model was loading, then stopped at 70 percent

  • @foxyfox7627
    @foxyfox7627 10 months ago +1

    Can you install it on the existing SD?

  • @legioneelletregi1100
    @legioneelletregi1100 10 months ago

    when I try to load the refiner json nothing happens. I downloaded the updated version btw

  • @zikwin
    @zikwin 10 months ago

    for those who have trouble with Python dependencies, use this as a last resort to send all those dependencies to the void of darkness (fix my torch cuda xformers etc lol) - run it at a cmd prompt (use %%P instead of %P if you put it in a .bat file):
    for /F %P in ('py -3.10 -m pip freeze') do py -3.10 -m pip uninstall -y %P

  • @bladechild2449
    @bladechild2449 10 months ago +1

    I don't really get why ComfyUI seems to generate images with the refiner in like a minute on an 8GB card, but it takes like 6 minutes in A1111

    • @AMVNicRam
      @AMVNicRam 10 months ago

      I won't be using this; on my card it takes like 5 minutes to generate one 512x512 image. What the hell, I can make like 12 images like that with Rev Animated in 1 minute

  • @yasen_stoev
    @yasen_stoev 10 months ago +5

    anyone getting the error: RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check

    • @CoderXAI
      @CoderXAI  10 months ago +1

      could be some sort of wrong torch error bug, here's a relevant link: github.com/AUTOMATIC1111/stable-diffusion-webui/issues/9402

    • @RealXbot
      @RealXbot 10 months ago

      Been getting that on 1.4 update and even after a clean install of 1.5

    • @RoboMagician
      @RoboMagician 10 months ago

      i run into this problem when using refiner
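
      A quick way to narrow this down is to run a short check with the same Python environment the webui uses (for a default A1111 install that is assumed to be the venv it creates); if CUDA shows up as unavailable here, the installed torch build is the problem rather than the webui itself:

      import torch

      print("torch version:", torch.__version__)
      print("built with CUDA:", torch.version.cuda)   # None means a CPU-only torch wheel
      print("CUDA available:", torch.cuda.is_available())
      if torch.cuda.is_available():
          print("device:", torch.cuda.get_device_name(0))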

  • @effinballers2543
    @effinballers2543 10 months ago +1

    I have 6GB VRAM, but there is an error while running update.bat. How can I fix it? It says:
    "return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
    torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 6.00 GiB total capacity; 4.06 GiB already allocated; 14.71 MiB free; 4.14 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
    Stable diffusion model failed to load
    Applying attention optimization: Doggettx... done."

    • @larrog8413
      @larrog8413 10 months ago

      I have 8gb and same problem

  • @_Piers_
    @_Piers_ 10 months ago

    Thanks! :)

  • @scottgust9709
    @scottgust9709 10 months ago +4

    Where do you put the Refiner Model in Auto1111?

    • @tal7atal7a66
      @tal7atal7a66 10 months ago

      same place (path) as the base model.
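
      In other words, both safetensors files sit next to each other in the webui's checkpoint folder. A tiny Python sketch to sanity-check that layout (the webui root path is an assumption; adjust it to your install):

      from pathlib import Path

      # Assumed default layout: <webui root>/models/Stable-diffusion/
      ckpt_dir = Path("stable-diffusion-webui/models/Stable-diffusion")

      for name in ("sd_xl_base_1.0.safetensors", "sd_xl_refiner_1.0.safetensors"):
          status = "found" if (ckpt_dir / name).exists() else "missing"
          print(f"{name}: {status}")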

  • @kevinehsani3358
    @kevinehsani3358 10 months ago

    I only downloaded the refiner and not the base; should I be downloading both? Is there documentation or a tutorial on how to use ComfyUI? Thanks

  • @throow
    @throow 9 months ago

    Thank you!

  • @mattfx
    @mattfx 10 months ago

    @CoderXAI Great video, well done! And can we use ComfyUI and Automatic1111 on the same PC without problems?

    • @CoderXAI
      @CoderXAI  10 months ago +1

      yep, if one works the other should run as well. you'll run out of graphics card memory if you run both at the same time though! also ComfyUI is currently faster for most people

    • @mattfx
      @mattfx 10 months ago

      @@CoderXAI tks for the infos

  • @chazanderson4532
    @chazanderson4532 10 months ago +4

    Any advice for getting this working with AMD GPU?

    • @Dragon211
      @Dragon211 10 months ago

      I’d like to know as well

  • @mistertitanic33
    @mistertitanic33 10 months ago

    I have 2070 RTX and 16GB of ram but I keep getting OutOfMemoryError: CUDA out of memory. I have xformers installed and I turned Token Merging ratio up but I still get the error. Any idea how I can resolve this. Using Automatic1111

    • @AioWey
      @AioWey 10 months ago

      2080 super here, same

    • @ryand-rs7pu
      @ryand-rs7pu 10 months ago

      I have a 2070 too; XL and A1111 don't work, but Comfy works fine

  • @ParvathyKapoor
    @ParvathyKapoor 10 months ago

    Does Deforum support it?

  • @Sergatx
    @Sergatx 10 months ago +4

    Where did you get the refiner.json from?

  • @feflechi
    @feflechi 10 months ago +1

    Thanks to your instructions I got it running, but so far the results are disappointing (haven't tried the refiner yet). It feels like starting all over again...

    • @TunesofIndia037
      @TunesofIndia037 10 months ago

      Right, 0.9 was better at generating quality images

  • @cy7
    @cy7 9 months ago +1

    What do you think about Happy Diffusion

    • @sirtimatbob
      @sirtimatbob 5 months ago

      What do YOU think about Happy Diffusion?

  • @GambarAsliatauAI
    @GambarAsliatauAI 10 months ago

    so it's basically a checkpoint, I suppose?

  • @Im_that_guy_man
    @Im_that_guy_man 10 months ago +1

    is it impossible to use the refiner in Automatic1111?

    • @Cutieplus
      @Cutieplus 10 months ago +2

      You can use img2img and select the refiner model.

  • @ZenBenzineX
    @ZenBenzineX 10 months ago

    hi, when I load the model, give a prompt and generate, the webui doesn't even move. Am I doing something wrong?

    • @CoderXAI
      @CoderXAI  10 months ago

      does it say anything on the terminal? it'll either throw an error or show what it's loading/doing

    • @ZenBenzineX
      @ZenBenzineX 10 months ago

      @@CoderXAI That is the thing, no error message. It just sits at the text that comes right before the image starts rendering, and stays there forever.

  • @iloveshibainu9003
    @iloveshibainu9003 10 months ago +1

    can we use all Automatic1111 ckpts in ComfyUI??

  • @palomaetienne
    @palomaetienne 10 months ago

    What's the difference between the refiner and the base model, please?

    • @CoderXAI
      @CoderXAI  10 months ago +1

      the base model is the main model, the one that generates the image
      the refiner is an additional model that takes the generated image and adds more details to it (so it's kind of optional)

    • @palomaetienne
      @palomaetienne 10 months ago

      @@CoderXAI Thank you for your explanation 🐬

  • @Kim-uu8fc
    @Kim-uu8fc 10 months ago

    No links in the description. (looks for another tutorial)

    • @CoderXAI
      @CoderXAI  10 months ago +2

      it's in the pinned comment; I have some restrictions on adding links to the description for now

    • @Kim-uu8fc
      @Kim-uu8fc 10 months ago

      @@CoderXAI Oh ok, thank you for letting me know.

  • @MrHannessie
    @MrHannessie 10 months ago +2

    Pfff, bye Midjourney? I don't think so. Everyone is calling it an MJ killer. It's not. Both have pros and cons. MJ still usually looks better. SDXL is an improvement, sure, and you have more control and can do NSFW, but it still can't compare to how many MJ images look.

    • @marcusmercer3208
      @marcusmercer3208 10 months ago +2

      Something's definitely amiss with the Clipdrop version, which you would think would be setting a good example of how good SDXL is supposed to be. At present, it can't even render a spoon on a white background, in fact, nothing with a white background. MJ can do objects with white backgrounds in its sleep. According to SDXL, a spoon is a DSLR camera, a dessert spoon is a dessert with a camera in the middle of it. I also added 'vector style' yesterday and SDXL wouldn't render anything. MJ does all that with no problems.

    • @jopansmark
      @jopansmark 10 months ago

      Try to generate Xi Jinping in Midjourney (you can't)

  • @umarudoma1811
    @umarudoma1811 10 months ago

    can I use my SD 1.5 LoRA on SDXL?

  • @LouisGedo
    @LouisGedo 10 months ago

    👋

  • @Eleganttf2
    @Eleganttf2 10 months ago +2

    Can SDXL 1.0 make NSFW models or images just like SD 1.5 ?

    • @cyfused
      @cyfused 10 months ago

      I do not think so. Probably it needs a lot of finetuning like 1.5 does.

    • @Elwaves2925
      @Elwaves2925 10 months ago

      It isn't censored, like 2.1 is, so yes you can do NSFW but I don't know how well they'll compare to 1.5. Loras etc will help with that too once they start releasing.

    • @danielhernanalonso7219
      @danielhernanalonso7219 10 months ago

      @@Elwaves2925 Uncensored doesn't mean trained. You can't do the same NSFW stuff as 1.5. Some nudes and that's it.

    • @Elwaves2925
      @Elwaves2925 10 months ago

      @@danielhernanalonso7219 Obviously they don't mean the same thing but I also don't see the OP asking anything about training. Their comment is vague and can be read multiple ways, we both read it differently. Seeing as 2.1 was heavily censored, it made sense they were asking about that. 🙂

    • @danielhernanalonso7219
      @danielhernanalonso7219 10 months ago

      @@Elwaves2925 that's true. I guess the answer is "yes, it can", but at the same time it can mislead him because he can't do the same NSFW stuff right now.

  • @RyokoChanGamer
    @RyokoChanGamer 10 months ago

    refiner by camenduru > 404 - page not found

    • @CoderXAI
      @CoderXAI  10 months ago +1

      thanks for letting me know. they've updated the file to refiner_v1.0.json and I've updated the link in my comment as well

    • @RyokoChanGamer
      @RyokoChanGamer 10 months ago

      @@CoderXAI 🫡

  • @chaitanyapahl
    @chaitanyapahl 10 months ago

    Is it gonna work on an RTX 3050 laptop?

    • @innocentiuslacrim2290
      @innocentiuslacrim2290 9 months ago

      No. Too little vram. Or maybe you can run it on CPU only and then you can generate pictures, even though it would be slow.

  • @sergeyfedatsenka7201
    @sergeyfedatsenka7201 9 months ago

    all good, except Keanu has only 4 fingers...

  • @3ricMO
    @3ricMO 10 months ago

    "Load this refiner.json file"?

    • @CoderXAI
      @CoderXAI  10 months ago

      i've updated the link in the pinned comment to refiner_v1.0.json, please load that

  • @godpunisher
    @godpunisher 10 months ago

    ComfyUI is faster 😀

  • @pureone5692
    @pureone5692 10 months ago +6

    I'll never buy anything with AMD on it...

    • @CoderXAI
      @CoderXAI  10 months ago +1

      LULW, I think it might work on chonky AMD cards. Comfy has instructions for AMD+linux and auto1111 has some unofficial support but not sure if that works with SDXL as well.
      github.com/comfyanonymous/ComfyUI#amd-gpus-linux-only
      github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs

    • @Eleganttf2
      @Eleganttf2 10 months ago

      Indeed

    • @kano326
      @kano326 10 months ago

      I am using an RX 6800 on Linux; it works pretty well actually.

    • @Cutieplus
      @Cutieplus 10 months ago

      @@kano326 Does it support xformers?

    • @Eleganttf2
      @Eleganttf2 10 months ago

      @@Cutieplus of course not, xformers uses CUDA

  • @dickstarrbuck
    @dickstarrbuck 4 months ago

    I know this is old. Nothing happens when I load the json file.

  • @CuntyMcShitballs100
    @CuntyMcShitballs100 10 months ago +2

    In ComfyUI it took only 33 minutes using onboard Intel UHD 630 graphics lol