Generate up to 60% faster than base SDXL with less compute power!

  • Published 6 Nov 2023
  • SSD-1B is like SDXL - only it's up to 60% faster AND uses less VRAM! At a mere 4.6GB download and with comparable image generation quality, SSD-1B brings the power of SDXL even to potato computers :)
    Even with a decent PC, with SSD-1B you can now enjoy both faster training and generation times, so what's not to like?
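
    Loading in ComfyUI is covered in the video; for anyone scripting instead, here is a minimal diffusers sketch (repo id and pipeline call taken from the SSD-1B model card; fp16 and a CUDA GPU are assumed, and the prompt is just an example):

```python
# Sketch: loading SSD-1B as a drop-in SDXL replacement via diffusers.
# Assumes the segmind/SSD-1B Hub repo and the StableDiffusionXLPipeline
# API shown on the model card; a CUDA GPU is assumed for fp16 inference.

def load_ssd1b(device: str = "cuda"):
    # Imports kept inside the function so the sketch is importable
    # without torch/diffusers installed.
    import torch
    from diffusers import StableDiffusionXLPipeline

    pipe = StableDiffusionXLPipeline.from_pretrained(
        "segmind/SSD-1B",
        torch_dtype=torch.float16,  # roughly halves VRAM vs fp32
        use_safetensors=True,
        variant="fp16",
    )
    return pipe.to(device)

if __name__ == "__main__":
    pipe = load_ssd1b()
    image = pipe(
        "a nerdy rodent typing at a keyboard, detailed",
        negative_prompt="blurry, low quality",
    ).images[0]
    image.save("ssd1b_out.png")
```

    Because SSD-1B keeps the SDXL architecture and prompt format, the call signature is the same as for base SDXL - only the repo id changes.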
    == Links ==
    Model card: huggingface.co/segmind/SSD-1B
    Files: huggingface.co/segmind/SSD-1B...
    Workflow: github.com/nerdyrodent/AVeryC...
    == More Stable Diffusion Stuff! ==
    * Learn about ComfyUI! • ComfyUI Tutorials and ...
    * ControlNet Extension - github.com/Mikubill/sd-webui-...
    * How do I create an animated SD avatar? - • Create your own animat...
    * Installing Anaconda for MS Windows Beginners - • Anaconda - Python Inst...
    * Dreambooth Playlist - • Stable Diffusion Dream...
    * Textual Inversion Playlist - • Stable Diffusion Textu...
  • Science & Technology

Comments • 91

  • @JustMaier • 6 months ago • +16

    We added SSD-1B as a base model to Civitai just last week because we’re so excited about more people being able to run SDXL locally.

    • @NerdyRodent • 6 months ago • +1

      Nice! :)

    • @liquidmind • 6 months ago

      are there any more models like this one? Low-GPU friendly? I manage to generate one photo on a 6GB VRAM GPU... 1024x1024 takes about 3 minutes per photo, but that's still faster than SDXL, which can take 10 minutes or even an hour to generate a photo on a 6GB VRAM card, LOL

    • @benmaynard3059 • 6 months ago • +1

      Why? @@liquidmind I have 6GB and it takes 1 minute to generate an XL image?

    • @benmaynard3059 • 6 months ago • +1

      @@liquidmind I also have 24GB regular RAM, maybe that's the difference?

    • @liquidmind • 6 months ago

      @@benmaynard3059 What model do you use? Maybe you have special hidden powers? The Stability AI team even have a chart mentioning how SLOW it is to generate a 1024x1024 image on a 6GB VRAM GPU, that it can take HOURS, and that's a direct quote... let me see if I can find it :D What's your magic? What resolution do you use?

  • @vi6ddarkking • 6 months ago • +32

    I love how the open source projects are prioritizing efficiency over raw power.
    Since the community takes care of the tools, it leaves them free to optimize the AIs as much as possible before advancing to the next step in power.
    Still can't wait for the 2048 x 2048 images to become the standard. The jump from 512 to 1024 made ControlNet so much better due to all the new pixels it had to work with.
    The next jump will be marvelous.

    • @NerdyRodent • 6 months ago • +4

      Woo! Go open source!

    • @testales • 6 months ago

      It's not exactly easy or fast but already possible, I'm doing 1792x2304 quite often. It works by "upscaling" but actually if you do it right, it's a very guided resampling of an input image. That means it more or less is indeed created from scratch at this resolution.

    • @Phobos11 • 6 months ago

      Open source progresses based on users’ needs. Corporations progress based on control, adding limitations, blocking competition and getting money. They forgot about the users

  • @Mr.Sinister_666 • 6 months ago • +7

    Man you are just out here putting out consistently S Tier videos! I honestly am thrilled anytime I see a new video of yours pop up and often find myself checking the channel just to make sure I didn't miss anything. Straight to the point but fun and informative. People are sleeping on you man for sure! Thank you for your work, it is massively appreciated 👊

    • @NerdyRodent • 6 months ago

      Thanks! Glad you like the things ;)

  • @FlamespeedyAMV • 6 months ago • +3

    Open Source is so damn good, screw the greedy corporations

  • @MrSongib • 6 months ago • +1

    Actually an improvement.

  • @IlRincreTeam • 6 months ago • +5

    It's also worth mentioning that the 4.6 GB file is in FP32, which should mean the FP16 model is about the same size as an SD2.1!
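
    A quick back-of-the-envelope check of that size claim (a sketch only; real checkpoint files also carry some non-weight overhead):

```python
# Rough check: fp32 stores 4 bytes per weight and fp16 stores 2,
# so converting a checkpoint roughly halves the file size.
def checkpoint_gb(n_params: float, bytes_per_param: int) -> float:
    """Approximate checkpoint size in GB, ignoring non-weight overhead."""
    return n_params * bytes_per_param / 1e9

# A 4.6 GB fp32 file implies roughly this many parameters:
n_params = 4.6e9 / 4               # ~1.15 billion
fp16_size = checkpoint_gb(n_params, 2)
print(round(fp16_size, 1))         # → 2.3
```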

    • @NerdyRodent • 6 months ago • +1

      Awesome!

    • @sherpya • 6 months ago

      but unfortunately it does not work for me: I have a 1650, and since it doesn't support fp16 it always runs at fp32, and I only get black images

  • @TomerGa • 6 months ago

    Hey Nerdy, I love your videos! Is there any way to combine this model with the LCM LoRA to be used on a 6GB VRAM card?

  • @animatedjess • 3 months ago

    Thanks for the tutorial! Do you know how to train a lora on this model?

  • @leafdriving • 6 months ago • +4

    ComfyUI actually runs full SDXL on 6GB (I have a GTX 1060) ~ super slow ~ SSD-1B is faster (what I use to set up) ~ ComfyUI auto-selects "low VRAM load" ~ slow ~ but stack the queue and come back later, and it gets done.

    • @NerdyRodent • 6 months ago

      Nice! Cool to hear it runs on 6GB too 😀 Awesome that this model should be faster on whatever hardware!

    • @synthoelectro • 6 months ago

      1650 GTX works too, 4GB VRAM, but you have to work with it like virtual swap.

    • @liquidmind • 6 months ago

      @@synthoelectro A tensor with all NaNs was produced in VAE.
      Web UI will now convert VAE into 32-bit float and retry.
      To disable this behavior, disable the 'Automatically revert VAE to 32-bit floats' setting.
      To always start with 32-bit VAE, use --no-half-vae commandline flag.

    • @Adante. • 6 months ago

      How slow?

  • @puoiripetere • 6 months ago

    Beautiful video and for my "potato" computer it is a godsend :) Question: is the VAE included in the SSD-1B SPEC model or do you recommend using sdxl-vae-fp16-fix? Thanks for the great work.

  • @moo6080 • 6 months ago

    Wow, thank you for keeping up with this news and sharing it with us. I can run SDXL on ComfyUI, but not on my GPU, so I'm going to give this a try!
    EDIT: Unfortunately, it looks like it still gives an out-of-memory error on my 6GB VRAM card

    • @NerdyRodent • 6 months ago

      Plenty of other commenters say it works on their 6GB cards… I think someone said even a 980ti was ok!

    • @moo6080 • 6 months ago

      @@NerdyRodent Yeah, I was expecting it to as well, considering the model is only 4.2GB. I tried with --lowvram on ComfyUI and I still got the OOM error

    • @liquidmind • 6 months ago • +1

      it does work, change OPTIMIZATION to AUTOMATIC, and try LOWER resolutions like 1024x768 first!!! then go higher!!

    • @moo6080 • 6 months ago • +1

      @@liquidmind I don't see that option on ComfyUI, what are you using as your interface?

    • @liquidmind • 6 months ago

      @@moo6080 automatic1111 - go to the Xformers and SDP section and choose xformers or automatic optimization... can you see xformers?

  • @Elwaves2925 • 6 months ago

    Nice video, not something I need but many others will.
    So, if you can't use existing loras with SSD-1B, can you use loras trained on this model with other SDXL checkpoints (like RealVis and JuggernautXL)?

    • @NerdyRodent • 6 months ago • +1

      SSD-1B loras will train super fast :)

  • @twilightfilms9436 • 6 months ago

    Can you do a tutorial for ZL controllers for A1111? Thanks in advance…..

  • @artist.zahmed • 6 months ago • +1

    Can you do an SDXL model training tutorial please? I have a 4090 video card and I really wanna make my own model, please 😢❤❤

  • @MegaGasek • 6 months ago • +4

    Thanks for bringing such great content. The spaghetti UI is not at all comfy, looks very intimidating, and this is coming from a guy who used to work with 3D Maya a lot (Ah, the Maya multilister... such a ''joy'' to use!)... Anyway, I've got a 2080 with 8GB of VRAM, so from what you mentioned it is faster in ComfyUI. Will try ComfyUI for the first time. However, I have to say this: with all the Civitai LoRAs and community-based tools I don't feel like SDXL is really necessary. I'm still using 1.5 in all my projects.

    • @neocaron87 • 6 months ago • +1

      Davinci Resolve user here, love the nodes there, hate it in comfy even though I really want to like it XD

    • @MegaGasek • 6 months ago

      @@neocaron87 DaVinci Resolve is just a marvel of the modern world. An unbelievably great piece of software for free that is as capable as Premiere or any other video editor. I use Premiere myself just because it integrates well with Photoshop, Illustrator, AE and Audition, but I see myself using it in the future.

  • @MrLerola • 6 months ago • +1

    I run into OoM frequently with my 8 GB 3070, so super excited to try this! Do we know what got 'slimmed down'?

    • @NerdyRodent • 6 months ago

      They did nerdy stuff 😆 The model card has a bit more info…

  • @midgard9552 • 6 months ago

    No idea why, but in auto1111 I always get NaN errors with this model only

  • @banzai316 • 6 months ago

    Any improvement in the language model, prompt understanding?
    Looks good 👍

    • @NerdyRodent • 6 months ago • +2

      Seems pretty much the same so far… or at least my tiny, rodent brain hasn’t found a noticeable difference as yet

    • @puoiripetere • 6 months ago • +1

      With the latest Nvidia 546 drivers the VRAM is no longer a "problem", using normal RAM

    • @banzai316 • 6 months ago

      @@puoiripetere Good to know. Probably somewhat better too if you use the Studio driver vs Game Ready

  • @beecee793 • 6 months ago • +3

    Well, what's not to like is that we lose all the custom fine-tunes/LoRAs etc., which is a huge part of what makes the SD ecosystem so useful, right? Would you need to train new LoRAs and whatnot using the 1B as base model?

    • @NineSeptims • 6 months ago

      60% is worth it, as sad as it is.

    • @mattmarket5642 • 6 months ago • +1

      True, what would make it *really* useful would be if a genius figured out how to convert models/loras between the two. I guess it's impossible, but that would be brilliant. The community being split between making things for 1.5 and SDXL is already putting a damper on things a bit.

    • @Phobos11 • 5 months ago

      @@mattmarket5642 there's not really a "split" based on the models, but based on the hardware limitations. Most people don't have machines to run SDXL with and SD1.5 is good for almost everyone

  • @cyril1111 • 6 months ago

    yes! But what do you think of the quality difference between this and Normal XL ?

    • @NerdyRodent • 6 months ago • +2

      49% of the time I prefer the other model

    • @MrAwesomeTheAwesome • 6 months ago

      @@NerdyRodent Does that mean that 51% of the time you prefer this new model? Or are we also accounting for some ties?

  • @KonImperator • 6 months ago • +2

    My guy talking about 8 gigs VRAM like it's the standard for every low end pc out there 🤣

  • @elmyohipohia936 • 6 months ago • +1

    I don't know how to train a LoRA since SDXL (I have an 8GB VRAM GPU), do you have a tutorial or something? Before this I used to use Astria, Colab, 1111 a bit...

    • @_arkel7374 • 6 months ago

      Same here. I can train LoRAs in 1.5, but not SDXL due to VRAM limitations. Do we know whether it's possible to train with SSD-1B? If so, a how-to video would be MUCH appreciated.

  • @vintagegenious • 6 months ago

    Obviously also supported inside SDNext

  • @NineSeptims • 6 months ago

    The rate this tech is moving I might be able to press generate and get 100 1024x1024 images instantly. 😳

  • @FusionDraw9527 • 6 months ago

    Thanks for sharing! AI really is progressing fast. I haven't even gotten used to SDXL and there's already a new SDXL Distilled. The progress is amazingly quick.

  • @CoconutPete • 2 months ago

    I tried the same prompt with A1111 and SSD-1B and my image looks like a cheap cartoon lol

  • @Kelticfury • 6 months ago • +2

    Have you checked out SwarmUI yet?

    • @NerdyRodent • 6 months ago • +2

      Not yet, no. Any good?

    • @Kelticfury • 6 months ago • +1

      You might like it. It is fast and runs on a backend of comfyui. Still in beta I think? So not perfect but definitely an intriguing start. One thing to keep in mind is that it works with python 3.11 (took me a bit to figure out what my problem was)

  • @eukaryote-prime • 6 months ago

    I've been using fooocus for SDXL and it takes 7 minutes an image on my 6gb 980ti 😅😅😅

  • @liquidmind • 6 months ago

    Error on a 2060 with 6GB VRAM:
    A tensor with all NaNs was produced in VAE.
    Web UI will now convert VAE into 32-bit float and retry.
    To disable this behavior, disable the 'Automatically revert VAE to 32-bit floats' setting.
    To always start with 32-bit VAE, use --no-half-vae commandline flag.

    • @liquidmind • 6 months ago

      ok I managed to get it to work... CAN'T use SDP on a 6GB GPU; I chose automatic optimization and --xformers and it works well, SLOW AF, but great
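
      For anyone hitting the same NaN/VAE issue in Automatic1111, the fix described above maps to WebUI launch flags like these (a sketch for a ~6GB card; flag names are from the WebUI command-line options, and --medvram is optional):

```shell
# webui-user.sh (Linux) — example COMMANDLINE_ARGS for a low-VRAM card.
# --no-half-vae keeps the VAE in fp32 so it cannot produce fp16 NaNs,
# --xformers enables memory-efficient attention,
# --medvram trades some speed for lower VRAM use.
export COMMANDLINE_ARGS="--xformers --medvram --no-half-vae"
```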

    • @NerdyRodent • 6 months ago • +1

      Awesome to hear!

    • @liquidmind • 6 months ago

      @@NerdyRodent Thanks for all your tutorials.

  • @Gh0sty.14 • 6 months ago • +2

    For some reason I'm running out of memory using this model but not while using regular SDXL.

    • @moo6080 • 6 months ago

      same

    • @Gh0sty.14 • 6 months ago

      @@moo6080 I saw someone on reddit say it only works with the dev branch of a1111 so maybe that's the issue.

    • @moo6080 • 6 months ago

      @@Gh0sty.14 I'm using ComfyUI; it says on their Hugging Face page it should be compatible

  • @CasanovaSan • 6 months ago • +1

    does it need a refiner?

    • @NerdyRodent • 6 months ago • +1

      Not needed, but you can if you like!

  • @puoiripetere • 6 months ago

    I tested the model and noticed that the model is very sensitive to individual variations of the prompt. To be more precise, the writing of the prompt must be very precise to obtain good results. With the standard model a very generic prompt can give nicer results. You will have to spend more time writing the prompt. Once you understand how to interface with this model the results are exceptional. The learning curve is higher. I recommend starting from an extremely generic prompt and then working on adding details. For example in the generation of a person, you will have to be very specific in the construction of each part of the body, from the face, eye alignment, skin texture and so on. Let's say this model is like a car with manual gearbox.

  • @LIMBICNATIONARTIST • 6 months ago • +1

    First!

  • @greendsnow • 6 months ago • +2

    why are you talking this way? :D

    • @MegaGasek • 6 months ago • +6

      What do you mean? He has a great voice and explains things really clearly. If it is a joke I didn't get it.

    • @Elwaves2925 • 6 months ago • +1

      Why are you typing that way? 😉

    • @MegaGasek • 6 months ago • +1

      @@Elwaves2925 Don't blame me, it was my cat.