Comments •

  • @MonzonMedia
    @MonzonMedia 10 months ago +4

    ***Note*** I'll be updating this video soon, as I discovered a way to optimize A1111 further; check out this video here for now: czcams.com/video/7mlJQ6viH20/video.html

  • @MonzonMedia
    @MonzonMedia 10 months ago +10

    Hope you enjoyed this fun little test! I'm curious how fast these platforms run on your system. Let me know your specs and general speeds!

  • @NotThatOlivia
    @NotThatOlivia 10 months ago +5

    Great comparison!

    • @MonzonMedia
      @MonzonMedia 10 months ago

      Thank you! Been wanting to do it for some time. 😊👍🏼

  • @hleet
    @hleet 10 months ago +3

    Thanks for these metrics, really cool to know.

    • @MonzonMedia
      @MonzonMedia 10 months ago +2

      You're welcome! Curiosity got the better of me 😁 Now I want to test the others 👍

  • @gatotboediman9680
    @gatotboediman9680 7 months ago +2

    Aha, you are the Playground sensei! I have subscribed to both. Thank you for the tutorials!

    • @MonzonMedia
      @MonzonMedia 7 months ago +1

      Oh no my cover is blown! hahaha! Appreciate the visit here as well my friend! 👍🙌

  • @internetceo
    @internetceo 10 months ago +1

    Thanks for that video 👌. If you can, try comparing the speeds of even more programs.

    • @MonzonMedia
      @MonzonMedia 10 months ago +1

      Sure, but which ones would you like to see? SDNext is one I'm looking to do, and I'll also retest when the new Automatic1111 1.6 comes out.

  • @iqbalfajry
    @iqbalfajry 10 months ago +2

    Hello, I'm still a beginner and still learning SDXL. If I use a Ryzen 5600X with a GTX 1660 Super, which one do you think I should use: Automatic1111, ComfyUI, or Invoke?

    • @MonzonMedia
      @MonzonMedia 10 months ago +2

      Are you learning Stable Diffusion in general? With your GPU, ComfyUI is the most optimized, but there is a bit of a learning curve and it's not really recommended for beginners. If you are just starting out I'd suggest Easy Diffusion czcams.com/video/gJeFxd1O_90/video.htmlsi=mxFLhJ8hWOavat47 but start with Stable Diffusion 1.5. SDXL runs kind of slow on Easy Diffusion, unfortunately.

  • @user-rb8bv2yx6o
    @user-rb8bv2yx6o 10 months ago +3

    Is there a setting somewhere in InvokeAI equivalent to AUTOMATIC1111's Clip Skip?

    • @MonzonMedia
      @MonzonMedia 10 months ago +3

      Yes indeed, you can find it in Settings > Show advanced options; then, back on the main interface, there will be an Advanced Options dropdown at the bottom left. czcams.com/video/1Iz4F7o6hgQ/video.html

    • @user-rb8bv2yx6o
      @user-rb8bv2yx6o 10 months ago +2

      @@MonzonMedia Thank you very much ! 🙏

    • @MonzonMedia
      @MonzonMedia 10 months ago +1

      @@user-rb8bv2yx6o You're welcome my friend!

  • @DoubleBob
    @DoubleBob 10 months ago +7

    You have preview enabled on A1111, which means you have VAE decoding and rendering every few iterations. This changes the speed quite a lot on older hardware. Just deactivate preview and it should be as quick as the others.

    • @MonzonMedia
      @MonzonMedia 10 months ago +3

      It was also on for InvokeAI. I did test it afterwards, but it only improved performance by about 1 second at 1024x1024. Let's see if 1.6.0 improves it once it's released officially.

    • @MonzonMedia
      @MonzonMedia 10 months ago +4

      You were kind of right. It wasn't just the actual preview, but the decoding process itself! I figured it out and got gains of 13-14 seconds at 1024x1024 and about 9-10 seconds at 1024x768. Will do an update soon! And no, 1.6.0 wasn't the reason, as I tested this on 1.5.2 as well 👍

    • @DoubleBob
      @DoubleBob 10 months ago +1

      @@MonzonMedia Good ol A1111! Nice to see a healthy competition between these frameworks.

    • @dreambigwav
      @dreambigwav 9 months ago

      Considering the data you provided in this comment, ComfyUI would be about 1-2 seconds faster than Automatic1111. I wonder what the difference would be on a 16 GB card, though; would it be bigger or smaller? I guess still in favor of ComfyUI? @@MonzonMedia

  • @3diva01
    @3diva01 10 months ago +3

    Thank you for the great comparison video! I've definitely seen that ComfyUi speeds are much faster than Automatic1111 in my own tests. I wonder how Fooocus holds up in a test like this?

    • @MonzonMedia
      @MonzonMedia 10 months ago +1

      You're welcome! I had the same thought as I downloaded it a few days ago to try out. Perhaps I'll test it along with SDNext? 😁

    • @knightride9635
      @knightride9635 10 months ago +3

      Between ComfyUI and A1111, with a 1660 Super I get one image in around 4 min with Fooocus and less than 2 min with ComfyUI.

    • @MonzonMedia
      @MonzonMedia 10 months ago

      Sounds like Fooocus is the same as A1111 in terms of generation time?

    • @MrAsmadaseggs
      @MrAsmadaseggs 10 months ago

      Fooocus has a different methodology from other community GUIs. Fooocus takes a new approach to the CLIP model, which provides better results with less prompting, like Midjourney. It would be unreasonable to compare Fooocus with traditional SDXL pipeline methods that call a vanilla CLIP.

  • @SBaldo8
    @SBaldo8 10 months ago +2

    I believe the couple-seconds difference between ComfyUI and Invoke can be attributed to the preview: none in ComfyUI, quite a few intermediate images in Invoke.

    • @MonzonMedia
      @MonzonMedia 10 months ago +2

      Yeah, highly likely. I'm working on an updated video since I've been able to get better speeds with A1111. I think this time around I will remove the preview option. Not a huge difference, but still a factor. I can say ComfyUI is better optimized overall; with SDXL ControlNet it's fantastic. A1111, InvokeAI... not so much.

  • @Grimmona
    @Grimmona 10 months ago +3

    I have just six gigabytes of VRAM and had to run Automatic1111 with the lowvram flag (one picture needs around 8 minutes) to get any picture out of SDXL. I'm going to try today whether that's still necessary or if I can use medvram. If that doesn't work I will finally try ComfyUI.

    • @MonzonMedia
      @MonzonMedia 10 months ago +3

      Did you try using --xformers as well as --medvram? That's all I had to do for Auto1111. But yeah, those numbers are way too long. ComfyUI is great but takes time to set up. InvokeAI has been my main for some time now, though. Keep me posted on your progress.
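For anyone following along, a sketch of where those flags go in a stock Automatic1111 install (file names differ per platform; adjust to your setup):

```shell
# Linux/macOS: webui-user.sh. On Windows, webui-user.bat uses
# `set COMMANDLINE_ARGS=--xformers --medvram` instead.
# Both flags lower VRAM use at a small speed cost, which helps
# SDXL fit on 6-8 GB cards.
export COMMANDLINE_ARGS="--xformers --medvram"
```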

    • @Grimmona
      @Grimmona 10 months ago +1

      @@MonzonMedia First: yes, I'm using xformers.
      Second: I tried medvram on A1111, didn't work.
      I installed ComfyUI: first try, 36 seconds with base SDXL, and second try, same settings, 33 seconds... I'm quite speechless.

    • @Grimmona
      @Grimmona 10 months ago +1

      @@MonzonMedia I tried out some setups; mostly I need 45 seconds (30 steps), 88 seconds if I use the refiner and upscale, sometimes more. It's still quite fast, I'm just not happy with the results yet. I had just the right setup in A1111 and now I need to recreate it in ComfyUI. Just loading an A1111 picture into ComfyUI doesn't do the trick.

    • @MonzonMedia
      @MonzonMedia 10 months ago +1

      @@Grimmona I've been working on my own setups as well. I have some basic ones that do a decent job. drive.google.com/drive/folders/1A1-VjCLQX0XBHNaQ-ICcRTSVjvmanKar?usp=sharing There is also one template without the refiner if you want to use custom models. The refiner isn't always necessary. Just keep in mind that when you use other people's workflows, you might have to redirect the model path in the yaml file. But if you are downloading the models directly to your ComfyUI models folder, then you don't have to worry about that.
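One way to do that model-path redirect, sketched against ComfyUI's extra_model_paths.yaml (ComfyUI ships an extra_model_paths.yaml.example with the full key list; the base_path below is a placeholder):

```shell
# Writes a minimal extra_model_paths.yaml that points ComfyUI at an
# existing A1111 model folder so checkpoints aren't duplicated.
# base_path is a placeholder -- adjust it to your own install.
cat > extra_model_paths.yaml <<'EOF'
a111:
  base_path: /path/to/stable-diffusion-webui/
  checkpoints: models/Stable-diffusion
  vae: models/VAE
  loras: models/Lora
EOF
```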

    • @Grimmona
      @Grimmona 10 months ago +1

      @@MonzonMedia Thank you, very interesting. I'm trying to get a LoRA working in the setup with the refiner and upscale. I've got it working without them so far, but not with that stuff.

  • @_gr1nchh
    @_gr1nchh 8 months ago +2

    Comfy is faster for me, I'm on a 4GB 1050 ti (I don't use XL but just SD 1.5) and Comfy is much faster. The problem is how complicated everything is to get set up and it seems that using ControlNet in Comfy is much slower than in A1111.

  • @MrAsmadaseggs
    @MrAsmadaseggs 10 months ago +3

    This video will need reevaluating very shortly, as the new A1111 1.6 RC build has vastly improved its SDXL handling to be more competitive.

    • @MonzonMedia
      @MonzonMedia 10 months ago +1

      Indeed, I've been playing around with RC 1.6 but personally have not seen the speed increases some are claiming, although it could be that I'm limited to my GPU? But not making any presumptions until the official release. Definitely a welcome update!

  • @jorgennorstrom
    @jorgennorstrom 8 months ago +1

    I usually generate images in ComfyUI and then do the afterwork in A1111.

    • @MonzonMedia
      @MonzonMedia 8 months ago

      Yeah I think many people have a similar workflow. Although the more I get used to comfy the less I use A1111. I just find A1111 doesn’t favor people like me with lower end graphics cards, performance wise it’s not the best.

  • @rickykngo
    @rickykngo 10 months ago +1

    How did Ho Chi Minh appear in your intro?

    • @MonzonMedia
      @MonzonMedia 10 months ago +1

      Hahahaha! Well, both Auto1111 and ComfyUI don't have official logos, so I used their GitHub profile avatars 😁☺

  • @udappkuma
    @udappkuma 10 months ago +2

    AUTOMATIC1111 v1.6.0-RC is almost as fast as ComfyUI for me; it's three times faster than AUTOMATIC1111 v1.5. My generation time went from 57 seconds to 17 seconds for 1024x1024 generations.

    • @MonzonMedia
      @MonzonMedia 10 months ago +1

      Interesting, I haven't seen any significant speed increases, but I am limited to a 3060 Ti with 8 GB of VRAM. I'm still getting the same speeds as in this video, and I believe I have the latest v1.6.0-RC. What GPU are you running?

  • @Heldn100
    @Heldn100 5 months ago +1

    Can you redo this with the latest updates nowadays?

    • @MonzonMedia
      @MonzonMedia 5 months ago +1

      I've been meaning to, just haven't had the chance lately. Hopefully soon. 👍🏼

  • @fixelheimer3726
    @fixelheimer3726 10 months ago +2

    I guess with more vram the difference between these will become smaller.

    • @MonzonMedia
      @MonzonMedia 10 months ago +2

      Yes most likely, my son has the same card but with 12gb vram, I might do the same tests with it. 👍🏼

  • @DasLooney
    @DasLooney 10 months ago +1

    Thanks for the video, it's nice to see how the different software affects render time on your system! However, without knowing your rig specs (CPU, GPU, RAM) it isn't as helpful as it could be. That would be most helpful! (NOTE: I messed up and missed where you did exactly that. My apologies, and thanks to the people who quickly pointed out my mistake.)

    • @MonzonMedia
      @MonzonMedia 10 months ago +2

      Bruh! Of course I would have provided that info; I guess you skipped forward? 0:41 😁

    • @DasLooney
      @DasLooney 10 months ago +1

      @@MonzonMedia My apologies, I thought that was unlike you. I don't know how I missed it.

    • @MonzonMedia
      @MonzonMedia 10 months ago +1

      ☺All good bud, just wanted to make sure you saw it because you are right, that's important info. Always appreciate your support!

    • @DasLooney
      @DasLooney 10 months ago +1

      @@MonzonMedia thanks 🙏

  • @muratokur01
    @muratokur01 24 days ago

    Forge?

    • @MonzonMedia
      @MonzonMedia 24 days ago

      I compared forge here czcams.com/video/1hWpJqrR2xc/video.htmlsi=k2aNi34sHlRR-w0z

    • @muratokur01
      @muratokur01 24 days ago +1

      @@MonzonMedia Thanks for the interest...

  • @jaroslavstreit4050
    @jaroslavstreit4050 6 months ago

    Seems it was tested incorrectly... what about VRAM?

    • @MonzonMedia
      @MonzonMedia 6 months ago

      What's wrong about it? Did you see 0:41 for the specs? Also most of these programs have been updated since I recorded this so speeds will be different today.

    • @jaroslavstreit4050
      @jaroslavstreit4050 6 months ago

      @@MonzonMedia It's an incomplete test. What was the GPU, CPU, RAM, and VRAM usage? Plus, as others wrote, different software settings. I wrote it because there are more sources of bad testing like this, and e.g. in the biggest Czech AI group people take it as axiomatic, which is wrong.

    • @MonzonMedia
      @MonzonMedia 6 months ago

      Like I said the specs are in the video! Of course I would add that info.

  • @MateuszZych
    @MateuszZych 10 months ago +1

    I'm confused; I have a 3070 Ti and I'm getting worse times than you, and InvokeAI practically refuses to render anything from SDXL for me :|

    • @MonzonMedia
      @MonzonMedia 10 months ago

      Can you tell me more about what's happening? I can't help without context. First off, make sure you have the latest update, 3.0.2post1; when you start InvokeAI there is an option to update. Secondly, when using SDXL, make sure you use fp32 under VAE Precision, which is under the scheduler section. It also helps to have all your models on an SSD; in fact, any of your Stable Diffusion installations should be on an SSD for optimal speed. These are just some things that are often overlooked, but let me know what you are experiencing.

    • @MateuszZych
      @MateuszZych 10 months ago +1

      @@MonzonMedia It's a fresh install, I'm using fp32, and I'm using an SSD only. Invoke notoriously clogs VRAM completely.

    • @MonzonMedia
      @MonzonMedia 10 months ago

      @@MateuszZych Yeah that sounds odd...and you have nothing else running in the background? Could be a number of things, check for any updates on your GPU driver too. What are your system specs? CPU and system RAM?

    • @MateuszZych
      @MateuszZych 10 months ago +1

      @@MonzonMedia No matter whether I have something in the background, I get about 15-25 s/it each time.
      I have a Ryzen 5800X and 64 GB of RAM at 3600 MT/s.
      Did you set --medvram somehow? I don't see such an option.
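As a sanity check on those numbers, seconds-per-iteration multiplies straight out into generation time (20 s/it is just the midpoint of the range reported above):

```shell
# At 20 s/it, a 30-step generation takes:
echo "$((20 * 30)) seconds per image"   # 600 s, i.e. 10 minutes
# For comparison, the ~45 s for 30 steps reported elsewhere in this
# thread works out to about 1.5 s/it.
```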

    • @MonzonMedia
      @MonzonMedia 10 months ago +1

      Yeah, you should be golden with those specs! Hmmm, it might be a long shot, but check your startup options. Launch InvokeAI and select #6, "change start up options". Use the arrow keys to move around and the space bar to select. Under GPU management, make sure "free gpu memory after each generation" is checked, as well as "Enable xformers support". Leave floating point precision on auto. Then set "RAM cache size" to 12 and, below it, "VRAM cache size" to 0. You can experiment with this, but I personally found it may not be the best option for SDXL since I only have 8 GB of VRAM.
      Other than that... if you are still having issues it's beyond me; best to go to their Discord for support. Let me know if any of this helps. discord.gg/invokeai-the-stable-diffusion-toolkit-1020123559063990373
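For reference, the same options the startup menu toggles can also be set in InvokeAI's invokeai.yaml. This is a sketch based on the InvokeAI 3.x config format; the key names are assumptions and may differ in other versions:

```shell
# Appends the settings suggested above to invokeai.yaml.
# Key names are assumed from InvokeAI 3.x; verify against your install.
cat >> invokeai.yaml <<'EOF'
  free_gpu_mem: true
  xformers_enabled: true
  precision: auto
  max_cache_size: 12      # RAM model cache, in GB
  max_vram_cache_size: 0  # VRAM model cache, in GB
EOF
```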

  • @yajhochannel3289
    @yajhochannel3289 10 months ago

    Like 66❤️❤️🌹🌹👍👍

  • @younesaitdabachi7968
    @younesaitdabachi7968 10 months ago +1

    ComfyUI is the best; it's a one-time download, portable, and faster. I love it.

    • @MonzonMedia
      @MonzonMedia 10 months ago +2

      It's great but not for everyone. Personally I'm enjoying discovering new things. 👍

  • @Paulo-ut1li
    @Paulo-ut1li 10 months ago

    The time you waste tweaking and dragging nodes and workflows and updating custom nodes makes ComfyUI much slower.

    • @MonzonMedia
      @MonzonMedia 10 months ago

      Well I admit ComfyUI isn't for everyone, it's for those people who want advanced and specific workflows. And once you have a set workflow you like, there is no need to tweak. Just save the template and open it up when you need it. Personally it helps me understand what happens under the hood and how the pipeline of stable diffusion works. If I'm more educated I can further serve my audience. But I hear ya, most people don't want to bother with all this. 😁

    • @Paulo-ut1li
      @Paulo-ut1li 10 months ago

      @@MonzonMedia I understand ComfyUI has its place, but no workflow is a silver bullet (YET); every day there's new stuff released by the community, and you will find yourself tweaking workflows to test new features. I rely solely on ComfyUI for SDXL models and a 100% prompt workflow, considering the quality of outputs and speed compared to Automatic1111. But for me, as a programmer and 2D illustrator, it's very annoying having to rely purely on nodes and a zooming UI, always having to zoom and drag stuff if I want to incorporate something new, when in Automatic all I have to do is use Vimium and go straight to the place I want. I love Comfy for its reliability and speed on SDXL, and I understand 3D artists are used to these node interfaces. But I can still produce better outputs with Automatic1111 because I can tweak things very fast and get the result I want quicker. Anyway, I really think the SwarmUI approach of mixing Automatic1111 tabs with Comfy nodes may be the correct one.

  • @dunknow9486
    @dunknow9486 10 months ago

    Invoke wasn't the fastest; I don't understand why you keep mentioning it?

    • @MonzonMedia
      @MonzonMedia 10 months ago

      As I said, I was just pleasantly surprised. I knew ComfyUI was the fastest already. I didn't think InvokeAI could hold its own.

  • @tvortsa
    @tvortsa 10 months ago

    3060 6 GB - Invoke 3 - 10 MINUTES per image at 512x512!!! ((((((((((((

    • @MonzonMedia
      @MonzonMedia 10 months ago

      That seems way too long. You can tweak the startup options to see if that helps. Launch InvokeAI and select #6, "change start up options". Use the arrow keys to move around and the space bar to select. First check that it is using the GPU and not the CPU. Under GPU management, make sure "free gpu memory after each generation" is checked, as well as "Enable xformers support". Leave floating point precision on auto. Then set "RAM cache size" to 12 and, below it, "VRAM cache size" to 0. You can experiment with this. Otherwise you can get support from the InvokeAI team on Discord: discord.gg/invokeai-the-stable-diffusion-toolkit-1020123559063990373