Is Adobe Firefly better than Midjourney and Stable Diffusion?

  • Uploaded 13 Sep 2024
  • Is Adobe Firefly better than Midjourney and Stable Diffusion? And is this the game changer we have been waiting for? This video is a deep dive comparing the best AI solutions, so you can find out which one is best for you.
    It's made with a Wacom Cintiq Pro 32, MacBook Pro M1 Max, ATEM Mini Pro, Adobe Premiere Pro, After Effects, and Adobe Illustrator.
    -------------------------------
    SUBSCRIBE
    ▶ handle: / @levendestreg
    ▶ You can subscribe to our channel here: www.youtube.co...
    ▶Cheat sheet (prompt generator): gumroad.com/a/...
    ▶ Use this promo code for Gumroad: CREATE30
    ▶ RunDiffusion Promo Code: levendestreg15
    ▶ RunDiffusion reference: bit.ly/RunDiff...
    ▶ Adobe Firefly: firefly.adobe....
    -------------------------------
    My setup
    Macbook Pro M1 Max: amzn.to/3Znd44l
    Wacom Cintiq 32 inch (couldn't find the 32 inch - so link is for the 27 inch): amzn.to/40ubLSl
    Atem Mini Pro: amzn.to/3nAe9s4
    Brydge dock: amzn.to/3Zr5Kob
    Stream Deck XL: amzn.to/3Zun5N9
    Razer Tartarus v2: amzn.to/3TV239a
    Logitech MX master: amzn.to/3TSFK43
    -------------------------------

Comments • 36

  • @markelliot2994 • 1 year ago +1

    And how will we pay for this: an additional license? Only with an Adobe Stock subscription? A Creative Cloud add-on? A whole new "Adobe AI" monthly license... sorry, I'm broke from licenses

  • @qjojotaro2695 • 1 year ago +3

    I like her positive, energetic vibe. The best.

    • @LevendeStreg • 1 year ago +1

      Thank you kindly, @qjojotaro2695, I really appreciate it🙌

  • @Gisburne2000 • 1 year ago +1

    As soon as you selected 'overhead view' I expected MidJourney to ignore that part, and it definitely did. I've never been able to persuade MJ to create an overhead view (landscapes and cities work, but not people). Although the MJ images at 5:40 are better than the Firefly ones at 6:00, Firefly definitely nails the 'overhead view' you asked for, whereas in MJ there is no trace of it. The MJ finished images are WAY better, but other tools are better in certain areas. The text tool in Firefly definitely demonstrates that. The trick is knowing how to work with all of them (or some of them) together, to get to where you want to go. And that's why we need you, of course, to show us how! Great video, as always.

    • @LevendeStreg • 1 year ago

      Thank you kindly @Gisburne2000! I think most of the AIs will become better and better with how they interpret our input (or I hope so)...And I agree, we need to work with at least a few different AI solutions 🙌

  • @nicknick6464 • 1 year ago +2

    Thanks Maria for the interesting comparison. Keep on being enthusiastic. It is amazing that the Canny edge detector is still so useful after 37 years. Here is the reference to the original article: Canny, J., 1986. A computational approach to edge detection. IEEE Transactions on pattern analysis and machine intelligence, (6), pp.679-698.
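    The Canny detector mentioned above is easy to try in a few lines. A minimal sketch using OpenCV's cv2.Canny on a synthetic image (the threshold values here are illustrative, not taken from the video):

    ```python
    import numpy as np
    import cv2

    # Synthetic test image: a white square on a black background
    img = np.zeros((100, 100), dtype=np.uint8)
    img[30:70, 30:70] = 255

    # Canny (1986) edge detection; hysteresis thresholds are illustrative
    edges = cv2.Canny(img, threshold1=50, threshold2=150)

    # Edge pixels (value 255) appear only along the square's border;
    # the uniform interior and background stay 0.
    print(int(edges.max()), int(edges[50, 50]))
    ```

    The same two-threshold hysteresis step is what ControlNet's Canny preprocessor runs before conditioning the diffusion model.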

    • @LevendeStreg • 1 year ago

      Thank you kindly, nicknick6464. Yes, the Canny model is awesome. Thank you for the link🙌

  • @markparker5585 • 1 year ago +1

    Two quick tips for Midjourney workflow.
    1. For repeating your last prompt, you don’t need to use copy/paste. CMD Z on a Mac or CTRL Z on a PC will bring back your last prompt in Discord, which you can then edit if you wish, before hitting return. You can keep hitting CMD or CTRL Z to keep scrolling back through older prompts, but there is a point where copy/paste would be quicker.
    2. For fresh new prompts, if your last command was a /imagine (which it probably would have been), you shouldn’t need to type the full /imagine word. Normally /i is enough (maybe just the /), as the UI autofills the rest of the word when you hit return.

    • @LevendeStreg • 1 year ago

      Wow. I’m gonna try those tips. Thank you for sharing! 🙌

    • @Gisburne2000 • 1 year ago +1

      @@LevendeStreg I would add that if MJ is set to 'remix mode', every time you click 'regenerate' (the circular arrows) it will bring up the prompt you used for that 2x2 set of images, so you can edit it and resubmit it as a new prompt. If you're on the 2x2 grid it treats it as a new prompt. It's easier than copying and pasting the prompt.

    • @LevendeStreg • 1 year ago

      @@Gisburne2000 Good point - thank you. I'll add that to one of my upcoming videos.👍

  • @jamescunningham2035 • 1 year ago +1

    Great video! It is really good to see the direct comparisons. I am slowly working through your videos but jumped ahead as learning from videos back in January is already so out of date!
    I have two CGI robot characters from a short film I made and I want to make concept art / pitch images of them for another project. Midjourney images look great (interesting that is totally failed to do what you asked and make an aerial image) but I do not see a way to teach Midjourney to create images with my characters - is that right?
    So that leaves me with Stable Diffusion? I have tried training with Astria and was going to try the Google Colab but now I see maybe RunDiffusion is better as it rolls it all together. Can SD with Dreambooth do two characters? All these AI images seem to be single shots, not two shots. It would be great to see a video that goes into detail on training a model in Dreambooth (more detail than your Astria/Colab video), maybe a model that is not human.

    • @LevendeStreg • 1 year ago

      Thank you so much James. Glad you like my videos. Yes, correct, RunDiffusion is a great alternative. But training is no walk in the park, as it often breaks; Auto1111 is known to cause that. And ideally you could train a model with both characters at the same time. This is a hard process; normally I would train a model on each character and then fuse the images together with inpainting. But as you know now, that takes an enormous amount of time. Eventually it will not be so hard with AI. But we’re all learning right now. And though there is progress every day, it’s still not quite there yet when it comes to two characters at a time - unless you first create 3D models of both characters together and then train on those images…

    • @jamescunningham2035 • 1 year ago +1

      @@LevendeStreg I do have 3D models of them, rigged in Maya, and of course I have footage of them in the short film. It would be great to feed the AI the footage and get it to train itself off that.
      Would you train in RunDiffusion over Astria?

    • @LevendeStreg • 1 year ago

      @@jamescunningham2035 Oh nice. I would probably start with Astria - it’s much easier. And at the moment my training keeps breaking in RunDiffusion; don’t really know why. There is also Leonardo - I’m just looking at training in that. I would love to see some of the results, if you’d be open to sharing. And I’d love to do a video where I mention your work… just an idea. People would love to learn… 🙌

  • @TheGimber • 1 year ago +1

    thanks for the comparison! 🎉

    • @LevendeStreg • 1 year ago

      You’re very welcome. Thanks for watching.🙌

  • @rustyroy5385 • 1 year ago +1

    Just got accepted to use Firefly today, and the results are extremely similar to Stable Diffusion. Not sure if it actually is using Stable Diffusion, or it's just using the same base model?

    • @LevendeStreg • 1 year ago +1

      IMHO it’s worse than Stable Diffusion. It’s trained on their own stock images and cloud. But no nudity. It really struggles to make humans🔥

    • @pfizerpricehike9747 • 1 year ago +1

      @@LevendeStreg Not only that, the filters are way too restrictive.
      Adobe literally became a meme in the community for censoring everything, from a necktie to bedsheets, removing humans from pictures, etc.
      Also they don’t have ControlNet, so it’s like half a year behind the current state of the art.
      There’s a reason Tencent copied the code instantly after release, and Microsoft Hong Kong now also has their own implementation of ControlNet in testing.

    • @LevendeStreg • 1 year ago

      @@pfizerpricehike9747 Hahaha... I know this of course. This was when they released Firefly. I'm not impressed with Firefly, but I love the integration into Photoshop 2023 (see this video I did: czcams.com/video/HWoTS4xp9dA/video.html). And they're like 2 years behind the tech, but I think they will catch up quickly. They have the muscle power to buy up lots of newcomers and integrate them into Adobe, so I think they will become a real player in the field. But so will Nvidia: they have the graphics cards everybody wants, plus they can create their own solutions, plus paint-to-words, and their new platform looks amazing. So I think we will end up with many solutions! And artists will probably need to use at least 2-4 of them.

  • @almor2445 • 1 year ago +1

    Nothing's even close to Midjourney except Stable Diffusion.

    • @LevendeStreg • 1 year ago +1

      I absolutely agree!🙌

    • @pfizerpricehike9747 • 1 year ago +1

      I must say Midjourney isn’t really that good; it just gives that impression to beginners.
      It’s great at giving you award-winning photos without any prompting skills, but it becomes really restrictive the better you get, because it will not generate your highly descriptive prompt: even with almost no input, it just makes a guesstimate of what the average consumer most likely wants to see.
      As you can see with the overhead example, it’s far less capable in general, just much better at the low end - exactly like Adobe Firefly. You get almost no artistic freedom, but it’s great at infilling without a prompt, and that’s where the good stuff kinda ends. Also Midjourney is lacking all the extensions that make SD so strong in the first place.
      SD’s biggest weakness compared to DALL-E is that it’s not that good at multiple subjects, especially with different colors in the prompt, which can be easily tweaked with two extensions. One is Cutoff, for color, so you can have your medieval castle with blue banners hanging from it and a red dragon in the same pic; the other lets you specify which part of the image should take which prompt, so you can reliably get the dragon where you want it, instead of everything being based on RNG and luck like in all the alternatives. Midjourney produces the best results out of the box without any prior knowledge, while SD is the pinnacle of the technology if you know how to use it correctly.

    • @LevendeStreg • 1 year ago

      @@pfizerpricehike9747 I'm also very fond of Stable Diffusion. It's by far the AI that I use the most right now and am learning about on a professional level. In the end, I'm guessing that SD will win the battle, because it's open source, people expand and develop on it like crazy, and you can train it. And with ControlNet it's awesome. So yes - for now I actually learn and know about most of the AI solutions. And I'm very happy about the newest Photoshop 2023 BETA update too.

  • @LouisGedo • 1 year ago +1

    👋

  • @ladyrose358 • 1 year ago +3

    No it's not.

    • @LevendeStreg • 1 year ago

      You’re right. It’s not… yet. Maybe it will shape up at a later stage.

  • @jopansmark • 1 year ago +1

    Better than MJ, worse than SD

    • @LevendeStreg • 1 year ago +1

      Yeah, I agree - I’m also a hardcore fan of SD!

  • @briandoortodoordelivery2236

    Was really interested in the comparison, but couldn't make it two minutes in past the excessively overblown fake enthusiasm... dial it back about 8 notches!

    • @LevendeStreg • 1 year ago +1

      Hahaha. I’m a very shy person in private, so I crank up the energy when making the videos. It actually drains a lot of energy. So, thank you for watching the two minutes, though. I really appreciate it 🙌

  • @Draper811 • 1 year ago +1

    10 bucks for the gumroad pdf hahahaha no thanks

    • @LevendeStreg • 1 year ago

      If you use the promo code it’s 30% off, just so you know. Thanks for watching🙌