Diffusion models explained. How does OpenAI's GLIDE work?

  • Uploaded 13. 09. 2024

Comments • 111

  • @Mrbits01
    @Mrbits01 2 years ago +54

    As I was about to go and generate the avocado armchair, I heard you say no avocado armchair. My disappointment is immeasurable and my day is ruined.

    • @AICoffeeBreak
      @AICoffeeBreak 2 years ago +6

      Imagine, our day was ruined too! 😭

    • @johnvonhorn2942
      @johnvonhorn2942 2 years ago +1

      Why can't it generate that iconic chair? Paradise lost. We miss those simpler times of that junior AI

    • @AICoffeeBreak
      @AICoffeeBreak 2 years ago +2

      🤣🤣

  • @MachineLearningStreetTalk

    Amazing production quality! Here we go!!

  • @LecrazyMaffe
    @LecrazyMaffe 1 year ago +4

    This video offers one of the best explanations for classifier-free guidance.

  • @r00t257
    @r00t257 1 year ago +3

    love your video so much! lots of helpful intuition 🌻🌻💮Thanks ms. coffee bean a lot

  • @alfcnz
    @alfcnz 2 years ago +3

    Nice high-level summary. Thanks!

  • @daesoolee1083
    @daesoolee1083 2 years ago +2

    Nice explanation! You got my subscription!

  • @AICoffeeBreak
    @AICoffeeBreak 2 years ago +7

    Sorry, the upload seems buggy. Re-uploading did not help. I'll wait to see if this gets better over time.
    Did you try turning it off and on again? 🤖

  • @ElieAtik
    @ElieAtik 2 years ago +3

    This is the only video that goes into how OpenAI used text/tokens in combination with the diffusion model in order to achieve such results. That was very helpful.

  • @emiliomorales2843
    @emiliomorales2843 2 years ago +5

    I was waiting for this Leticia, love your channel, thank you

  • @Nex_Addo
    @Nex_Addo 2 years ago +6

    Thank you for the first effective high-level explanation of Diffusion I've found. Truly, I do not know how I went so long in this space not knowing about your channel.

  • @alexandrupapiu3310
    @alexandrupapiu3310 2 years ago +2

    This was soo informative. And the humour was spot on!

  • @undergrad4980
    @undergrad4980 2 years ago +3

    Great explanation. Thank you.

  • @amirarsalanrajabi5171
    @amirarsalanrajabi5171 2 years ago +2

    Just found your channel yesterday and I'm loving it! Way to go !

  • @jonahturner2969
    @jonahturner2969 2 years ago +25

    Love your channel! Cat videos get millions of views. Your videos might get in the thousands of views, but they have a huge impact by explaining high level concepts to people who can actually use them. Please keep up your exceptional work

    • @AICoffeeBreak
      @AICoffeeBreak 2 years ago +5

      Wow, thank you! Funny, I was thinking about my videos vs. cat videos very recently in a chat with Tim and Keith from MLST. I remember that part was not recorded. It's nice to read that you had the same thought. :)

  • @OP-yw3ws
    @OP-yw3ws 9 months ago +2

    You explained the CFG so well. I was trying to wrap my head around it for a while!

  • @samanthaqiu3416
    @samanthaqiu3416 2 years ago +5

    I love Yannic, but boy do I like your articulate presentation? I think I do

  • @muhammadwaseem_
    @muhammadwaseem_ 7 months ago +1

    classifier-free guidance is explained well. Thank you

  • @klarietakiba1445
    @klarietakiba1445 2 years ago +3

    You always have the best, clear and concise explanations on these topics

    • @AICoffeeBreak
      @AICoffeeBreak 2 years ago +2

      Thanks! ☺️

    • @taseronify
      @taseronify 2 years ago

      I don't think so. I did not understand why noise is added to a perfect image.
      What is achieved by adding noise?
      Can anyone explain it, please?

    • @AICoffeeBreak
      @AICoffeeBreak 2 years ago +1

      @@taseronify We train the model on existing images where we know how they should look. Then, at test time, the model generates new images from new noise.

  • @theeFaris
    @theeFaris 2 years ago +4

    very helpful thank you

  • @HangtheGreat
    @HangtheGreat 1 year ago +2

    very well explained. love the intuition / comparison piece. send my regards to ms coffee bean :D

    • @AICoffeeBreak
      @AICoffeeBreak 1 year ago +1

      Thanks! Ms. Coffee Bean was so happy to read this. :)

  • @_tgwilson_
    @_tgwilson_ 2 years ago +1

    Just started playing around with disco diffusion. This is the best explanation I've found and I love the coffee bean character. Subbed.

    • @AICoffeeBreak
      @AICoffeeBreak 2 years ago +2

      Welcome to the coffee drinkers' club! ☕

    • @_tgwilson_
      @_tgwilson_ 2 years ago +1

      @@AICoffeeBreak ☕
      Thanks, the content on your channel is really well thought out and wonderfully conceived. I really hope the channel grows, and I am quite sure Mr. YouTube will favour a channel dedicated to the architecture that underpins his existence 😀 I spent some time during lockdown going through many chapters of Penrose's The Road to Reality (one of the best and most difficult books I've ever read) with nothing but calc 1 to 3 and some linear algebra under my belt. I'm very interested in studying ML in my free time, as many of the ideas are informed by physics. Thanks again for your educational content; the quality is top notch.

  • @ArjunKumar123111
    @ArjunKumar123111 2 years ago +5

    I'm here to speculate that Ms Coffee Bean knew of the existence of DALL-E 2... Convenient timing...

  • @CristianGarcia
    @CristianGarcia 2 years ago +52

    Something not stated in the video is that Diffusion Models are WAY easier to train than GANs.
    Although it requires you to code the forward and backward diffusion procedures, training is rather stable, which is more gratifying.
    I might release a tutorial on training diffusion models on a toy-ish dataset in the near future :)

    • @AICoffeeBreak
      @AICoffeeBreak 2 years ago +8

      Great point, thanks! 🎯
      Paste the tutorial in the comments, when ready! 👀

    • @MultiCraftTube
      @MultiCraftTube 2 years ago +5

      That would be a great tutorial! Mine doesn't want to learn MNIST 😅

    • @taseronify
      @taseronify 2 years ago +2

      WHY is noise added to a perfect image? And why do we reverse it? To get a clear image? We already had a clear image at the beginning.
      This video fails to explain it.

    • @AICoffeeBreak
      @AICoffeeBreak 2 years ago +7

      @@taseronify Because we train the model on existing images where we know how they should look. Then, at test time, the model generates new images from new noise.

    • @RishiRaj-hu9it
      @RishiRaj-hu9it 1 year ago

      Hi.. just curious to know.. has any tutorial come up?
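
The training stability mentioned in this thread can be seen in how simple the DDPM objective is. The sketch below is illustrative only: the "model" is a stand-in lambda, and the linear noise schedule is an assumption, not the one GLIDE uses.

```python
import math
import random

# Toy sketch of a DDPM-style training step: add noise at a random level t,
# then ask the network to predict that noise back. No GAN-style minimax.

T = 100
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]  # assumed schedule
alpha_bar = []
prod = 1.0
for b in betas:
    prod *= 1.0 - b
    alpha_bar.append(prod)  # cumulative product, often written \bar{alpha}_t

def forward_noise(x0, t, eps):
    """Closed-form q(x_t | x_0): jump straight to noise level t."""
    a = alpha_bar[t]
    return [math.sqrt(a) * x + math.sqrt(1.0 - a) * e for x, e in zip(x0, eps)]

def training_loss(model, x0):
    """One training step: sample t and noise, score the model's noise guess."""
    t = random.randrange(T)
    eps = [random.gauss(0.0, 1.0) for _ in x0]
    x_t = forward_noise(x0, t, eps)
    eps_pred = model(x_t, t)
    # plain MSE between true and predicted noise -- a stable regression loss
    return sum((e - p) ** 2 for e, p in zip(eps, eps_pred)) / len(x0)

random.seed(0)
x0 = [random.gauss(0.0, 1.0) for _ in range(64)]       # a toy "image"
loss = training_loss(lambda x, t: [0.0] * len(x), x0)  # dummy predictor
```

Because the loss is an ordinary regression against known noise, there is no adversary to balance, which is the stability the comment above is pointing at.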

  • @phizc
    @phizc 1 year ago +2

    Wow what a difference a few months make. Dall-E 2 in April, Midjourney in July, and Stable Diffusion in August.
    Hi from the future 😊.

  • @gergerger53
    @gergerger53 2 years ago +4

    Great, as always

  • @balcaenpunch
    @balcaenpunch 2 years ago +4

    At 3:55, in "227" the two "2s" are written differently - I have never seen anyone other than myself do this! Cheers, Letitia. Great video.

  • @Yenrabbit
    @Yenrabbit 2 years ago +4

    What a great explainer video! Thanks for sharing 🙂

  • @Vikram-wx4hg
    @Vikram-wx4hg 2 years ago +1

    Wonderful review - not only does it capture the essential information, it is also interspersed with some very good humor. Looking forward to more from you!

  • @JosephRocca
    @JosephRocca 2 years ago +4

    Astoundingly well-explained!

    • @AICoffeeBreak
      @AICoffeeBreak 2 years ago +5

      Hehe, thanks! Astoundingly positively impactful comment. ☺️

  • @theaicodes
    @theaicodes 2 years ago +4

    Nice video! very instructive!

  • @tripzero0
    @tripzero0 2 years ago +3

    I finally understand diffusion! (Not really but moreso than before)

  • @Mutual_Information
    @Mutual_Information 2 years ago +10

    Very nice video! It's nice to see Diffusion models getting more attention. It seems the coolest AI generated art is all coming from diffusion models these days.

  • @RalphDratman
    @RalphDratman 2 years ago +3

    This is an excellent teaching session. I learned a great deal. Thank you.
    I do not personally need another avocado armchair as that is all we ever sit on now in my house. It turns out that avocados are not ideal for chair construction. When the avocado becomes fully ripe the chair loses its furniture-like qualities.
    I would like to know whether the smaller, released version of GLIDE is at least useful for understanding the GLIDE architecture and getting a feel for what GLIDE can do.

    • @AICoffeeBreak
      @AICoffeeBreak 2 years ago +1

      Haha, your middle line cracked me up.
      Regarding your last question, the answer is rather no. Scale enables some capabilities that small data and models simply do not show.

  • @DeepFindr
    @DeepFindr 2 years ago +7

    Very nice video! I'm working with flow-based models atm and also came across Lilian Weng's blog post, which is superb. I feel like diffusion models and flow-based models share some similarities. In fact, all generative models share similarities :D

  • @Micetticat
    @Micetticat 2 years ago +9

    Amazing video. All concepts are explained so clearly. "Teeeeeext!" That notation made me laugh. It seems that the classifier-free guidance technique they are using could be applied in a lot of other cases where multimodality is required.

  • @chainonsmanquants1630
    @chainonsmanquants1630 2 years ago +2

    thx

  • @alexijohansen
    @alexijohansen 2 years ago +4

    Very nice video!

  • @sophiazell9517
    @sophiazell9517 2 years ago +2

    "Is this a weird hack? - Yes, it is!"

  • @spacemanchris
    @spacemanchris 2 years ago +5

    Thanks so much for this video and your channel. I really appreciate your explanations, I'm coming at this topic from the art side rather than the technical side so having these concepts explained is very helpful. For the last month I've been producing artwork with Disco Diffusion and it's really a revolution in my opinion. Let me know if you'd like to use any future videos and I can send you a selection.

    • @AICoffeeBreak
      @AICoffeeBreak 2 years ago +5

      Hey, write me an email or tell me your Twitter handle.

  • @marcocipriano5922
    @marcocipriano5922 2 years ago +4

    you can feel this is serious stuff by the workout background music.
    Super interesting topic and a very clear video considering how many complex aspects were involved.
    14:20 I wonder what GLIDE predicts here on the branch that takes just noise, without the text (at least at the first iteration?).

    • @AICoffeeBreak
      @AICoffeeBreak 2 years ago +4

      RE: music. Cannot leave the impression we are talking about unimportant stuff, lol. 😅
      RE: Prediction without text from just noise. I think the answer is: something. Like, anything, but always depending on the noise that was just sampled. Different noise => different generations. Being the first step out of 150, this would mean that it basically adds here and there pieces of information that can crystallize in the remaining 149 iterations.

  • @Neptutron
    @Neptutron 2 years ago +7

    I love your videos! I also love how many comments you respond to...it makes it feel more like a community than other ML channels
    The idea of generating globally coherent images via a U-Net is pretty cool - the global image attention part is weird; I'll have to look into it more, lol.
    From DALLE-2 it seems another advantage of diffusion models is that it can be used to edit images, because it can modify existing images somehow

    • @AICoffeeBreak
      @AICoffeeBreak 2 years ago +4

      Hey, thanks! Yes, we totally forgot to mention how editing can be done: basically, you limit the diffusion process to only the area you want to have edited. The rest of the image is left unchanged.

    • @RfMac
      @RfMac 2 years ago +2

      @@AICoffeeBreak yeah, I agree, your videos are awesome! I just found your channel, and it covers so many recent papers! I'm watching a bunch of your videos hahah
      And is global image attention covered in some other video?
      Thanks for the content!
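
The editing trick described in this thread (limit diffusion to the region you want changed) can be sketched in a few lines. Everything below is a toy stand-in: `denoise_step` is a hypothetical placeholder for a real reverse-diffusion update, and the linear noise "schedule" is assumed for illustration.

```python
import random

# Sketch of mask-limited editing (inpainting): generate only where the
# mask is 1, and pin everything else to the re-noised original each step.

random.seed(0)

def denoise_step(x, t):
    return [0.9 * v for v in x]  # placeholder for one model-driven step

def inpaint(original, mask, steps=50):
    """mask[i] == 1 marks pixels to regenerate; mask[i] == 0 stays original."""
    x = [random.gauss(0.0, 1.0) for _ in original]   # start from pure noise
    for t in reversed(range(steps)):
        x = denoise_step(x, t)
        level = t / steps                             # crude schedule stand-in
        known = [o + level * random.gauss(0.0, 1.0) for o in original]
        # overwrite the known region, so only the masked area is generated
        x = [m * g + (1 - m) * k for m, g, k in zip(mask, x, known)]
    return x

img = [1.0] * 8
mask = [1, 1, 1, 1, 0, 0, 0, 0]                       # edit the left half only
out = inpaint(img, mask)
```

At the final step the added noise level reaches zero, so the unmasked region ends up exactly equal to the original, which is the "rest of the image is left unchanged" behaviour from the reply above.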

  • @BlissfulBasilisk
    @BlissfulBasilisk 2 years ago +5

    Teeeeext!

  • @alexvass
    @alexvass 1 year ago +2

    nice and clear

  • @Youkouleleh
    @Youkouleleh 2 years ago +4

    Is it possible to create an embedding of an input image using a diffusion model? If the way to do it is to add noise, does the embedding still have interesting properties? I would not think so.

    • @AICoffeeBreak
      @AICoffeeBreak 2 years ago +3

      Maybe I lack imagination, but I also do not think so. The neural net representations just capture the noise diff, which is not really an image representation.

    • @Youkouleleh
      @Youkouleleh 2 years ago +1

      @@AICoffeeBreak I have another question: is the network used during the denoising part (predicting the noise to remove it) the same at every noise level, or are there N different models, one for each level of noise?

    • @AICoffeeBreak
      @AICoffeeBreak 2 years ago +2

      The same model for each step. :)

    • @Youkouleleh
      @Youkouleleh 2 years ago +1

      @@AICoffeeBreak Just for information, there is indeed "no single latent space" because the sampling procedure is stochastic. That is why some people proposed a deterministic approach to produce samples from the target distribution: DDIM (denoising diffusion implicit models), which does not require retraining the DDPM but only changes the sampling algorithm, and allows the concept of a latent space and an encoder for diffusion models.
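
Both points in this sub-thread can be sketched together: a single network is reused at every step (the timestep t is just an extra input), and a DDIM-style update injects no fresh noise per step, so a fixed starting latent always yields the same output. The noise predictor below is a toy stand-in, not a trained model, and the schedule is assumed.

```python
import math
import random

T = 50
alpha_bar = []
prod = 1.0
for t in range(T):
    prod *= 1.0 - (1e-4 + 0.02 * t / (T - 1))  # assumed linear beta schedule
    alpha_bar.append(prod)

def eps_model(x, t):
    """One shared 'network' for all steps; t enters only as conditioning."""
    return [0.1 * v * (t + 1) / T for v in x]

def ddim_sample(x):
    for t in reversed(range(1, T)):
        eps = eps_model(x, t)
        a, a_prev = alpha_bar[t], alpha_bar[t - 1]
        # deterministic DDIM update: estimate x0, then step to level t-1
        x0_hat = [(v - math.sqrt(1 - a) * e) / math.sqrt(a)
                  for v, e in zip(x, eps)]
        x = [math.sqrt(a_prev) * x0 + math.sqrt(1 - a_prev) * e
             for x0, e in zip(x0_hat, eps)]
    return x

random.seed(42)
z = [random.gauss(0.0, 1.0) for _ in range(4)]   # fixed starting latent
a_run = ddim_sample(list(z))
b_run = ddim_sample(list(z))                      # same latent, second run
```

Running the sampler twice from the same latent gives identical outputs, which is exactly the property that lets DDIM treat the starting noise as a latent code.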

  • @MakerBen
    @MakerBen 2 years ago +2

    Thanks!

  • @shahaffinder5355
    @shahaffinder5355 2 years ago +1

    Great video :)
    One small mistake I would like to point out is at 6:30, where the example with the extra arrow is in fact a Markovian structure (Markov random field), but not a chain :)

  • @marcinelantkowski662
    @marcinelantkowski662 2 years ago +4

    I absolutely love your channel and the explanations you provide, thanks for all the great work you put into these videos!
    But here I don't fully get the intuition behind the step-wise denoising:
    At step T we ask the network to predict the noise from step T-1, correct?
    But the noise at step T-1 is indistinguishable from the noise at step T-2, T-3, ... T-n, no?
    Let's say we add some random noise only twice: img = (img + noise_1) + noise_2
    It seems like a non-identifiable problem! I can imagine we could train the network to predict (noise_1 + noise_2),
    but it should be physically impossible to predict which pixels were corrupted by noise_1, which were corrupted by noise_2?
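
The intuition in the comment above is exactly right, and standard DDPM training leans on it: the network is never asked to untangle noise_1 from noise_2. Conditioned on the step t, it predicts only the accumulated noise, because two Gaussian noising steps collapse into a single Gaussian in closed form. A quick numerical check with a toy two-step schedule (the alpha values below are assumptions for illustration):

```python
import math
import random
from statistics import pvariance

random.seed(0)
a1, a2 = 0.99, 0.98                     # per-step "alpha" values (assumed)
n = 100_000
x0 = [random.gauss(0.0, 1.0) for _ in range(n)]

# two sequential noising steps: x2 = sqrt(a2) * (sqrt(a1)*x0 + n1) + n2
x2 = []
for v in x0:
    n1 = math.sqrt(1 - a1) * random.gauss(0.0, 1.0)
    n2 = math.sqrt(1 - a2) * random.gauss(0.0, 1.0)
    x2.append(math.sqrt(a2) * (math.sqrt(a1) * v + n1) + n2)

# the leftover noise behaves as ONE Gaussian with variance 1 - a1*a2,
# i.e. x2 ~ sqrt(a1*a2) * x0 + combined_noise
combined = [b - math.sqrt(a1 * a2) * a for a, b in zip(x0, x2)]
var_combined = pvariance(combined)
mean_combined = sum(combined) / len(combined)
```

So the problem is identifiable in the only sense that matters: the model regresses the combined noise at level t, never the individual increments.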

  • @lewingtonn
    @lewingtonn 2 years ago

    bless your soul!

  • @Sutirtha
    @Sutirtha 2 years ago +2

    Amazing video.. Any recommendations for Python code to implement this model with a custom dataset?

  • @RfMac
    @RfMac 2 years ago +2

    I would like to give this video 1000 likes!

  • @aungkhant502
    @aungkhant502 1 year ago

    What is the intuition behind the classifier-free approach?
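
The classifier-free guidance intuition asked about here fits in one line: run the same diffusion model twice, once with the text prompt and once with an empty prompt, then extrapolate past the conditional prediction. The two input vectors below are made-up stand-ins for the model's noise predictions.

```python
# Classifier-free guidance: combine conditional and unconditional
# noise predictions from the SAME model.

def cfg(eps_cond, eps_uncond, scale):
    # scale = 1 recovers the plain conditional model; scale > 1 pushes the
    # prediction further in the direction the text prompt suggests
    return [u + scale * (c - u) for c, u in zip(eps_cond, eps_uncond)]

eps_uncond = [0.0, 0.5]     # "what the model does with no text"
eps_cond = [1.0, -0.5]      # "what the model does given the prompt"
guided = cfg(eps_cond, eps_uncond, 3.0)
```

The guidance scale trades diversity for prompt adherence: larger scales amplify whatever the text changed about the prediction.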

  • @bhuvaneshs.k638
    @bhuvaneshs.k638 2 years ago +1

    How does a U-Net become a Markov chain if there are skip connections?
    Can you explain this? I didn't get it exactly

    • @AICoffeeBreak
      @AICoffeeBreak 2 years ago +2

      It's not the U-Net that is Markov, but the succession of steps, where at each step you apply a U-Net or something else.

  • @renanmonteirobarbosa8129
    @renanmonteirobarbosa8129 2 years ago +2

    Letitia, do you have a Discord for the channel?

  • @adr3000
    @adr3000 1 year ago

    Question: Can the NOISE (input) be used as a SEED to make the diffusion model's outputs highly deterministic? (Assuming the trained model (PT or whatever) is the same?)

  • @core6358
    @core6358 2 years ago +1

    You should do an update video now that DALL-E 2 and Imagen are out and people are hyping them up.

    • @AICoffeeBreak
      @AICoffeeBreak 2 years ago +1

      We already have a video on Imagen. 😅

    • @AICoffeeBreak
      @AICoffeeBreak 2 years ago +1

      Imagen video. czcams.com/video/xqDeAz0U-R4/video.html

    • @AICoffeeBreak
      @AICoffeeBreak 2 years ago +1

      And a DALL-E 2 secret language video. czcams.com/video/MNwURQ9621k/video.html

  • @Imhotep397
    @Imhotep397 2 years ago

    Does the diffusion model essentially work like Chuck Close's art method, while CLIP actually finds the requisite parts to be put together to create the crazy images? Also, how do you even get an invite to Imagen or DALL-E to test this beyond all the possibly rigged samples they have up?

  • @aifirst9478
    @aifirst9478 2 years ago

    Thanks for this amazing video. Do you know any online course where we can practice with training diffusion models?

  • @ithork
    @ithork 2 years ago

    Can anybody recommend a video that describes how this works in less technical terms? Like explain it to an art major?

  • @lendrick
    @lendrick 2 years ago +4

    "open" AI

  • @peterplantec7911
    @peterplantec7911 1 year ago

    You lost me from time to time, but I think I have an overview now. I wish you had explained better how diffusion models decide what they are going to use in their construction of the image. Sure, it goes from noise to image, but if I use Ken Perlin's noise, it doesn't have any image component in it. So how does the diffusion model pull image information out of it?

  • @hoami8320
    @hoami8320 2 months ago

    I'm sorry,
    😁 can you decode the architecture of the Meta Llama 3 model?

  • @DuskJockeysApps
    @DuskJockeysApps 6 months ago

    Well I went to have a look at the Glide Text2im. To say I am not impressed would be an understatement. My prompt was "girl with short blonde hair, cherry blossom tattoos, pencil sketch". What did I get back, after 20 minutes? A crude drawing of 2 giraffes. And the one on the left is barely recognisable.

  • @bgspss
    @bgspss 2 years ago

    Can someone please explain how exactly this model was inspired by non-equilibrium thermodynamics?

  • @julius4858
    @julius4858 2 years ago +1

    „Open“ai

  • @jadtawil6143
    @jadtawil6143 2 years ago +2

    i like you

    • @DerPylz
      @DerPylz 2 years ago +1

      I like you, too

  • @DazzlingAction
    @DazzlingAction 2 years ago +2

    Why is everything a chain lately... kind of laughable...

  • @ujjwaljain6416
    @ujjwaljain6416 2 years ago

    We really don't need that coffee bean jumping around in the video.

  • @stumby1073
    @stumby1073 2 years ago +1

    I'm so stupid

  • @diarykeeper
    @diarykeeper 2 years ago

    Give me vocal isolation.
    Spleeter and uvr are nice, but if image stuff can work this well, apply it to music.
    Gogogo