[Classic] Generative Adversarial Networks (Paper Explained)

  • date added 13. 09. 2024

Comments • 69

  • @Youtoober6947
    @Youtoober6947 2 years ago +29

    I don't know if you have an idea, but I would like to tell you that I believe you have NO idea how helpful (and especially how helpful with time management) the Paper Explained series you're doing is for me. These are SERIOUSLY invaluable, thank you so much.

  • @Aniket7Tomar
    @Aniket7Tomar 4 years ago +107

    I am loving these classic paper videos. More of these, please.

  • @TheInfinix
    @TheInfinix 4 years ago +92

    I think that such an initiative will be useful for fresh researchers and beginners.

  • @kateyurkova6384
    @kateyurkova6384 3 years ago +11

    These reviews are priceless, you add so much more value than just reading the paper would bring, thank you for your work.

  • @MinecraftLetstime
    @MinecraftLetstime 4 years ago +14

    These are absolutely amazing, please keep them coming.

  • @JulienneSorel-r5f
    @JulienneSorel-r5f 2 months ago

    I often wished that something like this existed on YouTube. Your series is a dream come true. Many thanks.

  • @datamlistic
    @datamlistic 4 years ago +3

    The classic papers are amazing! Please continue making them!

  • @andresfernandoaranda5498
    @andresfernandoaranda5498 4 years ago +5

    Thank you for making these resources free to the community ))

  • @sulavojha8322
    @sulavojha8322 4 years ago +5

    The classic papers are so good. Hope you upload more such videos. Thank you!

  • @maltejensen7392
    @maltejensen7392 4 years ago +6

    It's extremely helpful to hear your thoughts on what the authors must have been thinking, and things like researchers trying to put MCMC somewhere it was not intended to be. This gives a better idea of how machine learning in academia works. Please continue this, and thanks!

  • @fulin3397
    @fulin3397 4 years ago +6

    A classic paper and a very awesome explanation. Thank you!

  • @bjornhansen9659
    @bjornhansen9659 4 years ago +1

    I like these videos on the papers. It is very helpful to hear how another person views the ideas discussed in these papers. Thanks!

  • @benjaminbenjamin8834
    @benjaminbenjamin8834 3 years ago +1

    @Yannic , this is such a great initiative and you are doing a great great job. Please carry it on.

  • @narinpratap8790
    @narinpratap8790 2 years ago +1

    This was awesome! I am currently a graduate student, and I have to write a paper review for my Deep Learning course. Loved your explainer on GANs. This has helped me understand so much of the intuition behind GANs, and also the developments in Generative Models since the paper's release. Thank you for making this.

  • @agbeliemmanuel6023
    @agbeliemmanuel6023 4 years ago +2

    It's great to have the origin of most models in ML today. Good work

  • @falachl
    @falachl 2 years ago

    Yannic, thank you. In this overloaded ML world you are providing a critical informative service. Please keep it up

  • @ambujmittal6824
    @ambujmittal6824 4 years ago +1

    You're truly a godsend for people who are comparatively new to the field. (Maybe even for experienced ones.) Thanks a lot and keep up the good work!

  • @herp_derpingson
    @herp_derpingson 4 years ago +15

    12:00 I never quite liked the min-max analogy. I think a better analogy would be a teacher-student analogy. The discriminator says, "The image you generated does not look like a real image, and here are the gradients which tell you why. Use the gradients to improve yourself."
    .
    32:30 I am pretty sure these interpolations already existed in the auto-encoder literature
    .
    Mode collapse is pretty common for human teachers and students. Teachers often say that you need to solve the problems the way I taught in class. "My way or the highway" XD

    • @YannicKilcher
      @YannicKilcher  4 years ago +9

      Yes, the teacher-student phrasing would make more sense. I think the min-max is just the formal way of expressing the optimization problem to be solved, and then people go from there into game theory etc.
      The mode collapse could also be the student who knows exactly what to write in any essay to make one particular teacher happy :D
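The min-max objective this thread is debating can be made concrete with a toy example. The sketch below is an illustrative assumption (not code from the video or paper): a one-parameter generator G(z) = z + g is trained against a logistic discriminator on 1-D Gaussian data, alternating discriminator ascent on the value function with a non-saturating generator step. The generator's update is driven entirely by the discriminator's weight, which is exactly the "here are the gradients which tell you why" picture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

mu_real = 3.0            # real data distribution: N(3, 1)
w, b = 0.1, 0.0          # discriminator D(x) = sigmoid(w*x + b)
g = 0.0                  # generator G(z) = z + g, with prior z ~ N(0, 1)
lr, batch = 0.05, 128

for step in range(2000):
    x_real = rng.normal(mu_real, 1.0, batch)
    x_fake = rng.normal(0.0, 1.0, batch) + g

    # Discriminator: gradient ascent on V = E[log D(x)] + E[log(1 - D(G(z)))]
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    w += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    b += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator: non-saturating update, ascend E[log D(G(z))].
    # The step direction comes from the discriminator's weight w.
    d_fake = sigmoid(w * x_fake + b)
    g += lr * np.mean((1 - d_fake) * w)

# After training, the generated mean g has drifted toward mu_real
print(round(g, 2))
```

At the equilibrium the paper describes, fake and real samples are indistinguishable, so D is pushed back toward 1/2 and both updates vanish.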

  • @aa-xn5hc
    @aa-xn5hc 4 years ago +3

    I love these historical videos of yours!!

  • @SallyZhang-vt2oi
    @SallyZhang-vt2oi 4 years ago

    Thank you very much. I really appreciate your understanding of these papers. Please keep on releasing such kind of videos. They helped me a lot. Thanks again!

  • @YtongT
    @YtongT 4 years ago +3

    very useful, thank you for such quality content!

  • @sergiomanuel2206
    @sergiomanuel2206 4 years ago +3

    Very good paper!! Can you please cover the paper for the next big step toward the state of the art in GANs? Thank you!

  • @AnassHARMAL
    @AnassHARMAL 1 year ago

    This is amazing, thank you! As a materials scientist trying to utilize machine learning, this just hits the spot!

  • @DasGrosseFressen
    @DasGrosseFressen 4 years ago +3

    "Historical" in ML : 6 years :D
    The series ist nice, thanks! one question though: you said that the objective is to minimize the exoectations in (1), but the minmax is already performed to get to the equality, right? How does V look?
    Edit: oh, never mind. In (3) you see that (1) is in the typical CS-sloppy notation...
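For reference, the value function being discussed is equation (1) of the GAN paper; written out, the two-player minimax game over the discriminator D and generator G is

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

where p_data is the data distribution and p_z is the prior on the generator's input noise.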

  • @westcott2204
    @westcott2204 1 year ago

    Thank you for providing your insights and current point of view on the paper. It was very helpful.

  • @alexandravalavanis2282

    Damn. I’m enjoying this video very much. Very helpful. Thank you!

  • @frankd1156
    @frankd1156 3 years ago

    Wow... this is gold. Keep it up, man. Be blessed.

  • @aman6089
    @aman6089 2 years ago

    Thank you for the explanation.
    It is a great resource for a beginner like myself!

  • @avishvj
    @avishvj 2 years ago

    brilliant, would love more of these!

  • @Throwingness
    @Throwingness 2 years ago

    I'd appreciate more explanation of the math in the future. This kind of math is rarely encountered by most programmers.

  • @Notshife
    @Notshife 4 years ago +1

    Hey @Yannic, I followed up on the BYOL paper you covered. While I'm not super familiar with machine learning I do feel I implemented something which is mechanically the same as what was presented and I thought it might interest you that the result for me was that it converged to a constant, every time. The exponential moving average weighted network and the separate augmentations did not prevent it. I will be going back through to see if I maybe have made a mistake. But I have been trying a bit of everything and so far nothing has been able to prevent the trivial solution. Maybe I'm missing something, which I hope because I liked the idea. My experimentation with parameters and network architecture has not been exhaustive... But yeah, so far: no magic.

    • @YannicKilcher
      @YannicKilcher  4 years ago +1

      Yes, I was expecting most people to have your experience and then apparently someone else can somehow make it work sometimes.

  • @AltafHussain-gk2xe
    @AltafHussain-gk2xe 2 years ago

    Sir, I'm a big fan of yours. I have been following you for the last year, and I find every one of your videos full of information and really useful. Sir, I request you to please make a few videos on segmentation as well; I shall be thankful to you.

  • @kristiantorres1080
    @kristiantorres1080 3 years ago

    Beautiful paper and superb review!

  • @jintaoren6755
    @jintaoren6755 3 years ago +1

    Why hasn't YouTube recommended this channel to me earlier?

  • @goldfishjy95
    @goldfishjy95 3 years ago

    Hi this is incredibly useful, thank you so much!

  • @kvawnmartin1562
    @kvawnmartin1562 4 years ago

    Best GAN explanation ever

  • @bosepukur
    @bosepukur 4 years ago

    Great initiative... love to see some classic NLP papers

  • @jeromeblanchet3827
    @jeromeblanchet3827 4 years ago +1

    Most people tell stories with data insights and model predictions. Yannic tells stories with papers.
    An image is worth a thousand words, and a good story is worth a thousand images.

  • @dl569
    @dl569 1 year ago

    thanks a lot!

  • @TheKoreanfavorites
    @TheKoreanfavorites 2 years ago

    Great!!!

  • @robo2.069
    @robo2.069 4 years ago

    Nicely explained, thank you. Can you make a video on Dual Motion GAN (DMGAN)?

  • @sweatobertrinderknecht3480

    I'd like to see a mix of papers and actual (Python) code

  • @flyagaric23
    @flyagaric23 4 years ago

    Thank you, excellent.

  • @rameshravula8340
    @rameshravula8340 4 years ago

    Yannic, could you give application examples at the end of each paper you review?

  • @lcslima45
    @lcslima45 3 years ago

    This channel is awesome

  • @utku_yucel
    @utku_yucel 4 years ago

    YES! THANKS!

  • @paulijzermans7637
    @paulijzermans7637 11 months ago

    I'm writing my thesis on GANs at the moment. Would enjoy an interesting conversation with an expert :)

  • @vigneshbalaji21
    @vigneshbalaji21 2 years ago

    Can you please post a video on GAIL?

  • @hahawadda
    @hahawadda 4 years ago +3

    Funny how we can now say the original GAN paper is a classic

  • @shivombhargava2166
    @shivombhargava2166 4 years ago +1

    Please make a video on pix2pix GANs

  • @XOPOIIIO
    @XOPOIIIO 4 years ago +4

    In the future there'll be an algorithm to transform scientific papers into your videos.

    • @adamantidus
      @adamantidus 4 years ago +1

      No matter how efficient this algorithm might be, Yannic will still be faster

  • @dandy-lions5788
    @dandy-lions5788 4 years ago

    Thank you so much!! Can you do a paper on UNet?

  • @DANstudiosable
    @DANstudiosable 4 years ago +1

    What do you mean by a prior on the input distribution?

  • @ehza
    @ehza 3 years ago

    Thanks

  • @jithendrayenugula7137
    @jithendrayenugula7137 4 years ago

    Very awesome explanation! Thanks, man!
    Is it too late, or a waste of time, to play with and explore GANs in 2020, when BERT/GPT are hot and trending in the AI community?

    • @ssshukla26
      @ssshukla26 4 years ago +1

      Is it too late to learn something? No... Is it too late to research GANs? Absolutely not... Nothing is perfect, GANs are not, and there will be decades of research on these same topics. Whether you can make money out of knowing GANs... Ummmm, debatable...

  • @aishwaryadhumale1278
    @aishwaryadhumale1278 3 years ago

    Can I please have more content on GANs?

  • @sadface7457
    @sadface7457 4 years ago +1

    Revisit "Attention Is All You Need", because that is now a classic paper.

    • @audrius0810
      @audrius0810 4 years ago

      He's done the actual paper already

  • @chinbold
    @chinbold 4 years ago

    I'm only inspired by watching your videos 😢😢😢

  • @timothyschollux
    @timothyschollux 4 years ago

    The famous Schmidhuber-Goodfellow moment: czcams.com/video/HGYYEUSm-0Q/video.html