I don't know if you have an idea, but I would like to tell you that I believe you have NO idea how helpful (and especially how helpful with time management) the Paper Explained series you're doing is for me. These are SERIOUSLY invaluable, thank you so much.
I am loving these classic paper videos. More of these, please.
ok indian
+1
I think that such an initiative will be useful for fresh researchers and beginners.
These reviews are priceless, you add so much more value than just reading the paper would bring, thank you for your work.
These are absolutely amazing, please keep them coming.
I often wished that something like this existed on YouTube. Your series is a dream come true. Many thanks.
The classic papers are amazing! Please continue making them!
Thank you for making these resources free to the community ))
The classic papers series is really good. I hope you upload more such videos. Thank you!
It's extremely helpful to hear your thoughts on what the authors have been thinking and things like researchers trying to put MCMC somewhere it was intended not to be. This gives a better idea of how the machine learning in academia works. Please continue this and thanks!
classic paper and very awesome explanation. Thank you!
I like these videos on the papers. It is very helpful to hear how another person views the ideas discussed in these papers. thanks!
@Yannic, this is such a great initiative, and you are doing a great, great job. Please carry on.
This was awesome! I am currently a graduate student, and I have to write a paper review for my Deep Learning course. Loved your explainer on GANs. This has helped me understand so much of the intuition behind GANs, and also the developments in Generative Models since the paper's release. Thank you for making this.
It's great to have the origins of most of today's ML models covered. Good work.
Yannic, thank you. In this information-overloaded ML world, you are providing a critical, informative service. Please keep it up.
You're truly a godsend for people who are comparatively new to the field (maybe even for experienced ones). Thanks a lot and keep up the good work!
12:00 I never quite liked the min-max analogy. I think a better analogy would be a teacher-student analogy. The discriminator says, "The image you generated does not look like a real image, and here are the gradients which tell you why. Use the gradients to improve yourself."

32:30 I am pretty sure these interpolations already existed in the auto-encoder literature.

Mode collapse is pretty common for human teachers and students. Teachers often say that you need to solve the problems the way they taught in class. "My way or the highway" XD
Yes, the teacher-student phrasing would make more sense. I think the min-max is just the formal way of expressing the optimization problem to be solved, and from there people go into game theory etc.
Mode collapse could also be the student who knows exactly what to write in any essay to make that one particular teacher happy :D
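On the 32:30 remark about interpolations: the trick is just a linear walk between two latent codes, decoded at each step, and it works the same way with an auto-encoder's decoder. A minimal sketch, where `generator` is a hypothetical trained model:

```python
import torch

latent_dim = 100
z1 = torch.randn(latent_dim)  # two random latent codes
z2 = torch.randn(latent_dim)

# Linearly interpolate between z1 and z2, then decode each intermediate code.
steps = torch.linspace(0.0, 1.0, 8)
zs = torch.stack([(1 - t) * z1 + t * z2 for t in steps])
# images = generator(zs)  # hypothetical trained generator (or AE decoder)
```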
I love these historical videos of yours!!
Thank you very much. I really appreciate your understanding of these papers. Please keep releasing these kinds of videos. They have helped me a lot. Thanks again!
very useful, thank you for such quality content!
Very good paper!! Could you please cover the paper on the next big step toward the state of the art in GANs? Thank you!
This is amazing, thank you! As a materials scientist trying to utilize machine learning, this just hits the spot!
"Historical" in ML : 6 years :D
The series ist nice, thanks! one question though: you said that the objective is to minimize the exoectations in (1), but the minmax is already performed to get to the equality, right? How does V look?
Edit: oh, never mind. In (3) you see that (1) is in the typical CS-sloppy notation...
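For anyone with the same question: equation (1) of the paper defines the value function V(D, G) of the two-player minimax game as

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right]
```

The expectations themselves are the objective; D is trained to maximize V while G is trained to minimize it.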
Thank you for providing your insights and current point of view on the paper. It was very helpful.
Damn. I’m enjoying this video very much. Very helpful. Thank you!
Wow... this is gold. Keep it up, man. Be blessed.
Thank you for the explanation.
It is a great resource for beginners like myself!
brilliant, would love more of these!
I'd appreciate more explanation of the math in the future. This kind of math is rarely encountered by most programmers.
Hey @Yannic, I followed up on the BYOL paper you covered. While I'm not super familiar with machine learning, I do feel I implemented something mechanically the same as what was presented, and I thought it might interest you that, for me, it converged to a constant every time. The exponential-moving-average-weighted network and the separate augmentations did not prevent it. I will go back through to see if I have made a mistake, but I have been trying a bit of everything, and so far nothing has been able to prevent the trivial solution. Maybe I'm missing something, which I hope, because I liked the idea. My experimentation with parameters and network architecture has not been exhaustive... but yeah, so far: no magic.
Yes, I was expecting most people to have your experience and then apparently someone else can somehow make it work sometimes.
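For readers following this exchange, here is a minimal sketch of the mechanics the comment describes: an online network with a predictor head, an EMA-updated target network, and two augmented views. The toy MLPs and all hyperparameters are illustrative stand-ins, not the paper's setup:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(dim_in, dim_out):
    return nn.Sequential(nn.Linear(dim_in, 256), nn.ReLU(), nn.Linear(256, dim_out))

online_encoder = mlp(32, 16)
online_predictor = mlp(16, 16)
target_encoder = mlp(32, 16)
target_encoder.load_state_dict(online_encoder.state_dict())  # start identical
for p in target_encoder.parameters():
    p.requires_grad = False  # target is updated only via the EMA, never by gradients

optimizer = torch.optim.SGD(
    list(online_encoder.parameters()) + list(online_predictor.parameters()), lr=0.05
)
tau = 0.99  # EMA decay for the target network

def augment(x):
    # Stand-in for the two image augmentations (crops, color jitter, ...)
    return x + 0.1 * torch.randn_like(x)

x = torch.randn(64, 32)  # toy data batch
for step in range(100):
    v1, v2 = augment(x), augment(x)            # two views of the same batch
    p1 = online_predictor(online_encoder(v1))  # online branch predicts...
    with torch.no_grad():
        z2 = target_encoder(v2)                # ...the target branch's embedding
    # BYOL loss: negative cosine similarity (the paper's normalized MSE,
    # up to a constant); the paper also adds the symmetric term with the
    # views swapped, omitted here for brevity.
    loss = -F.cosine_similarity(p1, z2, dim=-1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # EMA update of the target network toward the online network
    with torch.no_grad():
        for pt, po in zip(target_encoder.parameters(), online_encoder.parameters()):
            pt.mul_(tau).add_(po, alpha=1 - tau)
```

Whether a toy setup like this collapses to a constant, as the commenter reports, is exactly the question at issue; the paper's claim is that the predictor plus the EMA target prevents the trivial solution in their full setup.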
Sir, I'm a big fan of yours. I've been following you for the last year, and I find every one of your videos full of information and really useful. I request you to please make a few videos on segmentation as well; I would be thankful.
Beautiful paper and superb review!
Why hasn't YouTube recommended this channel to me earlier?
Hi this is incredibly useful, thank you so much!
Best GAN explanation ever
agreed
Great initiative... would love to see some classic NLP papers.
Most people tell stories with data insights and model predictions. Yannic tells stories with papers.
An image is worth a thousand words, and a good story is worth a thousand images.
thanks a lot!
Great!!!
Nicely explained, thank you. Can you make a video on Dual Motion GAN (DMGAN)?
I'd like to see a mix of papers and actual (Python) code.
Thank you, Excellent.
Yannic, could you give application examples at the end of each paper you review?
This channel is awesome
YES! THANKS!
I'm writing my thesis on GANs at the moment. I would enjoy an interesting conversation with an expert :)
Can you please post a video on GAIL?
Funny how we can now say the original GAN paper is a classic.
Please make a video on pix2pix GANs
In the future there'll be an algorithm to transform scientific papers into your videos.
No matter how efficient this algorithm might be, Yannic will still be faster
Thank you so much!! Can you do a paper on UNet?
What do you mean by a prior on the input distribution?
It's the assumed distribution of the inputs; in a GAN, the generator's input noise z is drawn from a fixed prior such as a Gaussian or uniform distribution.
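Concretely, sampling from the prior is the first step of generation. A minimal illustration, where `generator` is a hypothetical trained network:

```python
import torch

latent_dim = 100  # size of the noise vector z
batch_size = 64

# The prior p(z): a fixed, easy-to-sample distribution for the generator input.
z = torch.randn(batch_size, latent_dim)           # Gaussian prior
# z = 2 * torch.rand(batch_size, latent_dim) - 1  # uniform prior on [-1, 1]

# fake_images = generator(z)  # hypothetical trained generator maps z to samples
```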
Thanks
Very awesome explanation! Thanks, man!
Is it too late, or a waste of time, to play with and explore GANs in 2020, when BERT/GPT are hot and trending in the AI community?
Is it too late to learn something? No... Is it too late to do research into GANs? Absolutely not... Nothing is perfect, GANs are not, and there will be decades of research on these same topics. Whether you can make money out of knowing GANs... ummm, debatable...
Can I please have more content on GANs?
Revisit "Attention Is All You Need", because that is now a classic paper.
He's done the actual paper already
I'm only inspired by watching your videos 😢😢😢
The famous Schmidhuber-Goodfellow moment: czcams.com/video/HGYYEUSm-0Q/video.html