Few-Shot Learning (2/3): Siamese Networks
- Added: 29 Jun 2024
- Next Video: • Few-Shot Learning (3/3...
This lecture introduces the Siamese network. It can find similarities or distances in the feature space and thereby solve few-shot learning.
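The idea from the lecture can be sketched in a few lines: both inputs pass through the same embedding network (shared weights), and a distance in feature space measures similarity. A minimal NumPy sketch, with a toy linear-plus-tanh embedding standing in for the real CNN (the weights here are random, not trained):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))  # shared embedding weights (toy stand-in for a CNN)

def embed(x):
    """Map an input vector into the feature space; both branches share W."""
    return np.tanh(W @ x)

def distance(x1, x2):
    """Euclidean distance between the two embeddings; small means similar."""
    return np.linalg.norm(embed(x1) - embed(x2))
```

After training on many labeled pairs, same-class inputs end up close under this distance, which is what lets a new class be recognized from just a few examples.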
Slides: github.com/wangshusen/DeepLea...
Lectures on few-shot learning:
1. Basic concepts: • Few-Shot Learning (1/3...
2. Siamese networks: • Few-Shot Learning (2/3...
3. Pretraining and fine-tuning: • Few-Shot Learning (3/3...
This is hands down the best explanation of Siamese networks on YouTube
Please upload more of these English lectures sir! Best content ever! I'm not bored listening to your careful explanations!
After reading dozens of papers (including the original ones) this is the place where I got my understanding of Siamese clear. Thanks.
Mind-blowing and very-well explained. This video succeeds in giving us the intuitive aha moment when you finally understand what few-shot is and how Siamese networks are used for that! Thank you.
Hands down the best tutorial on Siamese Networks!
Best description of Siamese Network, can you also make video on MAML?
Presentation is very well prepared graphically. Simple and with pauses. It looks easy, but it's not. Thank you, Shusen Wang,🙏
Your explanations are very easy to understand. Thank you!
Best tutorial that I have ever seen, much better than those technical articles or Academic thesis which are full of mathematical symbols and formulas
Best lecture. Please keep posting. Best video ever.
Best Video on this topic so far!
Thank you, very explicit explanation. You explained it so well, teacher! Thank you!
Best video on few shot learning
I'm a non-English speaker, but I understand everything.
thank you sir for all the effort you made in this clear explanation it helped me a lot in understanding Siamese network
Clear and easy to understand, great!
Thanks for such a nice explanation.
this lecture is awesome!
Thanks so much for the lectures!!!
Sweet Explanation! Thanks!
Holy shit, I don't know why other articles are a little harder to understand, but this is explained very well. Thanks a lot!
What a great tutorial!
Excellent explanation.
This is freaking awesome !!!!!!!!!!!
I like the detailed explanation
Thank you Wang 😊
Good description of Siamese networks
Great video! :)
I feel like an autoencoder could be used for the classification task and might work better, because an autoencoder maps the input into a latent space that captures the patterns.
Great explanation, thank you. I'm confused about the last example with classification and the support set. I was thinking that after training, the model should already have a distance metric and produce predictions for all the classes it saw during training.
In practice, what mechanism would you use to generate the support set? I ask because let's say your support set contained a bunch of rodents so it might be hard to distinguish a squirrel, whereas you have another support set with a variety of objects including your support squirrel. Obviously, you now have a choice of two support sets where using one will be harder to correctly classify your squirrel. Do we include a metric in the loss that accounts for the distances between the support images? For example, we want to help out when our support images are more similar to one another, but we don't care when our support images are already pretty dissimilar.
Great video - thanks!
Nice, sir. Thank you!
Very similar to word embedding
So the training set is much bigger than the support set? And I only use the support set to help with the classification of query images?
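Yes: the large training set is only used to learn the embedding, and at test time the small support set just supplies reference embeddings. A hypothetical sketch of the query step, with a toy `embed` standing in for the trained Siamese branch:

```python
import numpy as np

def embed(x, W):
    # toy stand-in for the trained Siamese embedding network
    return np.tanh(W @ x)

def classify(query, support_x, support_y, W):
    """Assign the query the label of the nearest support embedding."""
    q = embed(query, W)
    dists = [np.linalg.norm(q - embed(s, W)) for s in support_x]
    return support_y[int(np.argmin(dists))]
```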
Is any PyTorch code available for this?
If you can provide the implementation code, that would be great.
Does triplet loss create clusters of similar images in feature space?
Yes, its goal is to group samples of the same class into a cluster.
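Triplet loss pulls an anchor toward a positive example of the same class and pushes it away from a negative, so same-class points do end up clustered in feature space. The standard hinge form, as a short sketch on embedding vectors:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge triplet loss: max(0, d(a, p) - d(a, n) + margin)."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)
```

The loss is zero once the negative is at least `margin` farther from the anchor than the positive, so minimizing it separates class clusters by at least that margin.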
Commenting a year later: is this the same as SimCLR?