Convolutional Neural Networks - EXPLAINED

  • Published 27 Jul 2024
  • In this video, we talk about Convolutional Neural Networks. Give the video a thumbs up and hit that SUBSCRIBE button for more awesome content.
    Code to demonstrate Equivariance wrt Translation: github.com/ajhalthor/cnn-note...
    My video on Generative Adversarial Networks: • Generative Adversarial...
    INVESTING
    [1] Webull (You can get 3 free stocks setting up a webull account today): a.webull.com/8XVa1znjYxio6ESdff
    Music at: www.bensound.com/royalty-free...
    REFERENCES
    First CNN Paper: yann.lecun.com/exdb/publis/pdf...
    The Deep Learning Book for details: www.deeplearningbook.org/conte...
    DCGAN Paper: arxiv.org/abs/1511.06434
    About CNNs : cambridgespark.com/content/tu...
    Flatten Vs. FC1: alexisbcook.github.io/2017/gl...
    Difference between CNN & MLP: www.quora.com/What-is-the-dif...
    Nine Deep learning papers you should know about: adeshpande3.github.io/adeshpa...
    Global Average Pooling (GAP) Layers: alexisbcook.github.io/2017/gl...
    Equivariance: arxiv.org/pdf/1411.5908.pdf
    CNN slides based on the deep learning book (bird's-eye read): www.cedar.buffalo.edu/~srihari...
    Google's Deep Mind Learns how to walk: • Google's DeepMind AI J...
    Generate CNNs visually : github.com/yu4u/convnet-drawer
    Playing Atari with Deep Reinforcement Learning: arxiv.org/pdf/1312.5602v1.pdf
    Deep Mind's Q-Learning playing Atari games: • Google DeepMind's Deep...

Comments • 153

  • @taihatranduc8613 • 4 years ago +3

    you made me realize there are indeed other YouTubers who "don't really know much about" what they're saying (0:17). You explain it the best way on YouTube, especially the structure of the CNN.

  • @sharpshootoyaj • 3 years ago

    This is genuinely a brilliant explanation. Many thanks

  • @insidiousmaximus • 3 years ago +4

    mate, I have been working as a junior AI engineer for over a year now and I have successfully deployed custom-built CNNs on NVIDIA hardware, but I am still learning from your videos! Just discovered them and watched them all back to back. Best videos I have found, and I watch a hell of a lot of videos on this topic! I have also read some hardcore books on it. Your videos are par excellence, please keep making them! Would love to see some practical examples: there are many tutorials on how things like segmentation and superpixels WORK, but nobody wants to show us how to actually implement them in a custom network and display the results, e.g. detect flame or smoke. When it comes to practical solutions nobody really goes beneath the provided API examples! Very frustrating.

  • @sneha_more • 1 year ago

    The way you explained it made me realize how much I didn't know about CNNs. I wonder when you read so many papers. Thanks for sharing your knowledge. Helps a lot.

  • @darasingh8937 • 2 years ago

    Thanks a lot for not having a superficial touch of the topic. Keep it up!

  • @neillunavat • 3 years ago +1

    You explain better than well established organizations boi!! Keep it up.

  • @xxdxma6700 • 2 years ago

    Such an amazing video, man. The best educational video I have watched in a while.

  • @HafeezUllah • 2 years ago

    I had no idea about CNNs at all; this was great and has given me immense confidence in learning about them. Great video, explained beautifully from start to finish.

  • @smealzzon • 5 years ago +1

    Great video, filled in a lot of gaps of understanding.

  • @artinbogdanov7229 • 3 years ago

    Great explanation. Thank you!

  • @TheRealJackfrog • 4 years ago

    Well done! Your voice and method left me wanting a more detailed explanation from you.

    • @TheRealJackfrog • 3 years ago

      Maybe you could give that explanation over a cup of hot chocolate by the fire as we cuddle up, listening to the latest episode of the Lex Fridman Podcast together. We laugh as Lex goes off on some profound tangent about how the human mind is hard to understand. "That's not the only thing that's hard" I think to myself, as you spoon me ever so gently. It's a perfect night. Just you and me, by the fire, as the sky darkens outside the cabin windows. I know that you could never leave me wanting more...
      Sorry, I got a paper due in 9 days that I don't want to write.

  • @abhilasht6471 • 5 years ago +14

    thank you so much for an amazing video. Even after going through several videos I did not get the concept clear; after this video all of my doubts are cleared.
    please make hands-on tutorials, it's a humble request. hope to see you soon
    small correction @16:40: the calculation gives 12.5, not 13.5, i.e. (26-2+1)/2 = 12.5

  • 4 years ago

    you provide references, thank you very much. Your videos are great.

  • @ozancanacar8237 • 5 years ago +31

    Thank you so much! Everyone else just explains it like: "So this is convolution and that generates these numbers and these are our feature cubes and you apply pooling and get that... let's jump into the Python code I wrote in 5 weeks, but imma explain it in 15 seconds". You've explained all these concepts clearly and one by one. Can you make a video about training the CNN? It would be awesome.

    • @SuperMaDBrothers • 2 years ago

      5 weeks? Nah bro they're not as dumb as you are lol. But seriously code is a shit way of explaining something. You should check out lectures from universities though, this video was pretty shit too

  • @manishsharma2211 • 4 years ago

    Bang on. Explained very well

  • @ehsankiani542 • 4 years ago

    Well done! Thanks buddy.

  • @MrStudent1978 • 5 years ago

    Excellent explanation

  • @prodbreeze • 1 month ago

    YOU HAVE MADE ME ACTUALLY LIKE ML DL for the first time

  • @nurfaizahmusa496 • 4 years ago +1

    Great video, this is really helpful and detailed. Loved it!!!

  • @sciWithSaj • 3 years ago

    Thank you very much.
    Cleared lots of doubts.

  • @fahnub • 2 years ago

    this is just so good. thank you for this.

  • @Bilangumus • 1 year ago

    Still relevant today, thanks.

  • @terencechengde • 1 year ago

    very well explained! good job! thank you so much for putting the effort in this video!

  • @clearwavepro100 • 5 years ago

    gonna need to subscribe bc multiple videos about audio and cnns ! :) yes!

  • @sambarajuchiluveru8444 • 6 years ago +1

    hello, thank you for the video. I have a question: how do we deal with pooling in the one-dimensional input case?

  • @anemoiacApache • 5 years ago +11

    Should've found this a month ago before i proceeded to try and learn this on the fly and just embarrassed myself in front of my department

  • @alexfourie6491 • 3 years ago

    Nice video, quick question though. How do you determine the weights in each filter? I would assume they are randomly assigned like the weights in a normal neural network on the first feed-forward pass.
    Follow up question:
    How would one then go about updating the weights in each filter?
    Thank you

  • @GKS225 • 3 years ago

    Awesome video! Keep it up!

  • @sokiprialajonah4932 • 3 years ago +1

    this video really helped me a lot

  • @mohammedhassan7770 • 5 years ago

    Good job, thanks.

  • @natjimoEU • 4 years ago

    great video mate.

  • @MustafaHoda • 5 years ago

    The 32 filters that are demonstrated at 8:46: are the filters in the other layers behind the first the same or different?

  • @danishnawaz7869 • 4 years ago

    Thank you!

  • @himanshusrihsk4302 • 4 years ago

    Please make a video on visual question answering

  • @JohnUsp • 3 years ago +9

    17:00 - From 13x13x32 to conv3x3,64: how is the volume/depth of 32 handled? I understand the result of 11x11x64 (filters), but are those 32 layers summed/packed and sent to conv3x3x64?

    • @thomasmarsden1870 • 3 years ago +3

      lmao I have the same question. Pretty sure there are 64 filters of shape 3*3*32.

  • @IndiaNirvana • 6 months ago

    Great videos. One small question: at 5:07, how did you select the weights of the 3 by 3 filter?

  • @adam_sporka • 3 years ago

    Thank you very much!

  • @abhijitmahapatra8024 • 4 years ago

    Hello AJ, today I discovered your channel (subscribed long back but never explored this much) and guess what, you provide a much simpler intuition for topics that are hard to grasp, within minutes. Can you do the same for some machine learning topics like ARIMA and other predictive models..?! Anyhow, great content. Really appreciate your effort and knowledge.

    • @CodeEmporium • 4 years ago +1

      I've been playing around with time series models recently too. Not sure if there is enough drive for a video at this time, but I will definitely keep this in mind.

    • @abhijitmahapatra8024 • 4 years ago

      CodeEmporium That would be a great help. thanks for the reply AJ can’t thank enough for your efforts.

  • @SuryadiputraLiawatimena • 6 years ago +8

    Please explain again why we have 32 and 64 layers (feature maps)? Where do these numbers come from: are they calculated or just picked? thanks.

    • @manishsharma2211 • 4 years ago

      Sir, it depends on how many feature maps you need. These numbers are the ones most commonly used.

    • @ravikumarhaligode2949 • 3 years ago

      I am also having the same query: how do we decide how many filters are required?

  • @ishaquenizamani9800 • 2 years ago

    your videos are great, please make a video on U-Net plz

  • @ocnarfchan4857 • 4 years ago

    How does back propagation work for Convolutional Neural Network?

  • @user-pz1jj4eu7g • 3 years ago

    thank u, teacher

  • @MrRameeez • 5 years ago

    What is a dense layer, and why is it 512??

  • @honeyrulesintheworld • 2 years ago

    hi, can you tell me how to find the confusion matrix for image retrieval using a CNN?

  • @manoharrengasamy4174 • 1 year ago

    Thanks, good explanation of filters. Can you share links on how filters/kernels are prepared? For one object, how many filters are required at minimum? Also the development and updating of filters, up to the latest YOLO model.

  • @TawhidShahrior • 2 years ago

    man you are a genius.

  • @abdulcustom • 3 years ago +12

    This is a great video. I have one small doubt. @17:11 How do you apply 64 kernels on 32 response maps and get 64 response maps in the next layer?

    • @gentix8564 • 2 years ago +2

      remember the depth of each filter is 32. so actually, you apply 64 3*3*32 filters, which is why the output depth is 64.

    • @npip99 • 2 years ago

      Thank you for this question! Wondering the same thing!

    • @npip99 • 2 years ago

      Ah thank you, so each takes the 3x3 over all of the previous filters.

    • @ttb1513 • 1 year ago

      17:27 Out.width = 13 - 2 + 1 = 11. Something is wrong here, as 13-2+1 is 12.

  • @amithm3 • 2 years ago

    finally the video I wanted: how to convert the deep volume matrix into ANN input. I have one doubt: suppose we have a 28x28 pixel image and the first CNN layer has 3 kernels, so we get 3 feature maps. Now if the next layer has 64 kernels, how many feature maps do we get: is it 64 * 3, or just 64? If it is only 64 maps, how do we convolve the 3 feature maps into 64 feature maps using only 64 kernels? Should we sum the 64 * 3 maps we get into 64 maps??

  • @Nuns341 • 2 years ago

    how does h (height) change from 3 to 2?

  • @bankawat1 • 4 years ago

    good one

  • @malihafarahmand75 • 4 years ago

    how do you calculate the 512 and 512 dense layers?

  • @konstantin7596 • 1 year ago

    I think at 16:32 the +1 should be outside the fraction in the end again?

  • @StevenSmith68828 • 4 years ago

    Where does 32 come from?

  • @suchismitamohapatra4846

    Hey, can I get the whole content with diagrams?

  • @LovedbyGod4ever • 3 years ago

    Thank u bro

  • @rangaeeee • 3 years ago

    The "About CNNs" URL is broken... please update it to the latest one

  • @raghavamorusupalli7557

    Location independence is an important feature

  • @SvSzYT • 3 years ago

    hey man,
    is it somehow possible to ask you some questions in terms of my master thesis? ;)

  • @shrutiprasad3354 • 3 years ago

    greatest of all the other videos

  • @yashpandit832 • 4 years ago +1

    One doubt: in the last image shown, what will the depth of each filter be in the second conv. layer? My understanding is that it will be 32, as the input depth is 32, i.e. a filter of 3x3x32. Am I right, or is there something I have misunderstood? Plz help.

  • @changqunzhang1277 • 1 year ago

    Thank you very much! This is a great video containing much helpful information. Really appreciate the time and effort you spent making it. Here is a question: when conv 3*3, 64 is applied to the 13*13*32 volume, isn't the result 11*11*(64*32)? For each of the 32 layers, 64 filters were applied. One more thing: I believe 13-2+1 = 11 is not correct (13-2+1 is 12) @17:29

    • @chriswalsh5925 • 1 year ago

      Yes! I thought the same... it is confusing enough as it is! :D Maybe a mistake, or something not mentioned about how the convolution works?

  • @reasoning9273 • 1 year ago

    Actually, CNNs were introduced a bit earlier. I recall it was LeCun's 1989 paper.

  • @Hassan.Wahba.97 • 3 years ago +1

    I just noticed that we round up when pooling, we don't floor, because (26 - 2 + 1)/2 is 12.5, not 13.5

    • @psychotropicalfunk • 2 years ago

      7 months later, but I noticed the same. Either that, or he by mistake calculated using the first output and took 28 instead of 26: (28-2+1)/2 = 13.5

  • @amirulsadikin8716 • 5 years ago

    Thank you so much... you saved me a lot of reading time....

  • @Geoters • 6 years ago +10

    Sorry, one moment is not clear. After the first convolution (and max pool) we end up with 13x13x32. When conv3x3,64 is applied, how does that work? We had 32 layers (feature maps). If we applied conv3x3,64 to each layer we would end up with 32x64 layers, but we end up with only 64 layers. thanks

    • @CodeEmporium • 6 years ago +1

      When we have a 13×13×32 volume and apply one filter of 3×3×32, we get an 11×11 feature map. So if we apply 64 such filters to the 13×13×32 volume, we end up with 64 such 11×11 feature maps. In other words, an output of 11 × 11 × 64.

    • @Geoters • 6 years ago

      Sorry, allow me to rephrase the question. At 4:50 you apply the 3x3x1 conv filter to a 5x5x1 image, basically just weighting and adding the pixels that fit into the 3x3 square. How would you apply a 3x3x1 filter to a 5x5x2 image (2 layers of 5x5x1)? By weighting and adding pixels from both layers?

    • @CodeEmporium • 6 years ago

      Depth of the filter and the input should be the SAME. 3 x 3 x 1 filter convolves with a 5 x 5 x 1 image as they have the same depth (1). But in the case of 5 x 5 x 2, we NEED to apply a filter of shape 3 x 3 x 2. A 3 x 3 x 1 filter will only convolve with one of the 5 x 5 x 1 layers. We don't take the average of both layers as they represent different data. Hope that makes sense.

    • @Geoters • 6 years ago

      15:35. After the first convolution and pooling we end up with 13x13x32. So how do we apply convolution 3x3,64 to it? We have 32 layers of a 13x13 grid. So now we apply the 3x3 convolution filter 64 times and end up with 64 layers. How do we do that, since we have 32 layers in the source?

    • @CodeEmporium • 6 years ago +3

      We don't apply convolution with a 3 x 3 x 64 filter. We apply convolution for 64 filters of shape 3 x 3 x 32, each with the input 13 x 13 x 32. The result of each convolution will be a 11 x 11 output. Since we have 64 such convolution operations, we end up with 11 x 11 x 64. Just note the OUTPUT depth is equal to the number of filters chosen for convolution. And the depth of filter is equal to the depth of INPUT.
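The reply above can be sketched in a few lines of NumPy. The shapes are the ones discussed in this thread (13x13x32 input, 64 filters of 3x3x32); the triple loop is an illustrative, deliberately slow convolution, not the video's code:

```python
import numpy as np

x = np.random.rand(13, 13, 32)          # input volume (H, W, C_in)
filters = np.random.rand(64, 3, 3, 32)  # 64 filters, each spanning the FULL input depth

out = np.zeros((11, 11, 64))            # 13 - 3 + 1 = 11 per spatial side
for f in range(64):
    for i in range(11):
        for j in range(11):
            # each output pixel sums over an entire 3x3x32 patch,
            # which is why 32 input channels collapse into one map per filter
            out[i, j, f] = np.sum(x[i:i+3, j:j+3, :] * filters[f])

print(out.shape)  # (11, 11, 64)
```

So the output depth equals the number of filters (64), and each filter's depth equals the input depth (32), exactly as the reply says.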

  • @ahmedsabbir5862 • 4 years ago +2

    @17:25, Output (width) = (13-3+1)/1, so the result will be 11

    • @CodeEmporium • 4 years ago +2

      You are right. Will like this so others can see it. Nice catch!

    • @ahmedsabbir5862 • 4 years ago

      @@CodeEmporium You're welcome. You should do some tutorials on Kaggle problem solving; it would be helpful.

  • @samarpitasnani7996 • 3 years ago

    can I get the slides for this?

  • @sathishp6257 • 6 years ago +1

    17:25 how come h(width) is 2, when after doing the arithmetic Out(width) is 11? And as per my observation, for conv3x3, 64 the kernel size (h(width)) should be 3, right?

    • @CodeEmporium • 6 years ago +1

      When we have a 13×13×32 volume, and apply convolution with one filter of 3×3×32. This will give us an 11×11 feature map (as the stride is 1). Apply 64 such kernels, we get 64 such 11×11 feature maps i.e. a 11×11×64 volume.

    • @wlxxiii • 5 years ago

      Mistake in the slide: should be 13 - 3 + 1 = 11

    • @deepakkumarshukla • 1 year ago

      @@CodeEmporium Where does this 3*3*32 filter come from? Did I miss something, or is something missing in the images shown?

  • @mpcr9799 • 3 years ago

    I know how a filter in a Convolutional Neural Network "scans" the input image, multiplies the values of the kernel with the corresponding receptive field in the input image, and adds it all up to get a new pixel in the output activation map. But I'm unsure how the numbers in a filter are decided.
    Is the kernel a patch chosen from the image, like a 5x5 patch of the image that the network decides is good to use as a filter? Or are they random numbers that backpropagation will soon change to fit the data best? And would these numbers in the filter be considered the weights of the network?
    Thanks for any help.

    • @barnabyroberts7950 • 3 years ago

      The values in the kernel are randomly initialised and altered via backpropagation. If you know about simple densely connected networks, then you can consider a single weight in this type of network to be analogous to a 2D kernel that convolves a single channel in the input image. If you consider a 3-channel image as the input to a layer, and a single channel as the layer output, then the output (a 2D image) is obtained by convolving each input channel with its own K*K kernel and summing (superimposing) the resulting 3 images. This is analogous to a simple densely connected network, except each weight in the layer is a K*K kernel rather than a scalar. However it makes more sense to consider a K*K*3 kernel rather than summing 3 K*K kernels for the 3 input channels. If N is the number of input channels, M the number of output channels and K the width of a kernel, then you have K*K*N*M parameters for a single layer.
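The K*K*N*M count from the reply above can be checked directly. The numbers below are hypothetical (a 3-channel input, 16 output channels, 3x3 kernels), and the bias term is an extra assumption, since most frameworks also add one bias per output channel:

```python
# Hypothetical conv layer: N = 3 input channels, M = 16 output channels, K = 3.
K, N, M = 3, 3, 16
weights = K * K * N * M   # one K x K x N kernel per output channel
biases = M                # typically one bias per output channel as well
print(weights, biases)    # 432 16
```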

  • @elinaakhmedova9407 • 5 years ago

    Thanks for this video! You are cool, keep going 🤗

  • @louerleseigneur4532 • 4 years ago

    thanks

  • @sujithtumma6754 • 1 year ago

    Awesome explanation. Loved it. Just a little correction: at 17:24 I think "hwidth" is 3, not 2.

    • @CodeEmporium • 1 year ago

      Thanks for the catch! Yeah there are definitely a few typos here that you and some others called out. (Also thanks for the compliments) :)

  • @santhoshkolloju • 6 years ago

    Hey, can you do an intuitive explanation of CNNs on text data?

  • @videoinfluencers3415 • 4 years ago +1

    Whoaa!!!!

  • @elrosspangue7443 • 5 years ago

    Question: why is there an increase in the number of kernels for every convolution layer, and where are those kernels coming from? What is the basis of those kernels?

    • @CodeEmporium • 5 years ago +2

      The network tries to understand features of the input (image). The shallower layers extract low-level features (edges, strokes, shadowing, texture, etc). The deeper we go, the higher-level the features extracted (could be anything; most likely not human-interpretable). Such higher-level features are more complex, hence we need more parameters to learn them. So the deeper we go, the more kernels we use.

    • @elrosspangue7443 • 5 years ago

      @@CodeEmporium Follow-up question: where can I get the parameters? What is the basis of these parameters? Are parameters and features the same?
      Just wanna give appreciation and thanks for your videos and answer! The backstory of these questions is that my thesismates and I are creating a CNN model for genre classification, with some enhancement of new techniques and methodologies. This video was actually our basis for learning how a CNN works and its specifics in terms of layers: from nothing to almost intuitively knowing the basics.

  • @giahuytrinh7195 • 2 years ago

    ty

  • @baskorobaskoro7972 • 6 years ago

    How are the values in a filter (kernel) set? Are they set randomly?

    • @CodeEmporium • 6 years ago +1

      Initially, yes. They take on random values, which are later "learned".

    • @SuryadiputraLiawatimena • 6 years ago

      how are they 'learned'? do you have this CNN code in Keras?
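A minimal sketch of that "learned" step, in plain NumPy rather than Keras, with a toy averaging-filter target chosen purely for illustration: the kernel starts random, and gradient descent on a squared error moves its values, which is all "learning" means here:

```python
import numpy as np

rng = np.random.default_rng(0)
kernel = rng.normal(size=(3, 3))    # randomly initialised, as in the reply above
target = np.ones((3, 3)) / 9.0      # toy "true" filter: a 3x3 averaging filter
patch = rng.normal(size=(3, 3))     # one 3x3 input patch

before = kernel.copy()
want = np.sum(target * patch)       # response the true filter would give
for _ in range(500):
    pred = np.sum(kernel * patch)   # one convolution dot product
    grad = 2 * (pred - want) * patch  # gradient of (pred - want)**2 w.r.t. kernel
    kernel -= 0.01 * grad           # gradient-descent update

print(abs(np.sum(kernel * patch) - want))  # error is now tiny
```

In a real framework the same update is driven by backpropagation over many patches and many filters at once; this only shows that the filter values are ordinary trainable weights.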

  • @swarajshinde3950 • 4 years ago

    Yann Lecun is great

  • @miladmfarid • 3 years ago +1

    16:47 you explained the pooling width output and in the equation used (26-2+1)/2, which is 12.5, but you said it would be 13.5! And I don't know how you got 13? Can you please explain?

  • @robertcohn8858 • 4 years ago

    I think the value of this video is not so much that you will be able to sit down and use CNNs from the get-go. Rather, it demonstrates some of the key concepts quite well (convolving layers, for example). The final example is helpful and should probably be viewed several times to get the full meaning. But all in all, the video is, when used with other information sources, a good start to learning CNNs.

  • @mnsnliu9317 • 5 years ago

    good

  • @krishnamishra8598 • 3 years ago

    why do we use convolution??? why not just a simple ANN in the case of images?? The main question is: what is the need for convolution in a CNN?? please answer....

    • @amithm3 • 2 years ago

      An ANN takes 1D input and thus loses the spatial details of the image; in a CNN those details are extracted and presented to the ANN in a more meaningful and trainable manner
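One concrete way to see the point of the reply above is parameter count: a dense layer on a flattened image needs one weight per pixel per unit, while a conv layer reuses a small kernel across all positions. The sizes below are illustrative, not from the video:

```python
# Dense layer on a flattened 28x28 grayscale image vs. a 3x3 conv layer.
H = W = 28
dense_units = 128
dense_params = (H * W) * dense_units + dense_units    # weights + biases

n_filters, k, c_in = 32, 3, 1
conv_params = n_filters * (k * k * c_in) + n_filters  # shared kernels + biases

print(dense_params, conv_params)  # 100480 320
```

The conv layer is hundreds of times smaller here precisely because the same kernel slides over every position, which is also what keeps the spatial structure intact.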

  • @mehdisoleymani6012 • 1 year ago +1

    Be careful!!! Thank you. At 17:28 in the clip there is a mistake in the equation: 13-3+1=11 is true, however you have typed 13-2+1=11

  • @anandachetanelikapati6388

    May I know how to calculate the input, output and learnable parameters in the following case?
    Assumptions:
    - Input size is (32, 32, 3)
    - No padding for all convolutions
    Layer  Type   Kernel  Stride  Neurons/feature maps  Input size   Output size  No. of parameters
    1      Conv   (3, 3)  (1, 1)  16                    (32, 32, 3)
    2      Pool   (2, 2)  (2, 2)  16
    3      Conv   (5, 5)  (1, 1)  32
    4      Pool   (2, 2)  (2, 2)  32
    5      Conv   (3, 3)  (1, 1)  64
    6      Dense  --      --      128
    7      Dense  --      --      2
    thank you
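One way to fill in such a table is to propagate the shapes with the standard "valid" formula, out = (in - k) / s + 1, for both conv and pooling. The sketch below assumes no padding (as stated), one bias per filter, and that the last conv output is flattened before the first dense layer; the totals are only as good as those assumptions:

```python
def conv_out(size, k, s):
    # 'valid' output size: floor((size - k) / s) + 1
    return (size - k) // s + 1

h = w = 32
c = 3                                   # input channels
params = []
layers = [("Conv", 3, 1, 16), ("Pool", 2, 2, 16),
          ("Conv", 5, 1, 32), ("Pool", 2, 2, 32),
          ("Conv", 3, 1, 64)]
for i, (kind, k, s, n) in enumerate(layers, start=1):
    h, w = conv_out(h, k, s), conv_out(w, k, s)
    if kind == "Conv":
        p = k * k * c * n + n           # kernel weights + one bias per filter
        c = n
    else:
        p = 0                           # pooling has no learnable parameters
    params.append(p)
    print(i, kind, (h, w, c), p)

flat = h * w * c                        # flatten before the dense layers
for n in (128, 2):
    params.append(flat * n + n)         # fully connected weights + biases
    flat = n
print("total parameters:", sum(params))
```

Under those assumptions the conv stack ends at 3x3x64, and the parameter count is dominated by the first dense layer, a pattern the video's own architecture also shows.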

  • @RedShipsofSpainAgain • 6 years ago

    16:34 shouldn't that be 12.5, not 13.5? (26-2+1)/2 = 12.5

  • @anwarulislam6823 • 1 year ago

    Someone is sending me conversations, like an AI chatbot, through all the actions in neural networks, by inner voice, using my brain!!! Is it possible or not, and if it is, how can I control this thing??
    #Thanks in advance.

  • @ankitrathore5951 • 2 years ago

    17:26 it's 13-3+1=11.... Note: it's just a silly mistake, don't get confused

  • @sunidhinayak6413 • 5 years ago

    can you please make a video on Keras - container

    • @XX-vu5jo • 3 years ago

      Dude study on your own lol

  • @reggaebin • 3 years ago

    @17:25 13-2+1=11 is not correct.

  • @zhenzhen8766 • 3 years ago

    memo 13:30

  • @macsenwyn7223 • 3 years ago

    13-2+1 is not 11, it's 12

  • @XX-vu5jo • 3 years ago

    And my fake PhD supervisor doesn't even know or understand a single thing about this!!!! Damn those quacks! My country sucks!

  • @mdyzma • 5 years ago +1

    17:21 your filter in round 2 of convolution is (3, 3). So it should be 13-3+1=11, not 13-2+1, which is 12.

  • @Leon-pn6rb • 4 years ago +7

    poorly explained the layers: the same surface-level explanation with no intuition behind the core concepts.
    The easier concepts were explained well, but that wasn't why people watch these vids

  • @bishwasapkota9621 • 4 years ago

    Poorly explained!! Anyway, a good try

  • @GamingGleeSquad • 5 years ago +1

    Why is the filter size 3x3 @8:06? Can we take a different size for the filter?