Bias in an Artificial Neural Network explained | How bias impacts training

  • date added Jul 8, 2024
  • When reading up on artificial neural networks, you may have come across the term “bias.” It's sometimes just referred to as bias. Other times you may see it referenced as bias nodes, bias neurons, or bias units within a neural network. We're going to break this bias down and see what it's all about.
    We'll first start out by discussing the most obvious question of, well, what is bias in an artificial neural network? We'll then see, within a network, how bias is implemented. Then, to hit the point home, we'll explore a simple example to illustrate the impact that bias has when introduced to a neural network.
    Check out posts for this video:
    / 18290447
    pBhxuRXhlG...
    / 987163658391293952
    🕒🦎 VIDEO SECTIONS 🦎🕒
    00:00 Welcome to DEEPLIZARD - Go to deeplizard.com for learning resources
    00:30 Help deeplizard add video timestamps - See example in the description
    06:42 Collective Intelligence and the DEEPLIZARD HIVEMIND
    💥🦎 DEEPLIZARD COMMUNITY RESOURCES 🦎💥
    👋 Hey, we're Chris and Mandy, the creators of deeplizard!
    👉 Check out the website for more learning material:
    🔗 deeplizard.com
    💻 ENROLL TO GET DOWNLOAD ACCESS TO CODE FILES
    🔗 deeplizard.com/resources
    🧠 Support collective intelligence, join the deeplizard hivemind:
    🔗 deeplizard.com/hivemind
    🧠 Use code DEEPLIZARD at checkout to receive 15% off your first Neurohacker order
    👉 Use your receipt from Neurohacker to get a discount on deeplizard courses
    🔗 neurohacker.com/shop?rfsn=648...
    👀 CHECK OUT OUR VLOG:
    🔗 / deeplizardvlog
    ❤️🦎 Special thanks to the following polymaths of the deeplizard hivemind:
    Tammy
    Mano Prime
    Ling Li
    🚀 Boost collective intelligence by sharing this video on social media!
    👀 Follow deeplizard:
    Our vlog: / deeplizardvlog
    Facebook: / deeplizard
    Instagram: / deeplizard
    Twitter: / deeplizard
    Patreon: / deeplizard
    YouTube: / deeplizard
    🎓 Deep Learning with deeplizard:
    Deep Learning Dictionary - deeplizard.com/course/ddcpailzrd
    Deep Learning Fundamentals - deeplizard.com/course/dlcpailzrd
    Learn TensorFlow - deeplizard.com/course/tfcpailzrd
    Learn PyTorch - deeplizard.com/course/ptcpailzrd
    Natural Language Processing - deeplizard.com/course/txtcpai...
    Reinforcement Learning - deeplizard.com/course/rlcpailzrd
    Generative Adversarial Networks - deeplizard.com/course/gacpailzrd
    🎓 Other Courses:
    DL Fundamentals Classic - deeplizard.com/learn/video/gZ...
    Deep Learning Deployment - deeplizard.com/learn/video/SI...
    Data Science - deeplizard.com/learn/video/d1...
    Trading - deeplizard.com/learn/video/Zp...
    🛒 Check out products deeplizard recommends on Amazon:
    🔗 amazon.com/shop/deeplizard
    🎵 deeplizard uses music by Kevin MacLeod
    🔗 / @incompetech_kmac
    ❤️ Please use the knowledge gained from deeplizard content for good, not evil.

Komentáře (Comments) • 153

  • @deeplizard
    @deeplizard  6 years ago +19

    Machine Learning / Deep Learning Tutorials for Programmers playlist:
    czcams.com/play/PLZbbT5o_s2xq7LwI2y8_QtvuXZedL6tQU.html
    Keras Machine Learning / Deep Learning Tutorial playlist:
    czcams.com/play/PLZbbT5o_s2xrwRnXk_yCPtnqqo4_u2YGL.html
    Data Science for Programming Beginners playlist:
    czcams.com/play/PLZbbT5o_s2xo_SRS9wn9OSs_kzA9Jfz8k.html

  • @labyrinth1991
    @labyrinth1991 4 years ago +10

    Such a clear explanation!! thank you !! :) :)

  • @justchill99902
    @justchill99902 5 years ago +6

    How is she always this awesome at explanations? Thank you so much :)

  • @sambo-g9871
    @sambo-g9871 6 years ago +55

    So, I understand how the bias adds flexibility to the neural network but I'm still confused about a couple things:
    1. The bias seems to be no different from the weights that get adjusted during training. If the weights and the bias both get updated during training, then wouldn't that mean that adding the bias is somewhat redundant, because the output of the neuron will still converge to a similar value? Unless adding a bias is similar to adding, say, another neuron to a hidden layer or adding another hidden layer. Meaning that it provides enough of a difference that you can get a more optimal result by adding a bias.
    2. Is the function that updates the weights different from the function that updates the bias? For example, if using backpropagation, is the algorithm calculating (and updating) the weights and bias in the same calculation?
    3. It seems as though the bias is compensating for the inflexibility of the activation function. If that's true, is it then possible to choose an activation function that is more flexible (assuming it exists)? Also, what stops you from adding more than one bias to a neuron? Has that been done? At what point would you stop adding additional biases to a neuron (which I would guess greatly increase complexity). My guess would be that adding multiple biases to a single neuron would be similar to adding multiple layers to a neural network, meaning that at first it makes a difference but at some point the complexity eventually outweighs the optimal result you get from the neural network itself. Is that correct?
    Great videos btw, really well put together :)

    • @deeplizard
      @deeplizard  6 years ago +84

      Hey Sem - Thank you, I’m glad you’re liking the videos!
      These are all good questions. Let me take a shot at them.
      1. While the weights and biases are both types of learnable parameters in the network, they influence the network in different ways. For example, changing the values for the weights can influence where we fall on the graph of the activation function, say relu, for a particular layer, but changing the value for the biases will change the position of the graph of relu altogether. The response for (3) elaborates more on this, and there's a small code sketch at the end of this reply.
      2. Yes, the weights and biases are being updated at the same time using SGD and backpropagation. It’s not necessarily happening in the same _calculation,_ but it is happening in the same step. Just as we saw in the backprop videos earlier in this playlist, SGD calculates the gradient of the loss with respect to the weights (via backprop) and then updates the weights with the result. For bias, SGD similarly does this same process of calculating the gradient, but with respect to the biases rather than the weights.
      3. That’s one way of thinking about it-- as some sort of compensation for the inflexibility of the activation function. I’m not sure of a “more flexible” non-linear activation function that exists and has been adopted for use in neural networks. Relu is pretty much the go-to standard for now. If you think about the graph of relu though, you could actually think of it as maybe being pretty flexible. I mean, it's a linear function for all numbers greater than or equal to zero, so it spans the entire positive number line. But when we think about adding bias, we can think of the entire graph of relu shifting to the left or to the right. We’re handing over flexibility to the training algorithm to decide what it should mean for a neuron to be meaningfully activated, rather than just saying, “if you’re greater than zero, you’re active” and also having it decide different levels of activations for different neurons all using the same underlying activation function.
      I’ve never come across having more than one bias per neuron. Since biases are additive, I would think of having more than one bias per neuron being redundant. If the network was able to learn optimal values for each of the individual biases assigned to a single neuron, which would ultimately be summed together, then it will also be able to learn the optimal value for a single bias term, which is what we see in practice today.
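      Here's a minimal NumPy sketch of that shifting idea, with made-up numbers (not from the video), showing how the bias moves where relu "turns on":

      ```python
      import numpy as np

      def relu(z):
          return np.maximum(0.0, z)

      x = np.array([1.0, 2.0])   # inputs into one neuron
      w = np.array([0.5, -1.5])  # weights scale the inputs
      z = np.dot(w, x)           # weighted sum: 0.5*1 + (-1.5)*2 = -2.5

      print(relu(z))             # 0.0 -> neuron not activated
      print(relu(z + 3.0))       # 0.5 -> a bias of 3 shifts relu's graph, so the neuron fires
      ```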

    • @sambo-g9871
      @sambo-g9871 6 years ago +30

      OK yeah, makes sense. That helps me understand how biases work. Thanks!
      Btw, I think it's super cool how you take the time to answer your viewer's questions :)

    • @EDeN99
      @EDeN99 4 years ago

      @@deeplizard Very super cool explanation. Nice work and nice voice too

    • @waterflowzz
      @waterflowzz 2 years ago +9

      For your first question, you may know this by now since the question is 3 years old but I’m giving my 2 cents so other people who might have the same question can think about it. Think of weights and bias in terms of a linear equation y=mx+b, m = weight (m is the slope in algebra) and b = bias (b is the constant in algebra). If you think of the equation graphically the m changes the slope and b determines the y-intercept. I think if you think of it this way, it’s much easier to grasp. I had the same question when I was learning about neural nets and I came across a video that explained it this way.
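      A tiny sketch of that analogy (the numbers are arbitrary):

      ```python
      # y = m*x + b: the weight m sets the slope, the bias b sets the y-intercept
      def line(x, m, b):
          return m * x + b

      print(line(2.0, 1.5, 0.0))   # 3.0 -> without bias the line passes through the origin
      print(line(2.0, 1.5, -1.0))  # 2.0 -> same slope, shifted down by the bias
      ```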

    • @seraphimwang
      @seraphimwang 2 years ago +1

      @@waterflowzz May I ask you which video, please? Anyway, I had a similar idea 💡 which clarifies why biases are additive. Cheers 🍻

  • @richarda1630
    @richarda1630 3 years ago +2

    Once again you guys have helped make understandable something that was previously, for me, just something you had to plug into a formula. Thanks!

    • @richarda1630
      @richarda1630 3 years ago

      Where were you guys 5 years ago?? :) haha I see what you did at the end :D

  • @DennisRiungu
    @DennisRiungu 4 months ago +1

    Beautifully expounded. Thank you

  • @emeline894
    @emeline894 3 years ago +1

    Perfectly explained. So easy to understand.

  • @tymothylim6550
    @tymothylim6550 3 years ago +1

    Thank you very much for this video! I really enjoyed this video and learning about bias! It was great to use the "relu" function to explain!

  • @nathanielislas9245
    @nathanielislas9245 4 years ago +1

    This is such a great explanation! Thank you

  • @cjlooklin1914
    @cjlooklin1914 2 years ago +1

    Oooh the Biases shift the activation threshold!!! I don't know why that took so long to understand XD

  • @x7331x
    @x7331x 2 years ago +1

    Perfect explanation, congrats 🔥 !

  • @parisanejatian8940
    @parisanejatian8940 3 years ago

    The best YouTube channel for learning neural networks

  • @joaopaulocasarejoscobra430

    Great explanation, thanks!

  • @manjeetnagi
    @manjeetnagi 2 years ago +1

    very well explained.

  • @Mo3azSolomon
    @Mo3azSolomon 9 months ago +1

    Good Explanation Thanks 💚

  • @igorgorpinich5197
    @igorgorpinich5197 4 years ago +1

    Super good video! Thank you!

  • @CosmiaNebula
    @CosmiaNebula 3 years ago

    0:29 intro
    1:12 what is bias
    3:00 simple example

  • @pavankumard5276
    @pavankumard5276 4 years ago +1

    Really nice video, finally understood it

  • @alanjoy7915
    @alanjoy7915 4 years ago

    That was a nice explanation. Thank you

  • @basantmounir
    @basantmounir 3 years ago +1

    You're amazing!!

  • @helenapereira6775
    @helenapereira6775 5 years ago +1

    very helpful! Thank you

  • @Waleed-qv8eg
    @Waleed-qv8eg 6 years ago +2

    Great as always!!

  • @AnoopKumarPrasad
    @AnoopKumarPrasad 4 years ago +1

    Great one.

  • @esraamohamed5601
    @esraamohamed5601 4 years ago +1

    Thank you for your clear and nice video... you are my hero

  • @georgeognyanov
    @georgeognyanov 3 years ago

    Great videos and great series obviously! Quick question: is bias in DL the same as the bias in ML, aka the constant, the y-intercept, b0? Looking at the simplest linear regression formula y = b0 + w1x1, does the bias there do something similar to the bias just discussed in the video, or are they totally different things?

  • @pawansj7881
    @pawansj7881 6 years ago +1

    Perfect!!

  • @caveman4659
    @caveman4659 3 years ago +1

    You saved me. Thanks!

  • @umshrana
    @umshrana 5 years ago +2

    Thank you !

  • @panwong9624
    @panwong9624 6 years ago +1

    very helpful!

  • @vinodp8577
    @vinodp8577 6 years ago +4

    Yay! Full screen is used for explaining. Going forward, could you please use full screen for the Keras playlist as well?

    • @deeplizard
      @deeplizard  6 years ago +2

      Hey Vinod - Yes, for sure! Just released a new Keras video, and it's using full screen 😎
      czcams.com/video/zralyi2Ft20/video.html

  • @benbalaj1732
    @benbalaj1732 a month ago +1

    0:50 Kerbal Space Program music is goated

  • @sanwalyousaf
    @sanwalyousaf 6 years ago +1

    brilliant tutorial

  • @sumitdas7489
    @sumitdas7489 2 years ago

    But without using bias, if we use leaky ReLU instead of ReLU, then we can also avoid dead activations, right? Then we don't need biases

  • @islanmohamed390
    @islanmohamed390 4 years ago +1

    Good explanation 👌🏿

  • @Ahmadalisalh6012
    @Ahmadalisalh6012 3 years ago

    Why would I want to determine the threshold?
    Thank you

  • @josephmbimbi
    @josephmbimbi 5 years ago +1

    Some graphical example, like a line separating 2 "data clouds", and how not having bias makes some configurations of the 2 clouds not separable, would have made the bias more understandable and the video clearer

  • @farjadmir8842
    @farjadmir8842 3 years ago

    Nice one 🥰

  • @justchill99902
    @justchill99902 5 years ago

    Question - As SGD also updates the biases while training,
    1. How are they updated? Using backpropagation, just like the weights?
    2. Since bias changes affect the activation output, which in turn also depends on the weights, do bias updates conflict with weight updates?
    Thank you lizzy!

  • @woah-dude
    @woah-dude 4 years ago +62

    nice to hear a female voice explaining IT stuff for once, never had that in my 7 years of software development

    • @fosheimdet
      @fosheimdet 4 years ago +41

      Whenever I click on an IT video I expect it to have a heavy Indian accent

    • @DrunkenMonkeyHD
      @DrunkenMonkeyHD 4 years ago

      @@fosheimdet Savage.

    • @GWebcob
      @GWebcob 3 years ago

      For sure. For me it's the best channel on the topic so far

    • @ifusubtomepewdiepiewillgiv1569
      @ifusubtomepewdiepiewillgiv1569 3 years ago

      best video but idc what gender does it bc im not sexist lol

    • @woah-dude
      @woah-dude 3 years ago +3

      @@ifusubtomepewdiepiewillgiv1569 nothing to do with sexism amigo

  • @ahmadzbedi1745
    @ahmadzbedi1745 3 years ago

    Could you recommend a reference or a book for this?
    I have to discuss the term bias in my bachelor's thesis, but I simply need something to cite

  • @whatarewaves
    @whatarewaves 2 years ago +1

    Wish you talked a bit more about how limited variance helps reduce the vanishing gradient problem specifically with an example. Also I know ReLU helps the vanishing gradient problem and it would have been interesting to see how that works too.

  • @farzadimanpoursardroudi45

    very useful

  • @ltoco4415
    @ltoco4415 5 years ago

    Is bias the same as a threshold? If not, what is the difference between them? Bias determines whether a neuron is activated or not, so it seems to be the same as a threshold.

  • @quadracycle4000
    @quadracycle4000 4 years ago +1

    Came for 1:28, stayed to 7:12. Very informative!

  • @JoseTorres-tr6od
    @JoseTorres-tr6od 6 years ago +2

    Hello deeplizard, after taking a deep learning class I became unsatisfied with the explanations provided for backpropagation. We were given the weight update formulas for a specific 2-hidden-layer network (relu, relu, sigmoid) to train for MNIST. Ever since, I have been independently trying to come up with the formulas for a similar NN, and yesterday I was finally able to get the update formulas for non-output-layer weights. When I run my program, however, my network is only able to adjust its weights for a single example of X (input) and Y (output).
    I have always been aware that the entire backprop derivation is based on gradient descent for a single example, but I thought that alternating between (input/output) pairs from my training set would be sufficient to extract "the pattern"; it does not.
    In one of your videos you said that you don't immediately update the weights but average the changes over an entire batch. Could you explain the logic/math or intuition behind this? Thank you.

    • @deeplizard
      @deeplizard  6 years ago +1

      Hey Jose - Check out the following video starting at 11:55:
      czcams.com/video/Zr5viAZGndE/video.html
      To summarize, you take the gradient of the loss with respect to a particular weight for _each_ input. You then average the resulting gradients and update the given weight with that average. This would be the case if you passed _all_ the data to your network at once. If instead you were doing batch gradient descent, where you were passing mini-batches of data to your network at a time, then you would apply this same method to each batch of data, rather than to all the data at once.
      Does this help clarify?
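      A rough sketch of that per-batch averaging, where `grad_for_example` is a hypothetical stand-in for whatever backprop computes for one (input, output) pair:

      ```python
      import numpy as np

      def sgd_step(w, batch, grad_for_example, lr=0.01):
          # gradient of the loss w.r.t. w for each (x, y) example in the batch
          grads = [grad_for_example(w, x, y) for x, y in batch]
          # average the per-example gradients, then make a single update
          return w - lr * np.mean(grads, axis=0)
      ```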

    • @JoseTorres-tr6od
      @JoseTorres-tr6od 6 years ago

      deeplizard
      Thank you!

  • @David-bp2zh
    @David-bp2zh 3 years ago

    It is a clear explanation. I wonder, which tool did you use to prepare this lecture?

  • @aorusaki
    @aorusaki 4 years ago +1

    Nice! :)

  • @AdSd100
    @AdSd100 5 years ago +13

    Lol I thought my KSP was running in the background. Do you play it?

    • @deeplizard
      @deeplizard  5 years ago +3

      Haha I actually just had to look up what KSP is. In doing so, I heard the same track 😆 They used the same music library as I did.

    • @AdSd100
      @AdSd100 5 years ago +2

      @@deeplizard Great work BTW!

    • @deeplizard
      @deeplizard  5 years ago

      Thank you!

  • @neurojedi42
    @neurojedi42 3 years ago

    The thing I didn't understand is why we need bias. Let's say the transfer function results in -0.35, so there will be no firing, but when we add a bias there will be firing. Doesn't that let a neuron fire which shouldn't actually fire? I mean, wouldn't that lead to misinterpretation of the data?

  • @bytblaster
    @bytblaster 4 years ago +1

    I don't really understand this. If the bias is just another weight... why wouldn't it just change the weights to be higher in the backpropagation steps so important neurons DO get fired?

  • @amangoyal476
    @amangoyal476 4 years ago +2

    I understood most of it but I had a query:
    In the second example of bias, why would the bias be -5 if only weighted sums >= 5 are allowed?

    • @nellynelly7551
      @nellynelly7551 4 years ago +1

      Recall that the goal of the bias is to change the allowed weighted sum by shifting the graph either to the left or the right. ReLU states that only weighted sums >= 0 are allowed. If we want to shift this to the right, we add -5 to any weighted sum. This makes 5 the new 0 (5 - 5 = 0), and only values above five will be output.
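      To make that concrete, a small sketch (the weighted sums are made up):

      ```python
      def relu(z):
          return max(0.0, z)

      bias = -5.0
      for weighted_sum in [3.0, 5.0, 8.0]:
          # with b = -5, only weighted sums above 5 produce a nonzero output
          print(weighted_sum, "->", relu(weighted_sum + bias))  # 0.0, 0.0, 3.0
      ```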

  • @mdyeasinarafath4450
    @mdyeasinarafath4450 6 years ago +2

    That was another great work, Ma'am!
    We can't control bias, but at least can't we specify that the bias should be the opposite of the threshold?
    And we know how weights update, by multiplying the gradient with the learning rate. But how do the biases update?

    • @deeplizard
      @deeplizard  6 years ago

      Thanks, Md.Yasin Arafat Yen! We won't need to tell an API, like Keras for example, that bias should be the opposite of the threshold-like value that we talked about here, because Keras interprets the bias as meaning just that already. You can see what exactly we have control over in the Keras video illustrating how to access and initialize the bias terms: czcams.com/video/zralyi2Ft20/video.html
      The biases get updated in the same way as the weights. SGD calculates the gradient of the loss with respect to each bias, then multiplies this gradient by the learning rate, then subtracts this product from the current value of the bias to get the updated value for the bias.
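      In code, that update is just the following (a sketch; the numbers are invented):

      ```python
      learning_rate = 0.01
      grad_wrt_bias = 0.8  # hypothetical gradient of the loss w.r.t. this bias, from backprop
      bias = 1.5

      # same rule as for a weight: step against the gradient
      bias = bias - learning_rate * grad_wrt_bias  # 1.5 - 0.008 = 1.492
      ```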

  • @dourwolfgames9331
    @dourwolfgames9331 5 years ago

    How do I adjust the bias? I'm pretty sure that it's during backpropagation, after retrieving the negative cost gradient, but I don't know what the adjustments to the bias are based on. Does it have something to do with the changes to the weights? I'm still very much learning and I may be incorrect. =)

    • @deeplizard
      @deeplizard  5 years ago +2

      Hey Dour - Yes, you're exactly right, the adjustment to the bias occurs during backpropagation. Just as the gradient of the loss is calculated with respect to each weight, and then the respective gradient is used to update each weight, the same thing occurs for each bias. The gradient of the loss is calculated with respect to each bias, and then the respective gradient is used to update each bias.
      Let me know if this helps clarify!

  • @adrianogoeswild
    @adrianogoeswild 5 years ago

    Late one here :).
    Quick question:
    You mentioned that the bias will be readjusted at every backprop step along with the weights, with the exception that we calculate the gradients w.r.t. the weights and biases individually.
    Now the question: wouldn't it make sense to add the bias as a weight with its neuron equal to one? With the exception that the weights of the previous layer are not connected to this bias neuron.
    I hope I was clear somehow.
    Thanks alotttttt :)

    • @deeplizard
      @deeplizard  5 years ago

      Intuitively, you could think of the bias as a node not connected to the weights in the previous layer, but in terms of it being equal to one, that would only be true when we initialize all the bias terms (assuming we initialize them all to one). During training, the values will change.

  • @timharris72
    @timharris72 6 years ago +1

    This tutorial was awesome. Have you thought about doing more basic tutorials with some math (real, like this example, not conceptual) and only 2 or 3 nodes to explain some of the concepts? When you do some basic math and keep the examples really simple, it really starts to make sense.

    • @deeplizard
      @deeplizard  6 years ago

      (Sorry if you’re getting spammed with my comment. I’ve tried replying a few times, but it’s not showing as being posted to you.)
      Thanks, Tim! Yeah, I’ve experimented with this approach recently, and I liked it as well. In fact, I just used a simple network and some basic math to illustrate the concept in my latest video that I just released a few minutes ago: czcams.com/video/pg3hJpSopHQ/video.html
      Appreciate your feedback!

    • @timharris72
      @timharris72 6 years ago +1

      I watched the video. The numbers really helped out. Thanks for using the math.

    • @deeplizard
      @deeplizard  6 years ago

      Glad to hear!

  • @kemsekov6331
    @kemsekov6331 a year ago

    Imagine each layer as a combination of different functions that sum up to some figure in input-output space. You need to add these functions together in such a way that they replicate that data figure, and so the bias is just a shifting parameter. That's it. It just shifts a function a bit further or closer so that its most suited parts will be used to approximate the figure.

  • @lankanathaekanayake7680

    How about adjusting the weights to activate the output neuron instead of adding an additional bias parameter?

    • @deeplizard
      @deeplizard  6 years ago +4

      Hey Lankanatha - While the weights and biases are both types of learnable parameters in the network, they influence the network in different ways. For example, changing the values for the weights can influence where we fall on the graph of the activation function, say relu, for a particular layer, but changing the value for the biases will change the position of the graph of relu altogether (by shifting the graph to the left or right).

  • @sathyakumarn7619
    @sathyakumarn7619 4 years ago

    Is it probable that new videos might be added to this playlist?

    • @deeplizard
      @deeplizard  4 years ago

      It's possible :)
      In more advanced future courses, if we notice that a fundamental topic needs to be covered in order to understand the advanced material, and that topic isn't already in this Fundamentals course, then we will likely add it here.

  • @SM-ob5sm
    @SM-ob5sm 3 years ago

    I love the way these videos explain ANNs, easy to understand. But I am really distracted by the music in the background, on and off. :(

  • @NK-nf2ym
    @NK-nf2ym 4 years ago

    How do biases get adjusted during training, and with which functions?

    • @deeplizard
      @deeplizard  4 years ago

      They are adjusted in the same way in which the weights are adjusted.
      You can learn how exactly the adjustments occur on the episodes regarding backpropagation, starting with this one:
      deeplizard.com/learn/video/XE3krf3CQls

  • @naprava7522
    @naprava7522 4 years ago

    Thanks. But I still feel that by updating the weight we can have the same thing. To me it feels like it's equivalent. But I know that I am probably wrong.

  • @drevolan
    @drevolan 6 years ago +3

    I just found this channel because of Reddit and I must say: it's quite interesting!
    It's presented in an easy-to-understand manner and I enjoy the narration of both hosts.
    My only complaint would be to work on the visuals; they seem a little bland, and at times they look a bit more like a PowerPoint presentation than an actual video.
    But that'll come with more experience; overall I really enjoy the channel.
    Hope to see more content from you guys in the future!

    • @deeplizard
      @deeplizard  6 years ago

      Hey dangsterr - Really appreciate your feedback! Thank you. We're glad to hear that you're liking the channel!
      We're both new to video creation, so we've been consistently working on our style and exploring new techniques. Thanks for the feedback regarding the visuals. We'll keep that in mind.

    • @ravishankar2180
      @ravishankar2180 6 years ago +2

      Visuals can be ignored as long as the content is awesome, and your content indeed takes good care of that.

  • @ronithsinha5702
    @ronithsinha5702 6 years ago +1

    What if you use a logistic function as the activation function? In that case, why would we need a bias?

    • @deeplizard
      @deeplizard  6 years ago +1

      Hey Ronith - The principle would be the same with a logistic function. The bias terms would be parameters that SGD would learn and optimize to signal what it means for given nodes to be meaningfully activated. By adding bias, you can think of the graph of the logistic curve shifting to the left or to the right (based on whether the bias was positive or negative), rather than staying centered.
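      A small sketch of that shift with the logistic function (values are illustrative):

      ```python
      import numpy as np

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      z = 0.0                  # some weighted sum
      print(sigmoid(z))        # 0.5   -> the curve is centered at zero
      print(sigmoid(z + 2.0))  # ~0.88 -> a positive bias shifts the curve left
      print(sigmoid(z - 2.0))  # ~0.12 -> a negative bias shifts it right
      ```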

    • @prasannakumar7035
      @prasannakumar7035 5 years ago

      So before passing inputs to the activation function, it seems it's good to add a bias value, so that all the neurons will fire some value :)

  • @BrotherDoorkeeper
    @BrotherDoorkeeper 5 years ago

    "With an activation output of 0, this neuron is considered to not be activated. Or not firing."
    Does a not activated/not firing neuron still pass 0 as an output to the next layer?

    • @EDeN99
      @EDeN99 4 years ago

      @Szabolcs Ambrus, Mathematically speaking, a non-activated neuron still passes 0 to the next layer, since it has to pass whatever its output is onward for multiplication with the connected weights.
      But come to think of it, when the 0 which it passes (after being multiplied with the connected weights) gets to the next layer, it still appears as 0 there, and since each neuron in the next layer sums all the weighted outputs from the previous layer, the weighted output from the "non-activated" neuron will have no effect, since its value is 0.
      This is actually why the term "not activated/not firing" is used, since its output has no effect.
      I hope this helps.
      I am also learning myself.
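      A tiny numeric check of that (made-up weights):

      ```python
      outputs = [0.0, 0.7]           # first neuron "not firing", second firing
      weights_to_next = [1.3, -0.5]  # connections into one next-layer neuron

      # the non-activated neuron contributes 0.0 * 1.3 = 0 to the weighted sum
      weighted_sum = sum(o * w for o, w in zip(outputs, weights_to_next))
      print(weighted_sum)            # -0.35 -> only the firing neuron mattered
      ```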

  • @carbdoto2523
    @carbdoto2523 5 years ago +1

    thx

  • @leonhardeuler9839
    @leonhardeuler9839 5 years ago

    But what actually are weights? Do we pick them randomly, or is there a formula for that?

    • @deeplizard
      @deeplizard  5 years ago

      Check out the video and blog for weight initialization:
      deeplizard.com/learn/video/8krd5qKVw-Q

  • @iAndrewMontanai
    @iAndrewMontanai 4 years ago

    And not a single word about how the bias is actually learned

  • @victorburca5028
    @victorburca5028 a year ago

    I watched the explanation several times. I understood all your words, but I still have not understood the practical purpose of bias. Do you just use it to increase/decrease the input to a neuron?! Why?!
    The neuron will send its value (output) to the next neuron regardless of the value of that output. So it will "fire" always. There are no situations in which a neuron will not send its output to the next neuron. Even when the value is equal to zero, it will still be sent to the next neuron.
    I am looking for another YouTube explanation.

  • @DEEPAKSV99
    @DEEPAKSV99 4 years ago +1

    0:08 Are you really planning to do another series on bias from a political or social standpoint? xD
    It may not be easy to deliver those in the short and sweet fashion you always do. But who knows, you may be patient enough to even break those topics down into simple logic and propose your solutions :')

    • @deeplizard
      @deeplizard  4 years ago +1

      😅

    • @saluk7419
      @saluk7419 3 years ago +1

      Yeah I actually thought before clicking that would be what this video was about. The success of deep learning is really limited by the quality of the input, so bias in selecting samples is a big issue! I had no idea that there was a concept of intentional biases within the neural network itself haha.

  • @yuanzhang1230
    @yuanzhang1230 4 years ago +1

    {
    "question": "In practice, can you explicitly choose and control the weights in a network?",
    "choices": [
    "Yes",
    "No",
    "I can control a little",
    "It depends"
    ],
    "answer": "Yes",
    "creator": "SummerGift",
    "creationDate": "2020-07-10T00:16:01.160Z"
    }

    • @yuanzhang1230
      @yuanzhang1230 4 years ago

      I made a mistake, the answer is No.

    • @yuanzhang1230
      @yuanzhang1230 4 years ago

      Hmm, I meant to say control the bias, not the weights.

    • @deeplizard
      @deeplizard  4 years ago +1

      Thanks, Yuan! Just added your question to deeplizard.com/learn/video/HetFihsXSys :)

    • @yuanzhang1230
      @yuanzhang1230 4 years ago

      @@deeplizard It seems I can't find my question now? Did you remove my question?

    • @deeplizard
      @deeplizard  3 years ago +1

      No, it's on the site at the link above. You may need to refresh your cache to see it.

  • @mushoodbadulla9305
    @mushoodbadulla9305 3 years ago +1

    Very good video, but drop the music.

  • @sai1734
    @sai1734 4 years ago

    Why is the bias not 2?

  • @wiratamaradiance
    @wiratamaradiance 2 years ago +1

    Thanks for your explanation, it really helped me understand it,
    but your BGM really disturbs my focus

    • @deeplizard
      @deeplizard  2 years ago

      Thanks for the feedback yoza, BGM has been removed in later videos.

  • @JimmyCheng
    @JimmyCheng 5 years ago +2

    Just a suggestion: the recording volume could be turned up a notch; the ads are really loud compared to your voice haha

    • @deeplizard
      @deeplizard  5 years ago

      Thanks for the suggestion, Ziqiang! I've been trying to tune the audio and sound levels recently. What do you think of the volume of this newer video: czcams.com/video/Bcuj2fTH4_4/video.html
      Still may need to be brought up a notch?

    • @JimmyCheng
      @JimmyCheng 5 years ago

      @@deeplizard Could be louder still imo. But then again, you have such a soft and beautiful voice, maybe some extra volume is needed haha

    • @deeplizard
      @deeplizard  5 years ago

      Thanks for the feedback!

  • @saanvisharma2081
    @saanvisharma2081 5 years ago

    2:54 The bias (b) should be added to each neuron. But here they've added a single 'b'. Did they forget to insert a bracket?

    • @justchill99902
      @justchill99902 5 years ago +1

      No @Saanvi. The weighted sum is the sum of all the multiplied individual weight and input values from the left layer (e.g., the input layer). These products are summed and given to a single neuron in the layer to the right (e.g., the hidden layer). Now a bias is added at the hidden-layer neuron where this weighted sum from all the input-layer neurons arrives. So the bias is given to that particular neuron (which is one neuron), and therefore there is one value of bias. For the next neuron in the same hidden layer, we again feed a weighted sum from the input-layer neurons and one bias, and so on for every neuron in the network.
      Hope this helps.
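      A sketch of that computation in NumPy, one bias per neuron (shapes and values are illustrative):

      ```python
      import numpy as np

      x = np.array([0.2, 0.4, 0.6])          # 3 input-layer activations
      W = np.ones((4, 3)) * 0.5              # weights into 4 hidden neurons
      b = np.array([-0.1, 0.0, 0.3, -0.6])   # exactly one bias per hidden neuron

      z = W @ x + b             # each neuron gets its weighted sum plus its own bias
      a = np.maximum(0.0, z)    # relu, one activation output per hidden neuron
      print(a)
      ```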

  • @yanhaeffner7881
    @yanhaeffner7881 5 years ago +3

    I can't watch this video without thinking about building a rocket... Soundtrack related.

    • @deeplizard
      @deeplizard  5 years ago

      Haha is that a good thing? Building a rocket sounds inspiring.

    • @yanhaeffner7881
      @yanhaeffner7881 5 years ago +2

      Yeah, sure it is hahaha
      By the way, great video! I know that it is really hard to keep the whole thing up with graphical methods, but I guess a function view of bias would have made it a little bit clearer. But that's just a speck of dust on a big surface for what this video is! We really need more videos like that!

  • @yoloswag6242
    @yoloswag6242 3 years ago

    0:47 is that KERBAL SPACE PROGRAM ost? omg omg

    • @deeplizard
      @deeplizard  3 years ago

      Haha yes! KSP creators used the same music library as we did :D

  • @mahendrank9060
    @mahendrank9060 3 years ago

    Please build a neural network that is based on real-time datasets

  • @my-jorney
    @my-jorney 3 years ago +1

    KSP music😄

  • @DanielSchaefer01
    @DanielSchaefer01 6 years ago

    Great video! Just one comment though: the moving background is pretty distracting!

  • @madisonforsyth9184
    @madisonforsyth9184 5 years ago +2

    the music in the background is so. distracting. omg. i have to turn captions on and mute it. why am i even on youtube???

    • @deeplizard
      @deeplizard  5 years ago +1

      We were experimenting with background music at the point when this video was made. Agree that it is distracting, so we cut way back on it in later videos.

  • @fredericfc
    @fredericfc 5 years ago

    This is hurting my head 🤕

    • @deeplizard
      @deeplizard  5 years ago

      More coffee ☕

    • @fredericfc
      @fredericfc 5 years ago

      @@deeplizard Just went to the kitchen and made some Nespresso to have with cookies 🍪 Never felt so lonely as today, and my so-called buddies only know OLAP. Help me deeplizard!

  • @yuyangtu8687
    @yuyangtu8687 5 years ago +1

    I feel dizzy when I watch this video

  • @salahuddinusman2066
    @salahuddinusman2066 4 years ago

    I am too biased listening to your lovely voice!!!

  • @merie8265
    @merie8265 4 years ago

    Hey, what's going on today... coronavirus and lockdowns

  • @KingDav33
    @KingDav33 5 years ago

    This video seems to be really biased...

  • @GauravSingh-ku5xy
    @GauravSingh-ku5xy 3 years ago

    Adopt me.

  • @yichern4351
    @yichern4351 4 years ago

    Legit expected a guy voice ngl

  • @chavorocket
    @chavorocket 3 years ago

    closed the video as soon as I heard a female voice

  • @annankldun4040
    @annankldun4040 4 years ago

    Really ruined it with music. Please don't use music in educational videos. Makes no sense.

    • @deeplizard
      @deeplizard  4 years ago

      We were experimenting with audio and music at the time of making this episode. In hindsight, agree, bad idea. We no longer include it during technical discussion.

    • @deeplizard
      @deeplizard  4 years ago

      Also, note that each episode has a corresponding written blog that you can use as well.
      deeplizard.com/learn/video/HetFihsXSys

  • @ahmedaj2000
    @ahmedaj2000 3 years ago +1

    really good explanation, thank you!