Vanishing Gradient Problem in ANN | Exploding Gradient Problem | Code Example
- Published on 24 Jul 2024
- Learn about the Vanishing and Exploding Gradient Problems in Artificial Neural Networks (ANNs) with practical code examples. Understand the challenges and solutions for training deep networks effectively. Improve your grasp on these important concepts in neural network training.
Code - colab.research.google.com/dri...
============================
Do you want to learn from me?
Check my affordable mentorship program at : learnwith.campusx.in
============================
📱 Grow with us:
CampusX's LinkedIn: / campusx-official
CampusX on Instagram for daily tips: / campusx.official
My LinkedIn: / nitish-singh-03412789
Discord: / discord
👍If you find this video helpful, consider giving it a thumbs up and subscribing for more educational videos on data science!
💭Share your thoughts, experiences, or questions in the comments below. I love hearing from you!
✨ Hashtags✨
#GradientProblems #NeuralNetworks #codeexamples
⌚Time Stamps⌚
00:00 - Intro
01:11 - Vanishing Gradient Problem
12:20 - Code Demo
18:00 - How to Handle the Vanishing Gradient Problem
20:17 - Code Demo
23:18 - ReLU Activation Function
25:27 - Code Demo
27:28 - Weight Initialization Techniques
28:11 - Batch Normalization
28:34 - Residual Network
31:51 - Outro
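As a quick illustration of the vanishing-gradient behavior walked through in the video, here is a minimal NumPy sketch (an editorial example, not the video's Colab code; the layer count, width, and weight scale are made-up illustration values):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
n_layers, width = 10, 16
weights = [rng.normal(0.0, 0.1, (width, width)) for _ in range(n_layers)]

# Forward pass through a deep sigmoid network, caching each layer's output
a = rng.normal(0.0, 1.0, (width, 1))
activations = [a]
for W in weights:
    a = sigmoid(W @ a)
    activations.append(a)

# Backward pass: every step multiplies by sigmoid'(z) = a*(1-a) <= 0.25,
# so the error signal shrinks as it travels toward the input layer
delta = np.ones((width, 1))
grad_norms = []
for i in range(n_layers - 1, -1, -1):
    a_out = activations[i + 1]
    delta = delta * a_out * (1.0 - a_out)   # through the sigmoid
    grad_norms.append(float(np.linalg.norm(delta)))
    delta = weights[i].T @ delta            # into the previous layer

# grad_norms[0] is near the output, grad_norms[-1] near the input
print(grad_norms[0], grad_norms[-1])
```

The printed norm near the input layer is orders of magnitude smaller than the one near the output, which is exactly why the early layers in the video's demo barely learn.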
hi sir you are playing a big role in making me a good data scientist ... thank you very much!!!
Thank you sir for another amazing lecture!! 😃
Very nice talk. A lot of my questions are answered. Thanks a lot.
awesome explanation for each and every step. Thank you very much Sir..
Excellent playlist on Deep learning.
Excellent explanation sir. !!!
very good explanation, thanks
Sir best of best explanation and thanks
The best explanation so far
Great and mind opening videos
so nice great way of teaching
Thank You Sir.
1st comment, thanks so much for taking your time and teaching us.
Please cover EXPLODING GRADIENT also with examples
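For a quick numeric feel for both problems while waiting for that video, here is a one-dimensional caricature (the per-layer factors are made up for illustration and are not from the video): the gradient reaching an early layer is a product of per-layer terms, so terms below 1 shrink it and terms above 1 blow it up.

```python
# Gradient reaching layer 0 ~ product of per-layer factors
# (local derivative * weight). Factors < 1 vanish, > 1 explode.
def chained_gradient(per_layer_factor, n_layers):
    g = 1.0
    for _ in range(n_layers):
        g *= per_layer_factor
    return g

vanishing = chained_gradient(0.25, 20)   # sigmoid-style cap: 0.25**20
exploding = chained_gradient(1.5, 20)    # large weights: 1.5**20
print(vanishing, exploding)
```

With only 20 layers, the 0.25 chain is already around 1e-12 while the 1.5 chain is in the thousands, which is why both problems get worse as networks get deeper.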
great video 👏
Awesome ❤️🔥
Sir please upload the next videos! I have been waiting for the last 10 days, regularly checking your playlist.. please sir
Great🔥
great content
Awesome
Hi, the sigmoid derivative lies between 0 and 0.25
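That bound is easy to verify numerically; a short check (an editorial sketch, not from the video):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-10.0, 10.0, 10001)      # grid includes x = 0
d = sigmoid(x) * (1.0 - sigmoid(x))      # sigmoid'(x) = s(x) * (1 - s(x))
print(d.max())                           # 0.25, attained at x = 0
```

Since s(x)(1 - s(x)) is a product of two numbers summing to 1, it is maximized when both equal 0.5, giving the 0.25 ceiling that drives the vanishing-gradient problem.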
Thanks
Sir thank you
Brother, I have a confusion and it might be very trivial. In Pipelines with ColumnTransformer, whenever I use OrdinalEncoder on certain columns and then OHE on some other columns, OHE fails with an error like ValueError: could not convert string to float: 'AllPub'. This is the Housing Kaggle competition dataset. Is there something I am missing here? I am using the ordinal encoder before OHE and providing categories in it. Anyone else can help too..
Thank you..
Superb
@11:50 please let us know when you are covering TensorBoard and callbacks
👍👍
Sir, after DL please make a video on an OpenCV tutorial using DL.
yes, please make a video on an OpenCV tutorial using DL.
Sir, we are finding it difficult to crack data science placements, so please make a dedicated 100-days series of placement tricks & tips. Requesting everyone to upvote this so that sir notices it.
❤️
completed
how about we use the learning rate to avoid VGP? like we could use lr = 10 or 100 or 1000, so when it multiplies with the derivatives the value is higher again, to counter the deep multiplication of 0.1s in a NN. correct me if I'm wrong
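A quick way to sanity-check this idea on a toy problem (a 1-D quadratic loss, made up for illustration and not from the video): a huge learning rate doesn't selectively rescue the small gradients, it makes every update overshoot, so the optimization diverges.

```python
# Gradient descent on loss w**2 (gradient 2*w); each step scales w by (1 - 2*lr)
def descend(lr, steps=20, w=1.0):
    for _ in range(steps):
        w = w - lr * 2.0 * w
    return w

small_lr = descend(0.1)    # |1 - 0.2| = 0.8 < 1, so w shrinks toward 0
huge_lr = descend(10.0)    # |1 - 20| = 19 > 1, so |w| blows up
print(small_lr, huge_lr)
```

This matches the reply below: cranking lr up to 10 or 100 trades vanishing gradients for divergence, which is why the video reaches for ReLU, better initialization, and batch norm instead.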
thanks
Can you please make a video on ResNets
🖤
🙏
Sir please make a video on Hypothesis testing. I tried searching on your channel but did not find it. I'd love to learn it from you.
If anyone else can help me find it, let me know and send the links in this thread.
Brother, if you find it, please let me know.
See Somesh Kumar's Probability & Statistics course on NPTEL.
31:42 Boss, gradient clipping also remains (to be covered).
I am getting an error in the movie recommender system using ML, sir, please help
sir please upload next video
Sir, please explain the ResNet topic.
6:01 sir, what does dŷ/dz stand for?
What if we keep sigmoid as the activation function and increase the learning rate? Eventually it will make those values higher.. so will it help to keep the layers the same and just increase the learning rate?
Increasing the learning rate will prevent the model from converging easily
@@ShubhamMitkari oh ok bro, thanks.. I thought that if we increase the values it would prevent the vanishing gradient, as the values become higher...
Sir, please explain random_state and what it is used for.
11:55 Boss, TensorBoard and callbacks also remain (to be covered).
Brother, on which day will the next video be uploaded?
god
Revising my concepts.
August 12, 2023😅
sir, please upload videos
Sir approx how much time would it take to complete this course?
2 more months
@@campusx-official sir please upload next video
@@campusx-official brother, please finish the playlist
Sir, if you want more views and likes then start some Bollywood channel. This channel's subscribers are intelligent people who will build something constructive. If you want to earn money from this channel, we are ready to donate, or launch some course like Krish Naik.
While explaining the exploding gradient, how come 1 - 100 = 99? It should be -99, hence the weights will keep becoming smaller and smaller, i.e. more negative.
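Plugging the commenter's numbers into the standard SGD update confirms their arithmetic (the learning rate and gradient values here are the ones from the comment, not from the video; the key point either way is that a huge gradient makes the weight's magnitude jump, whatever the sign):

```python
# SGD update: w_new = w_old - lr * gradient, with the comment's numbers
w_old, lr, grad = 1.0, 1.0, 100.0
w_new = w_old - lr * grad
print(w_new)   # -99.0: a large negative weight, as the comment argues
```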