Live Day 4- Discussing Decision Tree And Ensemble Machine Learning Algorithms
- Added 3 Feb 2022
- Join the community session at ineuron.ai/course/Mega-Community. All the materials will be uploaded there.
Playlist: • Live Day 1- Introducti...
The Oneneuron Lifetime subscription has been extended.
On the OneNeuron platform you will be able to get 100+ courses (at least 20 courses will be added monthly based on your demand).
Features of the course
1. You can raise a demand for any course. (Fulfilled within 45-60 days)
2. You can access the innovation lab from iNeuron.
3. You can use our incubation program based on your ideas.
4. Live sessions coming soon (most likely by February).
Use coupon code KRISH10 for an additional 10% discount.
And Many More.....
Enroll Now
OneNeuron Link: one-neuron.ineuron.ai/
Call our team directly in case of any queries:
8788503778
6260726925
9538303385
866003424
1:14:47, Krish, you're right that Gini can't be greater than 0.5, but that holds only for binary classification. Here we can clearly see that the node has 3 output classes, each with frequency 50, so if we calculate the Gini index we get 1 - (1/9 + 1/9 + 1/9), which is clearly 0.667. So it's right over there.
So basically the generalized formula for the maximum Gini impurity of a classification problem is 1 - 1/n, where n is the number of classes in the output feature. Hope this helps.
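A quick plain-Python sketch (my own helper, not from the session) to check both the 0.667 value and the 1 - 1/n maximum:

```python
def gini(counts):
    # Gini impurity: 1 - sum of squared class proportions
    total = sum(counts)
    return 1 - sum((c / total) ** 2 for c in counts)

# Three classes with 50 samples each: 1 - 3*(1/3)^2 = 2/3
print(round(gini([50, 50, 50]), 3))  # 0.667

# For n equally frequent classes the impurity hits its maximum, 1 - 1/n
for n in range(2, 6):
    print(n, round(gini([1] * n), 3))  # 2 -> 0.5, 3 -> 0.667, 4 -> 0.75, 5 -> 0.8
```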
Great tutorial and very good explanation! Highly recommend to anyone who is trying to understand Decision Tree Algorithm. Helped me a lot! Thank you very much Krish!
Krish... you're doing a great job for all of us. Thanks a lot... and keep continuing.
Great lecture. Simplified to the understanding of a beginner. Remain blessed.
Thank you for being an excellent teacher!
Great explanation, Krish. Coming from a non-technical background, I could easily understand the session. Keep posting such videos, thank you!
Very informative sessions, have never heard such an explanation on CZcams before. Thanks a lot
Sir, your lectures are still useful after 1 year, thank you for this :)
Thank you for the session.
32:57 - entropy and GI section... excellent explanation, sir, just loved it 🙂
Amazing lecture
Thanks for the insightful teaching, Sir Krish
excellent professor!!
Information gain explained so simply. I love this guy. You should put "buy me a coffee" like link in description so that people who are willing to donate can easily do it.
Hi krish, you are awesome!🌟
Very interesting, sir; I enjoyed the whole video a lot... thanks for the effort you have put in... God bless you.
This was one of the best Live sessions.
Great session sir
Please don't worry about the live viewers count
Some of us catch up later
Thankkk you so much sir!
Some more clarity on how the splits are being made for the decision tree regression would help, if you can make a separate video on that
As usual you rocked the session Krish!!!!
Thank you.
well explained
Great Session
Wonderful session no one can beat it
Very nice! Great teaching, useful lectures. Thank you so much.
Krish, please, from tomorrow, if possible, disable the chat during the live session, as it might be affecting your concentration, because of which the people who really come here to learn and understand complex equations in an easy way suffer a lot. So please disable the chat, or don't keep it open on another screen where you can see it continuously.
Agree
Agreed!
agreed @Krish Naik
Can not agree more!
Krish, I would say only one thing and which is that YOU ARE AMAZING TEACHER!!!
nice lecture
Hello, Krish sir,
Your teaching is too awesome.
Can you please type here what you said at the start, when your mic was off?
Please, it will help me.
finished watching
you're the man
Hi Krish, in the practical example, is the reason for Gini being 0.667 the fact that there are 3 classes and all of them have an equal number of samples?
1:15:20 - reason for Gini greater than 0.5: here we have more than two classes to predict; it's multiclass classification, so the Gini impurity formula is 1 - p1^2 - p2^2 - p3^2 - p4^2, so it's possible to have a Gini value greater than 0.5.
Please explain when to use Gini or entropy.
great
Excellent explanation, but I still have some doubt about the formulas: how do you derive the Gini impurity and entropy formulas?
This question was asked in an interview with me. I searched a lot but did not find the answer.
I came here to see how we select the root node in a decision tree regressor, but it seems you have just used one feature. So my question is: which feature is selected first if we have multiple columns, and depending on what?
Hi sir, I have a doubt. Using Gini impurity instead of entropy helps with faster execution, but can we use Gini impurity in the information gain formula to select a feature to split on? If yes, what will the formula be - will it be 2*GI in place of H(S) and H(Sv)?
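For what it's worth, the standard approach (and, as far as I know, what scikit-learn's criterion="gini" does internally) is to substitute Gini directly for entropy in the gain formula rather than using 2*GI: the gain is the parent's impurity minus the weighted impurity of the children. A sketch with made-up class counts:

```python
def gini(counts):
    # Gini impurity of a node given its class counts
    total = sum(counts)
    return 1 - sum((c / total) ** 2 for c in counts)

def gini_gain(parent, children):
    # Same structure as information gain, with Gini in place of entropy H
    n = sum(parent)
    weighted = sum(sum(ch) / n * gini(ch) for ch in children)
    return gini(parent) - weighted

# Parent node [9 yes, 5 no] split into children [6, 2] and [3, 3]
print(round(gini_gain([9, 5], [[6, 2], [3, 3]]), 4))  # 0.0306
```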
Sir, I like your teaching method; you do it well.
Please make videos on polynomial regression and transformations, Krish... Also a one-week session each for in-depth ANN, CNN, RNN, GAN, NLP, time series, MLOps.
He already has a playlist for deep learning; check his channel's playlist section.
@@ankitbiswas8380 I am asking for live sessions
The concepts are the same in online (live) and offline (playlist) teaching... 🙄
At 29:15, log base 2 of zero is undefined, right?
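Right - log2(0) is undefined on its own; in the entropy formula the p*log2(p) term is taken as 0 when p = 0, since that is its limit as p approaches 0. A small sketch (my own helper, not the session's code) of that convention:

```python
import math

def entropy(probs):
    # Convention: skip p == 0 terms, since p*log2(p) -> 0 as p -> 0
    return sum(-p * math.log2(p) for p in probs if p > 0)

print(entropy([1.0, 0.0]))  # 0.0 - pure node
print(entropy([0.5, 0.5]))  # 1.0 - maximum impurity for two classes
```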
Can we apply fuzzy logic in a decision tree? If so, how?
It's not theoretical,
it's mathematical,
and I love mathematics.
The playlist link is for advanced statistics, not for ML live sessions
How do we split on categorical features in a decision tree regressor?
Please make 7-day classes on EDA and feature scaling in the future.
Well said
Please, how do I get all the materials used in these tutorials?
If I'm getting entropy values like 0.4 or 0.56, is that a pure split or an impure split? Most of the time we will not get exactly 0 or 1.
yes
Why do we select the feature which has the highest information gain?
Could anyone help me out? How can I find these materials and the code?
I have a question: why was entropy introduced in the first place? I mean, if we can use Gini for a large number of features, then we can also use Gini for a small number of features, right?
Yes we can use that.
How is a decision tree different from fuzzy logic?
Are these sessions enough to understand ML?
rich content
In the practical implementation you didn't show how to plug in entropy or Gini impurity.
Like, is there no mathematical Python script explaining how entropy or Gini impurity works in this algorithm?
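If it helps: in scikit-learn (which the practical appeared to use), the impurity measure is selected via the criterion parameter; the split-scoring math itself happens inside the library. A minimal sketch on the toy iris dataset:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Same model, two impurity criteria
gini_tree = DecisionTreeClassifier(criterion="gini", random_state=0).fit(X, y)
entropy_tree = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X, y)

# An unpruned tree typically fits the training data perfectly with either criterion
print(gini_tree.score(X, y), entropy_tree.score(X, y))
```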
Can anyone answer my question? In decision tree regression we calculate the MSE value (ŷ - y); here, what is the y value and what is the ŷ value if we consider F1 as my root node?
I believe the ŷ value is the mean value and the y value is the original value in the dataset. Please correct me if I am wrong.
'y^' is the predicted value and 'y' is the actual value
@@sagark1431 To construct a decision tree we don't have any predicted values, right? We build the tree based on our dataset, and in the dataset we have the y values, not ŷ. Only after constructing the decision tree do we get the model's predicted values and calculate the MSE, right?
@@skvali3810 czcams.com/users/clipUgkxYmYUbLQ2ZjBP4sPhk_azjY63vhBe_RcE see if this helps.
@@abhimanyukspillai6572 Mean of what?
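To summarize the thread (a toy sketch with made-up numbers, not the session's code): while the tree is being grown, each candidate node's ŷ is the mean of the training targets that land in it, and a split is scored by how much it reduces the weighted MSE against those per-node means:

```python
def node_mse(ys):
    y_hat = sum(ys) / len(ys)  # the node's prediction: mean of its targets
    return sum((y - y_hat) ** 2 for y in ys) / len(ys)

xs = [1, 2, 3, 10, 11, 12]
ys = [5, 6, 5, 20, 21, 19]

# Candidate split on the single feature: x <= 3 vs x > 3
left = [y for x, y in zip(xs, ys) if x <= 3]
right = [y for x, y in zip(xs, ys) if x > 3]

n = len(ys)
weighted = len(left) / n * node_mse(left) + len(right) / n * node_mse(right)
print(round(node_mse(ys), 2), round(weighted, 2))  # the split cuts MSE sharply
```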
👍👍👍
Sir, Gini can be more than 0.5... we'll look at a dataset with many different labels:
from collections import Counter

def gini(rows):
    # rows is a list of [label] lists; impurity = 1 - sum(p_i^2)
    counts = Counter(row[0] for row in rows)
    total = len(rows)
    return 1 - sum((c / total) ** 2 for c in counts.values())

lots_of_mixing = [['Apple'],
                  ['Orange'],
                  ['Grape'],
                  ['Grapefruit'],
                  ['Blueberry']]

print(gini(lots_of_mixing))  # This will return 0.8
I think it comes to 0.8 because all the classes here have an equal number of samples. Correct me if I'm wrong, please.
Yep, you are right. I tried it on 7 categories and the Gini value turned out to be 0.84. So the more output labels there are, the higher the Gini value can go, even surpassing the 0.5 range, which defies the binary-case logic. I think that's the limitation: the 0.5 bound holds for binary, and maybe 3 classes at the root node, but not in general.
Gini max = 1 - (1/n), where 'n' is the number of unique classes in the target.
no audio for this video????
💕❤️❤️💕
where is day 5
The regressor part should have been explained more clearly. That part was poorly explained.
Yeah, he rushed through it.
notes ?
Hi
Break 1st : 47:26
Break 2nd 58:41
I can't hear you
You need to add a few more practice examples; the decision tree section seems hurried towards the end. ** I am not a spammer
with audio 2:54
To whom it may concern,
if probability (P) = 0,
then Gini impurity becomes 1,
as per the formula. Then why does it always range from 0 to 0.5?
Thank you,
Subhajit
The Gini impurity formula = 1 - summation(1 to n) of P^2, which includes both the P(Yes) and P(No) terms. Since the probabilities sum to 1, if P(Yes) = 0 then P(No) = 1, so Gini = 1 - 0 - 1 = 0; Gini never becomes 1 even when one P = 0.
For further information,
www.bogotobogo.com/python/scikit-learn/scikt_machine_learning_Decision_Tree_Learning_Informatioin_Gain_IG_Impurity_Entropy_Gini_Classification_Error.php
Where will we find the recordings of the last 3 days' sessions?
Is the Gini max value of 0.5 applicable only to this example or to all problems? czcams.com/video/dGNJ-feQLC4/video.html
Ignore spammers like the missing-value thing; spammers are less than 0.5%, so drop them and ignore them during your live session chats 🙏 If you don't care, they will not post anything later! Thanks, bro, for your live sessions. We want you to continue them.
Sir, please learn video editing skills
😂