Gradient Boosting and XGBoost in Machine Learning: Easy Explanation for Data Science Interviews

  • Published 21 Jul 2024
  • Questions about Gradient Boosting frequently appear in data science interviews. In this video, I cover what the Gradient Boosting method and XGBoost are, teach you how I would describe the architecture of gradient boosting, and go over some common pros and cons associated with gradient-boosted trees.
    🟢Get all my free data science interview resources
    www.emmading.com/resources
    🟡 Product Case Interview Cheatsheet www.emmading.com/product-case...
    🟠 Statistics Interview Cheatsheet www.emmading.com/statistics-i...
    🟣 Behavioral Interview Cheatsheet www.emmading.com/behavioral-i...
    🔵 Data Science Resume Checklist www.emmading.com/data-science...
    ✅ We work with Experienced Data Scientists to help them land their next dream jobs. Apply now: www.emmading.com/coaching
    // Comment
    Got any questions? Something to add?
    Write a comment below to chat.
    // Let's connect on LinkedIn:
    / emmading001
    ====================
    Contents of this video:
    ====================
    00:00 Introduction
    01:01 Gradient Boosting
    02:11 Gradient-boosted Trees
    02:54 Algorithm
    05:53 Hyperparameters
    07:55 Pros and Cons
    09:00 XGBoost

Comments • 27

  • @jennyhuang7603
    @jennyhuang7603 1 year ago +3

    At 5:10, why is the MSE gradient r_i equal to Y − F(X) instead of 2(Y − F(X))? Or does the coefficient not matter?
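
    The coefficient indeed doesn't matter, and the usual convention hides it: with a 1/2 in front of the squared loss, the negative gradient is exactly the residual:

    $$L(Y, F(X)) = \tfrac{1}{2}\,(Y - F(X))^2 \quad\Rightarrow\quad r = -\frac{\partial L}{\partial F(X)} = Y - F(X)$$

    Without the 1/2 you would get 2(Y − F(X)), which points in the same direction; the constant factor is absorbed into the learning rate.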

  • @anand3064
    @anand3064 6 months ago +3

    Beautifully written notes

  • @aaronsayeb6566
    @aaronsayeb6566 25 days ago

    There is a mistake in the representation of the algorithm: the definitions of r_i, L(Y, F(X)), and grad r_i = Y − F(X) can't all hold at the same time. I think r_i = Y − F(X), and grad r_i should be something else (right?)

  • @annialevko5771
    @annialevko5771 9 months ago +3

    Hi! I have a question: how does the parallel tree building work? Based on gradient boosting, the algorithm needs the error from the previous model in order to create the new one, so I don't really understand in what way this is parallelized.

    • @shashizanje
      @shashizanje 4 months ago +1

      It's parallelized within the construction of a single tree: the algorithm can work on multiple independent features in parallel to reduce computation time. Suppose it has to find the root node; it must check the information gain of every single feature and then decide which feature is best for the root. Instead of calculating information gain one feature at a time, it can calculate the IG of multiple features in parallel (see the sketch below).
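
      A minimal sketch of that idea (illustrative only, not XGBoost's actual implementation; the boosting rounds stay sequential, but the gain scan over candidate features at a node is embarrassingly parallel):

      ```python
      # Sketch: score every feature's best split in parallel within one node.
      # Assumes numpy arrays; gain is the XGBoost-style structure score
      # (regularization constant gamma and the global 1/2 factor omitted).
      from concurrent.futures import ThreadPoolExecutor
      import numpy as np

      def best_split_for_feature(x, grad, hess, lam=1.0):
          """Scan one feature's sorted values; return (best_gain, threshold)."""
          order = np.argsort(x)
          g, h = grad[order], hess[order]
          G, H = g.sum(), h.sum()
          G_left, H_left = np.cumsum(g)[:-1], np.cumsum(h)[:-1]
          gain = (G_left**2 / (H_left + lam)
                  + (G - G_left)**2 / (H - H_left + lam)
                  - G**2 / (H + lam))
          best = int(np.argmax(gain))
          return gain[best], x[order][best]

      def best_split(X, grad, hess):
          # Each feature's scan is independent, so features are scored in parallel.
          with ThreadPoolExecutor() as pool:
              scored = pool.map(
                  lambda j: (*best_split_for_feature(X[:, j], grad, hess), j),
                  range(X.shape[1]))
          return max(scored)  # (gain, threshold, feature_index) of the winner
      ```

      Real XGBoost does this scan over pre-sorted/histogram structures with OpenMP threads, but the independence across features is the same.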

  • @jet3111
    @jet3111 1 year ago +2

    Thank you for the very informative video. It came up in my interview yesterday. I also got a question on time series forecasting and preventing data leakage. I think it would be great to have a video about it.

  • @wallords
    @wallords 8 months ago

    How do you add L1 regularization to a tree???
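
    For reference, XGBoost applies the L1 penalty (reg_alpha) to the leaf weights rather than to the tree structure. With G_j and H_j the sums of first and second loss derivatives over the instances in leaf j, the objective and the resulting leaf weight are, roughly:

    $$\text{Obj} = \sum_i l(y_i, \hat{y}_i) + \gamma T + \tfrac{1}{2}\lambda \lVert w \rVert^2 + \alpha \lVert w \rVert_1, \qquad w_j^* = -\frac{\operatorname{sign}(G_j)\,\max(\lvert G_j \rvert - \alpha,\ 0)}{H_j + \lambda}$$

    So L1 soft-thresholds the gradient sum: leaves whose G_j falls below alpha get weight exactly zero.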

  • @elvykamunyokomanunebo1441

    Hi Emma,
    I'm struggling to understand how to build a model on residuals:
    1) Do I predict the residuals and then get the MSE of the residuals? What would be the point/use of that?
    2) Do I somehow re-run the model with some factor that accounts for more of the variability, e.g. adding more (important) features that reduce the MSE/residual, then re-run it with a new feature to account for the remaining residual, until there is no more reduction in MSE/residual?

    • @poshsims4016
      @poshsims4016 1 year ago

      Ask ChatGPT every question you just typed. Preferably GPT-4.

    • @Heinz3792
      @Heinz3792 4 months ago +1

      It's important to understand what the residual is. The residual is a vector giving the magnitude of the prediction error AND its direction, i.e. the gradient. Thus, regarding your questions:
      1) We predict the residual with a weak model, h, in order to know in which direction to move the prediction of the overall model F_i(X) so that the error is reduced. We assume h makes a decent prediction, and thus we treat it like the gradient.
      2) We then calculate alpha, the step-size (weighting) parameter, in order to know HOW FAR to move in the direction of the gradient which h provides, i.e. how much weight to give model h. Minimizing the loss function gives us this value and keeps us from over- or undershooting the step size.
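
      A minimal sketch of the loop described above, under squared loss so the negative gradient is exactly the residual (sklearn's DecisionTreeRegressor stands in for the weak model h):

      ```python
      # Gradient boosting on residuals: each round fits a weak tree to the
      # current residual and adds a damped step to the ensemble prediction.
      import numpy as np
      from sklearn.tree import DecisionTreeRegressor

      def fit_gradient_boosting(X, y, n_rounds=100, lr=0.1, max_depth=3):
          f0 = y.mean()                  # F_0: constant initial model
          F = np.full(len(y), f0)
          trees = []
          for _ in range(n_rounds):
              residual = y - F           # negative gradient of 1/2*(y - F)^2
              h = DecisionTreeRegressor(max_depth=max_depth).fit(X, residual)
              F = F + lr * h.predict(X)  # move a small step in h's direction
              trees.append(h)
          return f0, trees

      def predict(f0, trees, X, lr=0.1):
          return f0 + lr * sum(t.predict(X) for t in trees)
      ```

      To question 1 above: the residuals aren't modeled for their own sake; each residual fit is a correction added to the ensemble, and the residuals shrinking round over round is exactly the training loss going down.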

  • @kandiahchandrakumaran8521
    @kandiahchandrakumaran8521 2 months ago

    Excellent video, many thanks.
    Could you kindly make a video on time-to-event analysis with survival SVM, RSF, or XGBLC?

  • @user-hq4ge6no3p
    @user-hq4ge6no3p 2 months ago +1

    An excellent video

  • @nihalnetha96
    @nihalnetha96 1 month ago

    Is there a way to get the Notion notes?

  • @zhenwang5872
    @zhenwang5872 1 year ago

    I usually watch Emma's videos when I'm doing revision.

  • @emmafan713
    @emmafan713 1 year ago +4

    I am confused about the notation: h_i is a function to predict r_i, and r_i is the gradient of the loss function w.r.t. the last prediction F(X). So h_i should be similar to r_i; why is h_i similar to the gradient of r_i?

    • @Heinz3792
      @Heinz3792 4 months ago +1

      I believe there is an error in this video. r_i is the gradient of the loss function w.r.t. the CURRENT F(X), i.e. F_i(X). The NEXT weak model h_i+1 is then trained to predict r_i, the PREVIOUS residual. Alternatively, all of this could be written with i-1 instead of i, and i instead of i+1.
      TL;DR: Emma should have called the first step "compute residual r_i-1", not r_i, and in the gradient formula she should have written r_i-1.
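
      Written with an explicit round index m, the recursion is unambiguous (standard gradient-boosting notation, not the video's exact symbols):

      $$r_m = -\left[\frac{\partial L(y, F(x))}{\partial F(x)}\right]_{F = F_{m-1}}, \qquad h_m \approx r_m, \qquad F_m(x) = F_{m-1}(x) + \nu\, h_m(x)$$

      For squared loss with the 1/2 convention this gives r_m = y − F_{m−1}(x), so each weak learner fits the current residual itself, not a gradient of the residual.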

  • @Leo-xd9et
    @Leo-xd9et 1 year ago

    Really like the way you use Notion!

    • @emma_ding
      @emma_ding  1 year ago

      Thanks for the feedback, Leo! I tried out a bunch of different presentation methods before this one, so I'm glad to hear you're finding this platform useful! 😊

  • @emma_ding
    @emma_ding  1 year ago +3

    Many of you have asked me to share my presentation notes, and now… I have them for you! Download all the PDFs of my Notion pages at www.emmading.com/get-all-my-free-resources. Enjoy!

    • @SanuSatyam
      @SanuSatyam 1 year ago

      Thanks a lot. Can you please make a video on time series analysis? Thanks in advance!

  • @objectobjectobject4707
    @objectobjectobject4707 3 months ago

    Okay, subscribed!

  • @riswandaayu5930
    @riswandaayu5930 9 months ago

    Hello Miss, thank you for the knowledge. Could I request the file from this presentation?

  • @PhucHoang-ng4vh
    @PhucHoang-ng4vh 8 months ago +8

    Just read out loud, no explanation at all.

  • @ermiaazarkhalili5586
    @ermiaazarkhalili5586 1 year ago +1

    Any chance of getting the slides?

    • @NguyenSon-ew9wn
      @NguyenSon-ew9wn 1 year ago +1

      Agree. Hope to get those notes.

    • @emma_ding
      @emma_ding  1 year ago +1

      Yes! Download all the PDFs of my Notion pages at emmading.com/resources by navigating to the individual posts. Enjoy!

  • @faisalsal1
    @faisalsal1 5 months ago +2

    She just read the text with zero knowledge of the content. Not good.