Understanding Pipeline in Machine Learning with Scikit-learn (sklearn pipeline)

  • Added 11. 09. 2024

Comments • 26

  • @AnkitGupta005
    @AnkitGupta005 2 years ago +5

    Short and crisp. Thank you!

  • @fabianaltendorfer11
    @fabianaltendorfer11 1 year ago +4

    That's a great introduction to pipelines! Thanks

  • @kianaliaghat7740
    @kianaliaghat7740 2 years ago +3

    Thanks for your short, useful introduction!
    It helped me a lot.

  • @maxwellpatten9227
    @maxwellpatten9227 7 months ago +2

    This is excellent. Thank you

  • @Hajar1992ful
    @Hajar1992ful 2 years ago +1

    Thank you for this useful video!

  • @muhammadjamalahmed8664
    @muhammadjamalahmed8664 3 years ago +2

    Love your tutorials.

  • @sebacortes8812
    @sebacortes8812 1 year ago +1

    Thank you very much, greetings from Chile!!

  • @aszx-tv4pq
    @aszx-tv4pq 3 months ago

    Hi there, very happy with this channel! Could you explain the pipeline part a bit more simply?

  • @hiba8484
    @hiba8484 1 year ago +1

    Thanks, it's really helpful

  • @nachoeigu
    @nachoeigu 2 years ago +1

    I have one big question: what is the difference between building a machine learning application with a Pipeline and building one with an OOP technique? They seem the same to me.

    • @DrDataScience
      @DrDataScience 2 years ago +1

      Everything in Python is defined as a class, so we use OOP all the time. Pipeline provides a nice, flexible way to combine multiple transformers and an estimator.
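
      A minimal sketch of what this reply describes, chaining several transformers with a final estimator (the toy data, step names, and choice of transformers are illustrative assumptions, not taken from the video):

      from sklearn.datasets import make_classification
      from sklearn.impute import SimpleImputer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import Pipeline
      from sklearn.preprocessing import StandardScaler

      # Toy data so the sketch runs on its own
      X, y = make_classification(n_samples=200, n_features=5, random_state=0)

      # Every step except the last must be a transformer; the last step is the estimator
      clf = Pipeline([
          ("impute", SimpleImputer(strategy="mean")),
          ("scale", StandardScaler()),
          ("model", LogisticRegression()),
      ])

      clf.fit(X, y)           # data flows through each transformer, then into the estimator
      print(clf.score(X, y))  # the whole chain behaves like a single estimator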

  • @adiver_
    @adiver_ 8 months ago +2

    Hello,
    Since you imported PolynomialFeatures and transformed the independent variable (X_train) so it could be fitted in a polynomial regression, why did you put LinearRegression() as the estimator in the last tuple of the list? Shouldn't you have used a polyfit function or something else?
    NOTE: I am a beginner here, so the doubts may be silly.

    • @DrDataScience
      @DrDataScience 8 months ago +1

      Good question! We have already created all the polynomial terms that we need, i.e., x, x^2, x^3, etc. Thus, we can now view this as a linear regression problem with respect to the "new/artificial" features.
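
      A small sketch of the idea in this reply: PolynomialFeatures only builds the new columns x, x^2, x^3, and LinearRegression then fits a model that is linear in those columns (the numbers below are made up for illustration):

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.preprocessing import PolynomialFeatures

      x = np.array([[1.0], [2.0], [3.0], [4.0]])   # one original feature, four samples

      # Expand x into [x, x^2, x^3]; include_bias=False leaves the intercept to LinearRegression
      poly = PolynomialFeatures(degree=3, include_bias=False)
      X_poly = poly.fit_transform(x)
      print(poly.get_feature_names_out())          # ['x0' 'x0^2' 'x0^3']

      # The target is nonlinear in x but linear in the new/artificial columns,
      # so an ordinary LinearRegression works as the final estimator.
      y = 2 * x.ravel() ** 3 - x.ravel() + 5
      reg = LinearRegression().fit(X_poly, y)
      print(reg.coef_, reg.intercept_)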

    • @adiver_
      @adiver_ 8 months ago

      I appreciate your reply; it cleared up exactly what I was asking. Thanks, @DrDataScience

    • @adiver_
      @adiver_ 8 months ago

      @DrDataScience One more thing I need to ask, if you can spare some time: I have seen people scale the features with StandardScaler() before the PolynomialFeatures and estimator steps in a Pipeline. Is scaling a necessary step, or can we skip it?
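
      For reference, a sketch of the arrangement this question describes, with StandardScaler placed before PolynomialFeatures in the Pipeline (the data, degree, and estimator are assumptions chosen only to make the sketch run):

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import Pipeline
      from sklearn.preprocessing import PolynomialFeatures, StandardScaler

      rng = np.random.default_rng(0)
      X = rng.uniform(-3, 3, size=(100, 1))
      y = 0.5 * X.ravel() ** 3 - X.ravel() + rng.normal(scale=0.5, size=100)

      # Scaling runs first, then the polynomial expansion, then the estimator
      pipe = Pipeline([
          ("scale", StandardScaler()),
          ("poly", PolynomialFeatures(degree=3)),
          ("reg", LinearRegression()),
      ])
      pipe.fit(X, y)
      print(pipe.score(X, y))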

  • @gabrielmarchioli4669
    @gabrielmarchioli4669 2 years ago

    Great video. Helped me a lot

  • @rishidixit7939
    @rishidixit7939 3 months ago

    Why are all arrays converted to column vectors when using sklearn?

    • @DrDataScience
      @DrDataScience 3 months ago +1

      Because each column corresponds to a feature or attribute of your data set. Thus, the number of elements in that column vector is equal to the number of samples.
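
      A tiny illustration of this reply: scikit-learn expects a 2-D array with one column per feature and one row per sample, so a 1-D array is usually reshaped into a single column (the numbers are arbitrary):

      import numpy as np
      from sklearn.linear_model import LinearRegression

      x = np.array([1.0, 2.0, 3.0, 4.0])   # shape (4,): a 1-D array of 4 samples
      y = np.array([2.0, 4.0, 6.0, 8.0])

      # Reshape into a column: 4 rows (samples) by 1 column (the single feature)
      X = x.reshape(-1, 1)
      print(X.shape)                        # (4, 1)

      # fit expects the 2-D (n_samples, n_features) layout
      model = LinearRegression().fit(X, y)
      print(model.predict(np.array([[5.0]])))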

  • @burakakay6632
    @burakakay6632 1 year ago

    Thank you :=}