Comments •

  • @nithinisfun
    @nithinisfun 3 years ago +16

    Every time you code, I learn something new. Please never stop coding end-to-end in your videos. Thank you, you are amazing!

  • @Phateau
    @Phateau 3 years ago +1

    Really appreciate the effort you put into the video. This is world class. Thank you.

  • @AnubhavChhabra
    @AnubhavChhabra 2 years ago +3

    Great explanation! Making lives easier one layer at a time :)

  • @ephi124
    @ephi124 3 years ago +2

    I am writing a research paper in this area. I can't wait!

  • @mikhaeldito
    @mikhaeldito 3 years ago +2

    Thank you for sharing your knowledge. This is an amazing tutorial with no inaccessible jargon. 10/10, highly recommend.

  • @priyankasagwekar3408
    @priyankasagwekar3408 2 years ago

    This video was really helpful. It was a one-hour bootcamp covering everything about ANNs with PyTorch: from loading datasets and defining the neural network architecture to optimizing the hyperparameters with Optuna.

  • @lokeshkumargmd
    @lokeshkumargmd 3 years ago

    This is the first time I am watching one of your videos. Very informative! Thanks for sharing 😇

  • @shaikrasool1316
    @shaikrasool1316 3 years ago

    Every time, something new... thank you so much.

  • @sambitmukherjee1713
    @sambitmukherjee1713 2 years ago

    Super cool Abhishek. Loved every section, especially the "poor man's early stopping"... ;-)

  • @TheOraware
    @TheOraware 3 years ago

    Wonderful, mate. Much appreciated, thanks for sharing it.

  • @bhumikachawla9149
    @bhumikachawla9149 8 months ago

    Great video, thank you!

  • @RajnishKumarSingh
    @RajnishKumarSingh 3 years ago

    Love the fun part👌

  • @neomatrix369
    @neomatrix369 3 years ago +3

    Love the video. Hyperparam optimisation is one of my favs, and this video tops it all. Now I gotta do this on my model training! :tada:

  • @yasserahmed2781
    @yasserahmed2781 3 years ago

    what a gem

  • @MisalRaj
    @MisalRaj 3 years ago

    👏👏

  • @sindhujaj5907
    @sindhujaj5907 2 years ago

    Thanks for the amazing video! In this example, will the hidden size and dropout change for each hidden layer, or remain the same across all hidden layers?

  • @tiendat3602
    @tiendat3602 3 years ago

    Awesome. One question, though: how do you deal with overfitting and underfitting while building the end-to-end fine-tuned model?

  • @kaspereinarson1061
    @kaspereinarson1061 2 years ago

    Thanks for a great video! Just to be clear: you're using standard 5-fold CV, thus optimising for the set of hyperparameters that gives the best loss across (the mean of) all 5 folds. Wouldn't it be more suitable to split the training data into train/val and then optimize the hyperparameters individually for each fold (nested CV)?

  • @jeenakk7827
    @jeenakk7827 3 years ago +6

    That was a very informative session. Is hyperparameter tuning covered in your book? I think I should buy a copy!! Thanks.

    • @abhishekkrthakur
      @abhishekkrthakur 3 years ago +5

      Yes, it is, but if you just want hyperparameter optimization, watch my other video.

  • @avinashmatani9980
    @avinashmatani9980 3 years ago

    Do you have any videos for learning the basics of what you did at the start? For example, at the start you created a class.

  • @priyankasagwekar3408
    @priyankasagwekar3408 2 years ago

    For those looking to load the models and use them on the test dataset:
    import torch
    # Recreate the model with the same architecture, then load the saved weights
    model = TheModelClass(*args, **kwargs)
    model.load_state_dict(torch.load(PATH))
    model.eval()  # switch to evaluation mode (disables dropout etc.)

  • @kuberchaurasiya
    @kuberchaurasiya 3 years ago

    Great. Waiting eagerly.
    Will you use (sklearn) pipelines?

  • @MadrissS
    @MadrissS 3 years ago

    Hi Abhishek, very cool video as always. Don't you think we should reset early_stopping_counter to 0 after a new best_loss is found (line 62 at 41:20 in the video)? Thanks!
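
    A minimal sketch of the suggested reset, with stand-in numbers in place of the video's training loop:

        import numpy as np

        def validate(epoch):                     # stand-in for the real validation loss
            return 1.0 / (epoch + 1) + 0.3 * (epoch > 5)

        best_loss = np.inf
        early_stopping_counter = 0
        early_stopping_iter = 3
        for epoch in range(50):
            valid_loss = validate(epoch)
            if valid_loss < best_loss:
                best_loss = valid_loss
                early_stopping_counter = 0       # the reset this comment asks about
            else:
                early_stopping_counter += 1
            if early_stopping_counter > early_stopping_iter:
                break                            # stop after several epochs without improvement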

  • @kannansingaravelu
    @kannansingaravelu 2 years ago

    Hi Abhishek, just landed on this video. I am not sure whether you addressed this earlier, but I am curious about your preference for torch as against tensorflow or keras.

  • @stilgarfifrawi7155
    @stilgarfifrawi7155 3 years ago

    Great video... but when can we get a mustache tutorial?

  • @RajnishKumarSingh
    @RajnishKumarSingh 3 years ago

    Sir,
    what does the best trial value tell us after each trial?
    I have used it with LightGBM and it seems to work, but it doesn't do well on the test dataset.
    After every trial I calculated the accuracy; it gives me approximately 0.9942 each time, not identical, but the first two digits after the decimal point are the same.

  • @siddharthsinghbaghel441

    Do you have any blogs? I like reading more than watching.

  • @AayushThokchom
    @AayushThokchom 3 years ago

    A general question: is HPO overhyped? If an ensemble performs much better, should we invest time in HPO given that we have limited time?
    Thoughts!

  • @priyankasagwekar3408
    @priyankasagwekar3408 2 years ago

    I have 5 models saved, one for each fold, at the end of execution. If I am not wrong, they are essentially the same model saved 5 times.
    I was looking for a way to load the models and use them on the test dataset. The PyTorch documentation shows the following way:
    model = TheModelClass(*args, **kwargs)
    model.load_state_dict(torch.load(PATH))
    model.eval()
    Now initialising the model object (step 1) is an issue in the absence of logs and knowledge of the exact architecture of the best model.
    Also, you need to set the Optuna sampler seed to reproduce the results.
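
    A minimal sketch of seeding the sampler so trials are reproducible (the objective here is a hypothetical stand-in):

        import optuna

        def objective(trial):
            x = trial.suggest_float("x", -10, 10)
            return (x - 2) ** 2

        # A seeded sampler makes the sequence of suggested parameters reproducible
        sampler = optuna.samplers.TPESampler(seed=42)
        study = optuna.create_study(direction="minimize", sampler=sampler)
        study.optimize(objective, n_trials=20)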

  • @HabiburRahamaniit
    @HabiburRahamaniit 3 years ago +1

    Respected sir, I have a question: if we have a dataset with variable-length inputs and variable-length outputs, how would we build or train a neural network model for that dataset?

    • @renatoviolin
      @renatoviolin 3 years ago

      Maybe a Recurrent Neural Network (RNN); those aim to solve this problem of a different input size for each sample.
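
      A minimal sketch of that idea, assuming padded batches with per-sample lengths:

          import torch
          import torch.nn as nn
          from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence

          # Three sequences of different lengths; each step is an 8-dim feature vector
          seqs = [torch.randn(5, 8), torch.randn(3, 8), torch.randn(7, 8)]
          lengths = torch.tensor([len(s) for s in seqs])
          padded = pad_sequence(seqs, batch_first=True)    # (3, 7, 8), zero-padded
          packed = pack_padded_sequence(padded, lengths,
                                        batch_first=True, enforce_sorted=False)

          rnn = nn.GRU(input_size=8, hidden_size=16, batch_first=True)
          _, h_n = rnn(packed)      # final hidden state per sequence: (1, 3, 16)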

  • @ankushjamthikar9780
    @ankushjamthikar9780 2 years ago

    What should I do if I want to tune the activation function in the neural network as well? How and where should I include the line of code for it?
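
    A minimal sketch of one way to do it, using a hypothetical objective rather than the video's exact code:

        import optuna
        import torch
        import torch.nn as nn

        def objective(trial):
            # Suggest the activation by name, then instantiate it inside the model
            name = trial.suggest_categorical("activation", ["ReLU", "Tanh", "ELU"])
            model = nn.Sequential(nn.Linear(10, 32), getattr(nn, name)(), nn.Linear(32, 1))
            x, y = torch.randn(64, 10), torch.randn(64, 1)     # stand-in data
            return nn.functional.mse_loss(model(x), y).item()  # stand-in for real training

        study = optuna.create_study(direction="minimize")
        study.optimize(objective, n_trials=10)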

  • @neomatrix369
    @neomatrix369 3 years ago

    Any plans to make videos using other hyperparam optimisation frameworks? I have a wishlist I can share if you like ;)

    • @abhishekkrthakur
      @abhishekkrthakur 3 years ago

      Check out my other video :) and send the list too, please.

  • @oligibbons
    @oligibbons 1 year ago

    Why do you keep the same number of neurons in every layer? How would you change your approach for deep learning models of different shapes?
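
    A minimal sketch of letting each layer get its own width and dropout, following the same Optuna pattern (names are hypothetical):

        import torch.nn as nn

        def build_model(trial, in_features=10):
            layers, width = [], in_features
            for i in range(trial.suggest_int("n_layers", 1, 4)):
                out = trial.suggest_int(f"n_units_l{i}", 16, 128)      # per-layer width
                drop = trial.suggest_float(f"dropout_l{i}", 0.1, 0.5)  # per-layer dropout
                layers += [nn.Linear(width, out), nn.ReLU(), nn.Dropout(drop)]
                width = out
            layers.append(nn.Linear(width, 1))
            return nn.Sequential(*layers)   # e.g. inside an objective: model = build_model(trial)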

  • @Prasad-MachineLearningInTelugu

    🧚‍♀️🧚‍♀️🧚‍♀️🧚‍♀️🧚‍♀️

  • @valentinogolob9137
    @valentinogolob9137 3 years ago +1

    Shouldn't we set the early_stopping_counter to zero each time the valid_loss is smaller than the best_loss?

  • @hiteshvaidya3331
    @hiteshvaidya3331 3 years ago

    Why did you make the loss function static?

  • @jonatan01i
    @jonatan01i 3 years ago

    You could speed up evaluation if you put the prediction in a torch.no_grad() context.
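
    A minimal sketch of the suggestion, with a stand-in model and batch:

        import torch
        import torch.nn as nn

        model = nn.Linear(10, 1)     # stand-in model
        x = torch.randn(32, 10)      # stand-in batch

        model.eval()                 # switch layers like dropout to eval behaviour
        with torch.no_grad():        # no autograd graph is built, so inference is faster and leaner
            preds = model(x)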

  • @jamesmiller2521
    @jamesmiller2521 3 years ago +5

    Where is your GM hoodie? 😤😁

  • @bjaniak102
    @bjaniak102 3 years ago

    What is Julian from Trailer Park Boys doing in your thumbnail though?

  • @hasanmoni3928
    @hasanmoni3928 3 years ago

    How can I buy your book in Bangladesh?

  • @vasudhajoshi4766
    @vasudhajoshi4766 2 years ago

    Hello Sir,
    I followed this tutorial to estimate the hyperparameters for my CNN model. When I freeze the initial layers of my model, I get an error in the line:
    "optimizer = getattr(optim, param['optimizer'])(filter(lambda p: p.requires_grad, model.parameters()), lr=param['learning_rate'])"
    where param['optimizer'] is trial.suggest_categorical('optimizer', ['Adam', 'RMSprop']) and param['learning_rate'] is trial.suggest_loguniform('learning_rate', 1e-6, 1e-3).
    The error is IndexError: too many indices for tensor of dimension 1.
    Can you please explain why I am facing this error?

    • @Falconoo7383
      @Falconoo7383 2 years ago

      I want this for my CNN+LSTM model as well. If you resolve the error, can you please help me?

  • @mazharmumbaiwala9244
    @mazharmumbaiwala9244 3 years ago

    At 34:42, what's the use of the `forward` function?

    • @fredoliveira1223
      @fredoliveira1223 2 years ago

      It's a method of nn.Module. When you define a model, the forward function is where you define how the data should pass through the layers of your neural network to make a prediction.
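
      A minimal sketch with a hypothetical two-layer model; calling model(x) dispatches to forward:

          import torch
          import torch.nn as nn

          class TinyNet(nn.Module):
              def __init__(self):
                  super().__init__()
                  self.fc1 = nn.Linear(10, 32)
                  self.fc2 = nn.Linear(32, 1)

              def forward(self, x):
                  # The data flow: input -> fc1 -> ReLU -> fc2 -> output
                  return self.fc2(torch.relu(self.fc1(x)))

          model = TinyNet()
          out = model(torch.randn(4, 10))   # __call__ runs forward under the hood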

  • @rubenallaert9654
    @rubenallaert9654 2 years ago

    Hi, where can I find the code?

  • @Raghhuveer
    @Raghhuveer 3 years ago

    You said this is just a dummy example; how would you use such methods on bigger problems, say training an RCNN?

  • @nikolabacic9790
    @nikolabacic9790 3 years ago +1

    Did not tune random seed smh