Llama/Wizard LM Finetuning with Huggingface on RunPod

  • Published 15 Sep 2023
  • A demo I made to show how to fine-tune a WizardLM model with Huggingface and peft.
    Presentation: docs.google.com/presentation/...
    Github: github.com/gmongaras/Wizard_Q...
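
    For a quick sense of the approach before watching: a minimal sketch of a transformers + peft LoRA setup along the lines of what the video demonstrates. The model id, target modules, and hyperparameters below are placeholders; the presentation and repo linked above have the actual values.

    ```python
    # Minimal LoRA fine-tuning setup (illustrative; not the repo's exact code).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    model_id = "WizardLM/WizardLM-13B-V1.2"  # placeholder: any Llama/WizardLM checkpoint

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,  # half precision to fit on a single RunPod GPU
        device_map="auto",
    )

    # Attach LoRA adapters so only a small fraction of the weights gets trained.
    lora_config = LoraConfig(
        r=8,
        lora_alpha=16,
        target_modules=["q_proj", "v_proj"],  # attention projections in Llama-style models
        lora_dropout=0.05,
        bias="none",
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # confirm only the adapter weights are trainable
    ```

    From there the wrapped model can be passed to a standard transformers Trainer (or a manual training loop) like any other causal LM.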

Comments • 7

  • @darrenhinde2971 · 7 months ago · +1

    Thank you for this video, was looking for just this!

  • @kasper52786 · 6 months ago

    Thanks, Gabriel, for the amazing video. If anyone is running into the error "ValueError: Invalid pattern: '**' can only be an entire path component" when loading the squad dataset, a quick and simple fix is to upgrade the datasets package using the "pip install -U datasets" command.
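
    For reference, a quick way to verify the fix described above after upgrading (the dataset name matches the squad dataset used in the video; the rest is just a sanity check):

    ```python
    # After running: pip install -U datasets
    # loading squad should no longer raise
    # "ValueError: Invalid pattern: '**' can only be an entire path component".
    from datasets import load_dataset

    dataset = load_dataset("squad")
    print(dataset["train"][0])  # prints the first training example if the load worked
    ```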

  • @user-hw1bq3st7q · 10 months ago

    Thank you very much for the video; it's very useful information.

  • @philtoa334 · 9 months ago

    Nice talk.

  • @hocklintai3391 · 9 months ago

    Is it possible to run the scripts on another machine but use a RunPod cloud GPU to do the training and inference via some API call? I tried several approaches, but it didn't pan out. Can you create a video on that?

    • @gabrielmongaras · 9 months ago · +1

      Yep, the scripts should work on a different machine as long as you set up your environment properly. You can always train on RunPod, then download the trained model locally using Hugging Face (or another method). What problems are you running into?
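
      One way to do this (the repo names below are placeholders, not anything from the video): push the trained LoRA adapters to the Hugging Face Hub from the RunPod pod, then pull them onto your local machine.

      ```python
      # On the RunPod pod, after training (placeholder repo name):
      #   model.push_to_hub("your-username/wizardlm-lora")

      # On the local machine, download the base model plus the trained adapters.
      from peft import PeftModel
      from transformers import AutoModelForCausalLM, AutoTokenizer

      base_id = "WizardLM/WizardLM-13B-V1.2"  # placeholder base checkpoint
      base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
      model = PeftModel.from_pretrained(base, "your-username/wizardlm-lora")  # fetches the adapters from the Hub
      tokenizer = AutoTokenizer.from_pretrained(base_id)
      ```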

    • @luciensanchez-zuber3970 · 8 months ago

      @gabrielmongaras I think he's talking about writing the code locally and using RunPod GPUs to do the work. @hocklintai3391 I haven't used RunPod yet, but you should be able to connect to the machine over SSH from your terminal (or tmux, or even VS Code) and then run your scripts.