Fine-Tuning Mistral 7B

  • Published 22 Aug 2024
  • This session is led by Chris and Greg!
    You'll learn what you need to know about Mistral 7B, and how to get it fine-tuned for your application!
    Agenda with additional resources: docs.google.co...

Comments • 17

  • @AI-Makerspace · 9 months ago · +6

    Google Colab: colab.research.google.com/drive/1JtrVh--bcPR-CR8QNOyXd3Z5eZt0WgOw?usp=sharing
    Slides: www.canva.com/design/DAFzn7Uynrc/IMrrg6GSL_2NWpAnWXfobQ/edit?DAFzn7Uynrc&

  • @vibhugoel5525 · 7 months ago

    This made my day. Perfect and clear explanation.

  • @ramsastry8945 · 13 hours ago

    Great video. But one thing I am curious about: why is the input to fine-tuning reversed? I mean, asking the peft_model to generate the instruction given the response. How does one know a priori that the input ought to be preprocessed this way? I am trying to build a peft_model using the same base Mistral-7B, but in my case the dataset is "fingpt-sentiment-train": tweets with 5 different classes of sentiment. I am just passing the dataset as is (with some pre-processing), i.e., give the tweet and get the sentiment. Cheers, Ram
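
    A minimal sketch of the two formatting directions discussed above. Field names and prompt templates here are hypothetical; the notebook's actual template may differ:

    ```python
    def format_reversed_example(example):
        # "Reversed" instruction tuning: the model is shown the response
        # and asked to produce the instruction (the direction used in the
        # tutorial). Field names are illustrative.
        return (
            f"### Response:\n{example['response']}\n\n"
            f"### Instruction:\n{example['instruction']}"
        )

    def format_sentiment_example(example):
        # For a classification set like fingpt-sentiment-train, the usual
        # direction (tweet in, sentiment out) is the natural choice.
        return (
            f"### Tweet:\n{example['text']}\n\n"
            f"### Sentiment:\n{example['label']}"
        )
    ```

    The direction is a modeling choice, not a requirement: you train the model on whatever mapping you want it to learn at inference time, so tweet-to-sentiment is the right shape for that dataset.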

  • @hungle2514 · 9 months ago · +1

    Thank you so much for this tutorial.

  • @seinaimut · 9 months ago · +1

    Thanks for this tutorial, bro!

  • @thisurawz · 7 months ago · +3

    Can you do a video on fine-tuning a multimodal LLM (Video-LLaMA, LLaVA, or CLIP) with a custom multimodal dataset containing images and text, for relation extraction or a specific task? Could you do it using an open-source multimodal LLM and open multimodal datasets, so anyone can further their experiments with the help of your tutorial? Could you also talk about how to boost the performance of the fine-tuned model using prompt tuning in the same video?

    • @AI-Makerspace · 7 months ago · +2

      We'll add this suggestion to our backlog of potential future events for sure! Keep the ideas coming!

    • @thisurawz · 7 months ago

      @AI-Makerspace Thanks

  • @jiehuali3065 · 1 month ago

    I have a question about the tokenizer used in the tutorial. Why is "mistralai/Mistral-7B-v0.1" used instead of "mistralai/Mistral-7B-Instruct-v0.1"? By the way, the model itself uses "mistralai/Mistral-7B-Instruct-v0.1". Thanks.

    • @AI-Makerspace · 1 month ago · +1

      There's no specific reason - other than (at the time) the tokenizers were effectively the same! This has since changed - and it's recommended to use the `Instruct-v0.1` tokenizer.

  • @horyekhunley · 5 months ago

    If I have hardware constraints, can I use a small model such as TinyLlama?
    Also, how can I perform RAG on a CSV dataset?

    • @AI-Makerspace · 5 months ago

      You could!
      For the RAG question - you could use a CSVRetriever!
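
      The exact retriever name varies by framework (LangChain ships a CSVLoader, LlamaIndex has CSV readers); the core idea is just turning each CSV row into a retrievable text "document". A framework-free sketch, with a naive keyword-overlap score standing in for embedding similarity:

      ```python
      import csv
      import io

      def load_csv_rows(csv_text):
          # Turn each CSV row into a small text document of "column: value" pairs.
          reader = csv.DictReader(io.StringIO(csv_text))
          return ["; ".join(f"{k}: {v}" for k, v in row.items()) for row in reader]

      def retrieve(query, docs, k=2):
          # Naive keyword-overlap scoring; a real RAG setup would rank by
          # embedding similarity instead.
          q_words = set(query.lower().split())
          scored = sorted(
              docs,
              key=lambda d: len(q_words & set(d.lower().split())),
              reverse=True,
          )
          return scored[:k]
      ```

      The retrieved rows are then stuffed into the LLM prompt as context, same as with any other document source.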

  • @manmanzhang7034 · 7 months ago

    Thanks, this is super. In your generate_response(prompt), what is the value for pad_token_id in generated_ids? pad_token_id=tokenizer or pad_token_id=tokenizer.eos_token? I actually tried both of them and neither works. Is there anything I missed? Is there any other parameter after pad_token_id?

    • @AI-Makerspace · 7 months ago · +1

      pad_token_id=tokenizer.eos_token_id is what you'd want!
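
      Concretely, assuming a helper shaped like the notebook's generate_response (the exact signature in the Colab may differ), the sketch below shows where pad_token_id=tokenizer.eos_token_id goes. It needs transformers plus a loaded model and tokenizer to actually run:

      ```python
      def generate_response(prompt, model, tokenizer, max_new_tokens=256):
          # Tokenize the prompt and move the tensors to the model's device.
          inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
          generated_ids = model.generate(
              **inputs,
              max_new_tokens=max_new_tokens,
              # Mistral's tokenizer defines no pad token by default, so
              # generate() warns unless padding is pointed at the EOS id.
              # Note it's the integer .eos_token_id, not the string .eos_token.
              pad_token_id=tokenizer.eos_token_id,
          )
          return tokenizer.decode(generated_ids[0], skip_special_tokens=True)
      ```

      Passing tokenizer.eos_token (a string) fails because generate() expects an integer token id.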

  • @consig1iere294 · 8 months ago

    New to this stuff. Is it possible for me to use my own GPU to train? If yes, how? Thanks!

    • @AI-Makerspace · 8 months ago · +3

      With a combination of quantization strategies (4-bit via bitsandbytes, AWQ, and more) plus LoRA (or other adapter methods), it's more than possible to fine-tune large language models on a consumer GPU!
      If it's your own GPU on-prem, you'll just have to deal with some hardware config that is more streamlined when leveraging compute from cloud providers!
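
      The combination described above is commonly assembled from a bitsandbytes quantization config plus a PEFT LoRA config. A sketch with illustrative (not notebook-exact) values; it requires torch, transformers, peft, and bitsandbytes installed:

      ```python
      def build_qlora_configs():
          # Imports kept inside the function so the sketch stands alone.
          import torch
          from transformers import BitsAndBytesConfig
          from peft import LoraConfig

          # Load base weights in 4-bit to fit a 7B model on consumer VRAM.
          bnb_config = BitsAndBytesConfig(
              load_in_4bit=True,
              bnb_4bit_quant_type="nf4",
              bnb_4bit_compute_dtype=torch.bfloat16,
          )
          # Train only small low-rank adapters on the attention projections.
          lora_config = LoraConfig(
              r=16,
              lora_alpha=32,
              target_modules=["q_proj", "v_proj"],
              lora_dropout=0.05,
              task_type="CAUSAL_LM",
          )
          return bnb_config, lora_config
      ```

      These configs are then passed to AutoModelForCausalLM.from_pretrained (quantization_config=...) and peft's get_peft_model respectively; the quantized base stays frozen and only the adapter weights train.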