LLAMA-3 🦙: EASIEST WAY TO FINE-TUNE ON YOUR DATA 🙌

  • Published 19. 05. 2024
  • Learn how to fine-tune the latest llama3 on your own data with Unsloth.
    🦾 Discord: / discord
    ☕ Buy me a Coffee: ko-fi.com/promptengineering
    |🔴 Patreon: / promptengineering
    💼Consulting: calendly.com/engineerprompt/c...
    📧 Business Contact: engineerprompt@gmail.com
    Become Member: tinyurl.com/y5h28s6h
    💻 Pre-configured localGPT VM: bit.ly/localGPT (use Code: PromptEngineering for 50% off).
    Sign up for Advanced RAG:
    tally.so/r/3y9bb0
    LINKS:
    Announcement: llama.meta.com/llama3/
    Meta Platform: meta.ai
    unsloth.ai/
    huggingface.co/unsloth
    Notebook: tinyurl.com/4ez2rprt
    Github Tutorial: github.com/PromtEngineer/Yout...
    TIMESTAMPS:
    [00:00] Fine-tuning Llama 3
    [00:30] Deep Dive into Fine-Tuning with Unsloth
    [01:28] Training Parameters and Data Preparation
    [05:36] Setting training parameters with Unsloth
    [11:03] Saving and Utilizing Your Fine-Tuned Model
    All Interesting Videos:
    Everything LangChain: • LangChain
    Everything LLM: • Large Language Models
    Everything Midjourney: • MidJourney Tutorials
    AI Image Generation: • AI Image Generation Tu...
  • Science & Technology

Comments • 73

  • @spicer41282
    @spicer41282 1 month ago +7

    Thank you!
    More fine-tuning case studies on Llama 3, please!
    Your presentation on this is much appreciated 🙏

  • @Joe-tk8cx
    @Joe-tk8cx 29 days ago

    Thank you so much for sharing, this was wonderful. I have a question: I am a beginner in the LLM world. Which playlist on your channel should I start from?
    Thank you

  • @lemonsqueeezey
    @lemonsqueeezey 1 month ago

    Thank you so much for this useful video!

  • @hadebeh2588
    @hadebeh2588 27 days ago +1

    Thank you very much for your great video. I ran the notebook but did not manage to find the GGUF files on Hugging Face. I put in my HF token, but that did not work. Do I have to change the code?

  • @mrtwtrn
    @mrtwtrn 8 days ago +1

    Was having such a hard time training LLMs before this, thank you

  • @KleiAliaj
    @KleiAliaj 28 days ago

    Great video, mate. How can I add more than one dataset?

  • @KleiAliaj-us9ip
    @KleiAliaj-us9ip 28 days ago

    Great video.
    But how do I add more than one dataset?

  • @pfifo_fast
    @pfifo_fast 14 days ago +9

    This video lacks a lot of helpful info... Anyone can just open the examples and read them the same way you did. I would have liked extra detail and tips about how to actually do fine-tuning... Some of the topics I am struggling with include: how to load custom data, how to use a different prompt template, how to define validation data, when to use validation data, what learning rates are good, and how to determine how many epochs to run... I'm sorry, buddy, but I have to give this video a thumbs down, as it truly and honestly doesn't provide any useful info that isn't already in the notebook.

  • @scottlewis2653
    @scottlewis2653 17 days ago

    MediaTek's Dimensity chips + Meta's Llama 3 AI = the dream team for on-device intelligence.

  • @agedbytes82
    @agedbytes82 1 month ago +1

    Amazing, thanks!

  • @StephenRayner
    @StephenRayner 29 days ago

    Excellent, thank you

  • @loicbaconnier9150
    @loicbaconnier9150 28 days ago

    Hello,
    impossible to generate GGUF, compilation problem …
    Did you try it?

  • @shahzadiqbal7646
    @shahzadiqbal7646 29 days ago +3

    Can you make a video on how to use a local Llama 3 to understand a large C++ or C# code base?

    • @iCode21
      @iCode21 6 days ago

      Search for Ollama.

  • @metanulski
    @metanulski 29 days ago +1

    One more comment :-). This video is about fine-tuning a model, but there is no real explanation of why. We fine-tune with the standard Alpaca dataset, but there is no explanation of why. It would be great if you could do a follow-up and show us how to create datasets.

  • @VerdonTrigance
    @VerdonTrigance 29 days ago +1

    How do you actually train models? I mean unsupervised training, where I have a set of documents and want to train on them and perhaps capture the author's 'style' or tendencies.

    • @PYETech
      @PYETech 17 days ago +1

      You need to create some process to transfer all the knowledge in these documents into the form of "prompt":"best output". Usually we use a team of agents to do it for us.

  • @danielhanchen
    @danielhanchen 29 days ago

    Fantastic work and always love your videos! :)

  • @SeeFoodDie
    @SeeFoodDie 1 month ago

    Thanks

  • @jannik3475
    @jannik3475 28 days ago

    Is there a way to sort of "brand" Llama 3, so that the model responds to "Who are you?" with a custom answer?
    Thank you!

    • @engineerprompt
      @engineerprompt  28 days ago

      Yes, you can just add that as part of the system message.
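
      For instance, a minimal sketch of that idea at inference time, using the Hugging Face chat template (the persona text is invented for illustration):

        from transformers import AutoTokenizer

        # Any Llama 3 instruct checkpoint works the same way.
        tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

        messages = [
            # The "branding" lives in the system message.
            {"role": "system", "content": "You are Acme Assistant, a fine-tuned Llama 3 model."},
            {"role": "user", "content": "Who are you?"},
        ]

        prompt = tokenizer.apply_chat_template(
            messages, tokenize=False, add_generation_prompt=True
        )
        print(prompt)

      The same system text can also be baked into every training example so the behavior survives fine-tuning.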

  • @pubgkiller2903
    @pubgkiller2903 1 month ago +3

    I have already fine-tuned using Unsloth for testing purposes.

    • @engineerprompt
      @engineerprompt  1 month ago +3

      Great, how are the results looking?

    • @pubgkiller2903
      @pubgkiller2903 1 month ago +2

      @@engineerprompt Great results, and thanks for your support of the AI community

    • @TheIITianExplorer
      @TheIITianExplorer 1 month ago +1

      Bro, can you tell me about Unsloth? How is it different from the basics of using QLoRA?
      Also, I used QLoRA for fine-tuning Llama 2. Can I just paste in the Llama 3 model ID in place of that?
      I hope you understood my question, waiting for your reply 😊

    • @pubgkiller2903
      @pubgkiller2903 29 days ago +1

      @@TheIITianExplorer The Unsloth library is very useful for fine-tuning with the LoRA technique. QLoRA is quantization plus LoRA, so if you use Unsloth you will get the same output, as Unsloth already quantizes the LLMs.
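
      Roughly, the loading step this describes looks like the following in the Unsloth notebook (checkpoint name and hyperparameters are illustrative and may differ across versions):

        from unsloth import FastLanguageModel

        # Load Llama 3 already quantized to 4-bit, so LoRA training behaves like QLoRA.
        model, tokenizer = FastLanguageModel.from_pretrained(
            model_name="unsloth/llama-3-8b-bnb-4bit",  # pre-quantized checkpoint
            max_seq_length=2048,
            load_in_4bit=True,
        )

        # Attach LoRA adapters; only these small low-rank matrices get trained.
        model = FastLanguageModel.get_peft_model(
            model,
            r=16,            # LoRA rank
            lora_alpha=16,
            target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                            "gate_proj", "up_proj", "down_proj"],
        )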

    • @roopad8742
      @roopad8742 23 days ago

      What datasets did you fine tune it on? Have you run any benchmarks?

  • @RodCoelho
    @RodCoelho 29 days ago

    How do you train a model by adding the knowledge from a book, which will likely only have one column of text?

    • @engineerprompt
      @engineerprompt  29 days ago

      In that case, you will have to convert the book into question-answer pairs and format them in a similar fashion. You can use an LLM to do the conversion.
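
      One possible sketch of that conversion, prompting an LLM to draft QA pairs from each passage (the model choice and prompt wording are assumptions, not from the video):

        import json
        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set; any capable LLM works

        def passage_to_qa(passage: str) -> list:
            """Ask an LLM to draft question-answer pairs grounded in one passage."""
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # illustrative model choice
                messages=[{
                    "role": "user",
                    "content": (
                        "Write 3 question-answer pairs that are fully answered by the "
                        "passage below. Reply with only a JSON list of objects with "
                        '"question" and "answer" keys.\n\n' + passage
                    ),
                }],
            )
            # This is a sketch: real code should validate that the reply parses as JSON.
            return json.loads(response.choices[0].message.content)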

  • @kingofutopia
    @kingofutopia 1 month ago

    Awesome, thanks

  • @metanulski
    @metanulski 1 month ago

    Regarding the save options: do I have to delete the parts that I don't want, or how does this work?

    • @engineerprompt
      @engineerprompt  1 month ago

      You can just comment out those parts. Put a # in front of the lines you don't need.
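
      For example, the notebook's export cells look roughly like this (repo names are placeholders; the quantization choice is illustrative), and you disable a path by commenting it out:

        # Keep only the export paths you want; disable the rest with '#'.
        model.save_pretrained("lora_model")            # LoRA adapters, saved locally
        # model.push_to_hub("your-name/lora_model")    # same adapters, pushed to Hugging Face

        # GGUF export for llama.cpp-style runtimes:
        model.save_pretrained_gguf("model", tokenizer, quantization_method="q4_k_m")
        # model.push_to_hub_gguf("your-name/model", tokenizer, quantization_method="q4_k_m")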

  • @georgevideosessions2321

    Have you ever thought about writing a no-code, on-premise fine-tuning app?

  • @researchpaper7440
    @researchpaper7440 1 month ago

    Great, it was quick

  • @dogsmartsmart
    @dogsmartsmart 29 days ago

    Thank you! But can a Mac M3 Max use MLX to fine-tune?

  • @DemiGoodUA
    @DemiGoodUA 1 month ago +1

    Hi, nice video. But how do I fine-tune the model on my codebase?

    • @engineerprompt
      @engineerprompt  29 days ago

      You can use the same setup. Just replace the instruction and input with your code.
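
      As a hypothetical example of such a record, keeping the notebook's Alpaca-style fields but filling them with code (the contents are invented for illustration):

        # One Alpaca-style training record where the response is code.
        example = {
            "instruction": "Write a Python function that reads a key=value config file.",
            "input": "",  # optional extra context, e.g. a snippet from your codebase
            "output": (
                "def parse_config(path):\n"
                "    with open(path) as f:\n"
                "        return dict(line.strip().split('=', 1)\n"
                "                    for line in f if '=' in line)\n"
            ),
        }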

    • @DemiGoodUA
      @DemiGoodUA 29 days ago

      @@engineerprompt How do I divide the code into "question - answer" pairs? Or can I place the whole codebase into a single instruction?

  • @modicool
    @modicool 20 days ago

    One thing I am unsure of is how to transform my data into a training set. I have the target format: the written body of work, but no "instruction" or "input", of course. I've seen some people try to generate it with ChatGPT, but this seems counterintuitive. There must be an established method of actually manipulating data into a training set. Where is that piece?

    • @engineerprompt
      @engineerprompt  20 days ago

      You will need {input, response} pairs in order to fine-tune an instruct model. Unfortunately, there is no way around it unless you are just pre-training the base model.
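
      A minimal sketch of what those pairs feed into, modeled on the Alpaca prompt template used in the notebook (the field names are assumptions about your dataset):

        ALPACA_TEMPLATE = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

        ### Instruction:
        {instruction}

        ### Input:
        {input}

        ### Response:
        {response}"""

        def format_example(row: dict) -> str:
            """Map one {instruction, input, response} record to a single training string."""
            return ALPACA_TEMPLATE.format(
                instruction=row["instruction"],
                input=row.get("input", ""),
                response=row["response"],
            )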

  • @cucciolo182
    @cucciolo182 29 days ago

    Next week Gemini 2 with text-to-video 😂

  • @ashwinsveta
    @ashwinsveta 28 days ago

    We fine

  • @CharlesOkwuagwu
    @CharlesOkwuagwu 1 month ago

    Hi, please, what if we have already downloaded a GGUF file? How do we apply that locally?

    • @engineerprompt
      @engineerprompt  29 days ago +1

      I am not sure if you can do that. Will need to do further research on it.

  • @jackdorsey3504
    @jackdorsey3504 15 days ago

    Sir, we cannot open the Colab website...

  • @user-lz8wv7rp1o
    @user-lz8wv7rp1o 25 days ago

    Great

  • @tamim8540
    @tamim8540 1 month ago

    Hello, can I fine-tune it using the free version of Colab?

  • @metanulski
    @metanulski 1 month ago

    So 60 steps is too low. But what is a good number of steps?

    • @engineerprompt
      @engineerprompt  1 month ago +1

      Usually you want to set epochs to 1 or 2.
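
      In the notebook's TrainingArguments, that amounts to swapping the quick-demo max_steps for num_train_epochs (the epoch count mirrors the reply; other values are illustrative):

        from transformers import TrainingArguments

        args = TrainingArguments(
            output_dir="outputs",
            num_train_epochs=1,          # 1-2 full passes over the dataset
            # max_steps=60,              # the notebook's quick-demo setting, disabled
            per_device_train_batch_size=2,
            gradient_accumulation_steps=4,
            learning_rate=2e-4,
            logging_steps=1,
        )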

    • @metanulski
      @metanulski 29 days ago

      @@engineerprompt So 60 to 120 steps max, since one epoch is 60 steps?

  • @asadurrehman3591
    @asadurrehman3591 29 days ago

    Can I fine-tune using the free Colab GPU?

  • @HoneIrimana
    @HoneIrimana 29 days ago

    They messed up releasing Llama 3, because it believes it is sentient

  • @nikolavukcevic360
    @nikolavukcevic360 13 days ago

    Why didn't you provide any examples of training? It would make this video 10 times better.

  • @Matlockization
    @Matlockization 4 days ago

    It's a free AI from Zuckerberg........ that makes me wonder. And you have to agree to hand over contact info and what else, I wonder?

  • @user-hn7cq5kk5y
    @user-hn7cq5kk5y 1 day ago

    Don't share trash

  • @piffdaddy420
    @piffdaddy420 8 days ago

    You really should just make videos in your own language, because who the fk can even understand what you are saying?