Developing an LLM: Building, Training, Finetuning

  • Published 23 Jul 2024
  • REFERENCES:
    1. Build an LLM from Scratch book: mng.bz/M96o
    2. Build an LLM from Scratch repo: github.com/rasbt/LLMs-from-scratch
    3. Slides: sebastianraschka.com/pdf/slid...
    4. LitGPT: github.com/Lightning-AI/litgpt
    5. TinyLlama pretraining: lightning.ai/lightning-ai/stu...
    DESCRIPTION:
    This video provides an overview of the three stages of developing an LLM: Building, Training, and Finetuning. The focus is on explaining how LLMs work by walking through what happens at each stage.
    OUTLINE:
    00:00 - Using LLMs
    02:50 - The stages of developing an LLM
    05:26 - The dataset
    10:15 - Generating multi-word outputs
    12:30 - Tokenization
    15:35 - Pretraining datasets
    21:53 - LLM architecture
    27:20 - Pretraining
    35:21 - Classification finetuning
    39:48 - Instruction finetuning
    43:06 - Preference finetuning
    46:04 - Evaluating LLMs
    53:59 - Pretraining & finetuning rules of thumb
  • Science & Technology

Comments • 51

  • @guis487
    @guis487 11 days ago +1

    I am your fan, I have most of your books, thanks for this excellent video! Another evaluation metric that I found interesting on another channel was to have the LLMs play chess against each other 10 times.

    • @SebastianRaschka
      @SebastianRaschka  11 days ago

      Hah nice, that's a fun one. How do you evaluate who's the winner, do you use a third LLM for that?

  • @tusharganguli
    @tusharganguli 1 month ago +9

    Your articles and videos have been extremely helpful in understanding how LLMs are built. Building LLM from Scratch and Q and AI are resources that I am presently reading and they provide a hands-on discourse on the conceptual understanding of LLMs. You, Andrej Karpathy and Jay Alammar are shining examples of how learning should be enabled. Thank you!

  • @ZavierBanerjea
    @ZavierBanerjea 10 days ago +1

    What wonderful tech minds: {Sebastian Raschka, Yann LeCun, Andrej Karpathy, ...} who share their work and beautiful ideations with mere mortals like me... Sebastian's teachings are so, so fundamental that they take the fear off my clogged mind... 🙏
    Although I am struggling to build LLMs for specific & niche areas, I am confident of cracking them with great resources like Build a Large Language Model (From Scratch)!!!

  • @box-mt3xv
    @box-mt3xv 1 month ago +11

    The hero of open source

    • @SebastianRaschka
      @SebastianRaschka  1 month ago

      Haha, thanks! I've learned so much thanks to all the amazing people in open source, and I'm very flattered by your comment to potentially be counted as one of them :)

  • @chineduezeofor2481
    @chineduezeofor2481 18 days ago +1

    Thank you Sebastian for your awesome contributions. You're a big inspiration.

  • @tomhense6866
    @tomhense6866 1 month ago +1

    Very nice video, I liked it so much that I preordered your new book directly after watching it (to be fair I have read your blog for some time now).

  • @rachadlakis1
    @rachadlakis1 1 month ago +3

    Thanks for the great knowledge you are sharing.

  • @muthukamalan.m6316
    @muthukamalan.m6316 1 month ago +1

    great content! love it ❤

  • @DataChiller
    @DataChiller 1 month ago +6

    the greatest Liverpool fan ever! ⚽

    • @SebastianRaschka
      @SebastianRaschka  1 month ago +3

      Haha nice, at least one person watched it until that part :D

  • @haqiufreedeal
    @haqiufreedeal 1 month ago +3

    Oh, my lord, my favourite machine learning author is a Liverpool fan.😎

  • @kartiksaini5847
    @kartiksaini5847 1 month ago +1

    Big fan ❤

  • @RobinSunCruiser
    @RobinSunCruiser 1 month ago +1

    Hi, nice videos! One question for my understanding: when talking about embedding dimensions such as 1280 in "gpt2-large", do you mean the size of the number vector encoding the context of a single token, or the number of input tokens? When comparing gpt2-large and Llama 2, the number is the same for the "... embeddings with 1280 tokens".

    • @SebastianRaschka
      @SebastianRaschka  1 month ago

      Good question, the term is often used very broadly and may refer to the input embeddings or the hidden layer sizes in the MLP layers. Here, I meant the size of the vector that each token is embedded into.
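
      To illustrate (a minimal PyTorch sketch, not from the video; the token IDs below are made up): the embedding dimension is the length of the vector that each token ID gets mapped to, independent of how many tokens are in the input.

      ```python
      import torch
      import torch.nn as nn

      vocab_size = 50257   # GPT-2's vocabulary size
      emb_dim = 1280       # embedding dimension of gpt2-large

      tok_emb = nn.Embedding(vocab_size, emb_dim)

      token_ids = torch.tensor([[464, 3290, 318]])   # 1 sequence of 3 tokens (arbitrary IDs)
      vectors = tok_emb(token_ids)
      print(vectors.shape)   # torch.Size([1, 3, 1280]) -> one 1280-dim vector per token
      ```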

  • @tashfeenahmed3526
    @tashfeenahmed3526 1 month ago

    That's great, Dr. Raschka. Hope you are doing well.
    I wish I could download your deep learning book, which was published recently. If there is an open-source link to download it, please mention it in the comments.
    Thanks and regards,
    Researcher at Texas

  • @sahilsharma3267
    @sahilsharma3267 1 month ago +4

    When is your whole book coming out? Eagerly waiting 😅

    • @SebastianRaschka
      @SebastianRaschka  1 month ago +2

      Thanks for your interest in this! It's already available for preorder (both on the publisher's website and Amazon) and, if the production stage goes smoothly, it should be out by the end of August.

  • @alihajikaram8004
    @alihajikaram8004 1 month ago

    Would you make videos about time series and transformers?

  • @bashamsk1288
    @bashamsk1288 1 month ago +1

    In instruction finetuning, do we propagate the loss only on the output text tokens, or for all tokens from start to EOS?

    • @SebastianRaschka
      @SebastianRaschka  1 month ago

      That's a good question. You can do both. By default the loss is computed on all tokens, but more commonly you'd mask the instruction tokens. In my book, I include the token masking as a reader exercise (it's super easy to do). There was also a new research paper a few weeks ago that I discussed in my monthly research write-ups here: magazine.sebastianraschka.com/p/llm-research-insights-instruction

    • @bashamsk1288
      @bashamsk1288 1 month ago

      @@SebastianRaschka
      Thanks for the reply.
      I just have a general question: do we use masking in practice? For example, was masking used during the instruction finetuning of Llama 3, Mistral, or any open-source LLMs? Also, does your book include any chapters on the parallelization of training large language models?

    • @SebastianRaschka
      @SebastianRaschka  1 month ago

      @@bashamsk1288 Masking is commonly used, yes. We implement it as the default strategy in LitGPT. In my book we do both. I can't speak about Llama 3 and Mistral regarding masking, because while these are open-weight models they are not open source. So there's no training code we can look at. My book explains DDP training in the PyTorch appendix, but it's not used in the main chapters because as a requirement all chapters should also work on a laptop to make them accessible to most readers.
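
      To make the masking concrete, here is a minimal PyTorch-style sketch (not code from the book; token IDs and lengths are made up) that masks the instruction/prompt tokens with -100 so only the response tokens contribute to the cross-entropy loss:

      ```python
      import torch
      import torch.nn.functional as F

      prompt_ids   = [12, 87, 5, 901]      # instruction + input tokens (arbitrary IDs)
      response_ids = [44, 7, 318, 50256]   # response tokens + end-of-text token

      input_ids = torch.tensor([prompt_ids + response_ids])
      targets = input_ids.clone()          # the usual shift-by-one is omitted for brevity

      # Mask the prompt portion; -100 is PyTorch's default ignore_index for cross_entropy
      targets[:, :len(prompt_ids)] = -100

      logits = torch.randn(1, input_ids.shape[1], 50257)   # stand-in for model output

      loss = F.cross_entropy(logits.flatten(0, 1), targets.flatten(), ignore_index=-100)
      ```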

  • @timothywcrane
    @timothywcrane 1 month ago

    I'm interested in SLM RAG with Knowledge graph traversal/search for RAG dataset collection and vector-JIT semantic match for hybrid search. Any repos you think I would be interested in?

    • @timothywcrane
      @timothywcrane 1 month ago

      bookmarked, clear and concise.

    • @SebastianRaschka
      @SebastianRaschka  1 month ago

      Unfortunately I don't have a good recommendation here. I have only implemented standard RAGs without knowledge graph traversal.

  • @joisco4394
    @joisco4394 1 month ago

    I've heard about instruct learning, and it sounds similar to how you define preference learning. I have also heard about transfer learning. How would you compare/define those?

    • @SebastianRaschka
      @SebastianRaschka  1 month ago +1

      Transfer learning is basically involved in everything you do when you start out with a pretrained model; we don't really call it out explicitly anymore because it's so common. Instruction finetuning differs from preference finetuning mainly in the loss function: instruction finetuning trains the model to answer queries, and preference finetuning is more about the nuance of how those queries get answered. All preference tuning methods used today (DPO, RLHF+PPO, KTO, etc.) expect you to have done instruction finetuning on your model before you preference finetune (see the loss sketch after this thread).

    • @joisco4394
      @joisco4394 1 month ago +1

      @@SebastianRaschka Thanks for explaining it. I need to do a lot more research :p
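
      As a concrete illustration of the different loss mentioned above, here is a minimal sketch of the DPO preference loss (one of the methods named in the reply; not code from the video, and the argument names are only illustrative). It takes the summed log-probabilities of a preferred ("chosen") and a dispreferred ("rejected") response under the policy being trained and under a frozen reference model:

      ```python
      import torch.nn.functional as F

      def dpo_loss(policy_chosen_logp, policy_rejected_logp,
                   ref_chosen_logp, ref_rejected_logp, beta=0.1):
          # Each argument: summed log-probability of a full response, shape [batch]
          chosen_margin = policy_chosen_logp - ref_chosen_logp
          rejected_margin = policy_rejected_logp - ref_rejected_logp
          # Push the policy to prefer chosen responses over rejected ones
          return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()
      ```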

  • @KumR
    @KumR 1 month ago +1

    Great video! Now that LLMs are so powerful, will regular machine learning & deep learning slowly vanish?

    • @SebastianRaschka
      @SebastianRaschka  1 month ago +1

      Great question. I do think that special-purpose ML solutions still have, and will continue to have, their place, the same way ML didn't make certain more traditional statistics-based models obsolete. Regarding deep learning... I'd say an LLM is itself a deep learning model. But yeah, almost everything in deep learning nowadays is either a diffusion model, a transformer-based model (vision transformers and most LLMs), or a state space model.

  • @ArbaazBeg
    @ArbaazBeg 29 days ago

    Should we give a prompt to the LLM when finetuning for classification with a last-layer modification, or directly pass the input to the LLM as in DeBERTa?

    • @SebastianRaschka
      @SebastianRaschka  29 days ago +1

      Thanks for the comment, could you explain a bit more what you mean by passing the input directly?

    • @ArbaazBeg
      @ArbaazBeg 23 days ago +1

      @@SebastianRaschka Hey, sorry for the bad wording. I meant: should chat formats like Alpaca etc. be applied, or do we give the text as-is to the LLM for classification?

    • @SebastianRaschka
      @SebastianRaschka  23 days ago +1

      @@ArbaazBeg Oh, I see now. And yes, you can. I wanted to add an example and performance comparison for that to the GitHub repo (github.com/rasbt/LLMs-from-scratch) at some point. For that, I wanted to first instruction-finetune the model on a few more spam classification instructions and examples though.
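
      For context, a minimal sketch of the last-layer modification discussed above (not code from the book; the toy backbone and its attribute names are stand-ins for a real pretrained GPT-style model): the language-modeling output head is replaced with a small classification head, and the logits at the last token position serve as the classification output.

      ```python
      import torch
      import torch.nn as nn

      vocab_size, emb_dim, num_classes = 50257, 128, 2   # e.g. spam vs. not spam

      class ToyGPT(nn.Module):
          """Stand-in for a pretrained GPT-style backbone."""
          def __init__(self):
              super().__init__()
              self.tok_emb = nn.Embedding(vocab_size, emb_dim)
              self.backbone = nn.TransformerEncoderLayer(emb_dim, nhead=4, batch_first=True)
              self.out_head = nn.Linear(emb_dim, vocab_size)   # language-modeling head

          def forward(self, x):
              return self.out_head(self.backbone(self.tok_emb(x)))

      model = ToyGPT()   # in practice, load pretrained weights here

      # Classification finetuning: swap the LM head for a num_classes-sized head
      model.out_head = nn.Linear(emb_dim, num_classes)

      # Use the logits at the last token position as the classification output
      input_ids = torch.randint(0, vocab_size, (1, 8))
      class_logits = model(input_ids)[:, -1, :]   # shape: [1, num_classes]
      ```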

  • @mushinart
    @mushinart 1 month ago +1

    I'm sold, I'm buying your book... Would love to chat with you sometime if possible.

    • @SebastianRaschka
      @SebastianRaschka  24 days ago +1

      Thanks, hope you are liking it! Are you going to SciPy in July by chance, or maybe NeurIPS at the end of the year?

    • @mushinart
      @mushinart 24 days ago

      @@SebastianRaschka Unfortunately not, but I'd like to have a Zoom/Google Meet chat with you if possible.

  • @MadnessAI8X
    @MadnessAI8X 1 month ago +1

    What we are seeking not only fuzzing code

  • @ramprasadchauhan7
    @ramprasadchauhan7 1 month ago

    Hello sir, please also make a version with JavaScript.

  • @kumarutsav5161
    @kumarutsav5161 1 month ago +1

    🤌

    • @SebastianRaschka
      @SebastianRaschka  1 month ago

      I take that as a compliment!? 😅😊

    • @kumarutsav5161
      @kumarutsav5161 1 month ago +1

      @@SebastianRaschka Yes, yes! It was supposed to be a compliment. You are doing great work with your teaching materials :).

  • @redthunder6183
    @redthunder6183 1 month ago

    Easier said than done, unless you've got a GPU supercomputer lying around lol

    • @SebastianRaschka
      @SebastianRaschka  1 month ago

      Ha, I should mention that all chapters in my book run on laptops, too. It was a personal goal for me that everything should work even without a GPU. The instruction finetuning takes ~30 min on a CPU to get reasonable results (granted, the same code takes 1.24 min on an A100).