LLM2LLM: Synthetic Data for Fine-Tuning (UC Berkeley)

  • Published 12 Sep 2024
  • LLM2LLM: Can LLMs teach other LLMs new knowledge? How is it done? How well do these AI systems perform? Can LLMs generate high-quality datasets for fine-tuning other (smaller) LLMs? For edge devices?
    All of these questions are answered in the latest video on synthetic data generation and synthetic data augmentation (a minimal sketch of the idea follows below the description).
    #ai #airesearch #newtechnology
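
A minimal, hypothetical sketch of the LLM2LLM-style loop the video discusses: fine-tune a student model, collect the seed examples it still gets wrong, and have a teacher model generate extra synthetic variants of those hard examples. The `train_student`, `student_predict`, and `teacher_augment` callables are stand-ins for whatever models and APIs are actually used; they are assumptions for illustration, not the paper's code.

```python
# Hedged sketch of an LLM2LLM-style iterative augmentation loop.
# The teacher/student interfaces are hypothetical stand-ins.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Example:
    prompt: str
    answer: str


# Hypothetical model interfaces: a student that can be fine-tuned and
# queried, and a teacher that produces variations of hard examples.
StudentTrainFn = Callable[[List[Example]], None]
StudentPredictFn = Callable[[str], str]
TeacherAugmentFn = Callable[[Example, int], List[Example]]


def llm2llm_loop(
    seed_data: List[Example],
    train_student: StudentTrainFn,
    student_predict: StudentPredictFn,
    teacher_augment: TeacherAugmentFn,
    iterations: int = 3,
    variants_per_error: int = 2,
) -> List[Example]:
    """Iteratively grow the training set from the student's mistakes."""
    train_set = list(seed_data)
    for _ in range(iterations):
        # 1. Fine-tune the student on the current (seed + synthetic) data.
        train_student(train_set)
        # 2. Find seed examples the student still gets wrong.
        errors = [
            ex for ex in seed_data
            if student_predict(ex.prompt).strip() != ex.answer.strip()
        ]
        if not errors:
            break
        # 3. Ask the teacher for synthetic examples similar to the errors.
        for ex in errors:
            train_set.extend(teacher_augment(ex, variants_per_error))
    return train_set
```

The key design choice in this kind of loop is that new synthetic data is generated only from the examples the student fails on, rather than augmenting the whole dataset uniformly.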

Comments • 6

  • @Karl-Asger • 5 months ago

    Very excited to see a video from you on this topic 🎉

  • @scitechtalktv9742 • 5 months ago • +1

    Very interesting!
    I wonder: if I wanted an LLM specialized in, for example, physics knowledge, how could I use this method to build such a specialized LLM?

  • @TomM-p3o • 5 months ago

    One glaring omission from this list is a check of the data's veracity. Or did I miss that?
    An easy way to do it would be to feed the answers from the originating LLM back to itself and have them evaluated for accuracy and truthfulness.
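
    A minimal sketch of the self-check this comment describes, assuming a hypothetical `ask_llm` wrapper around whatever completion API is actually used; it is not part of LLM2LLM itself, and the prompt wording is illustrative only.

    ```python
    # Hedged sketch: feed each generated answer back to the same model and
    # keep only the pairs it judges to be accurate.
    from typing import Callable, List, Tuple


    def filter_by_self_check(
        pairs: List[Tuple[str, str]],      # (question, generated answer)
        ask_llm: Callable[[str], str],     # hypothetical completion call
    ) -> List[Tuple[str, str]]:
        """Keep only pairs the originating model judges to be accurate."""
        kept = []
        for question, answer in pairs:
            verdict = ask_llm(
                "Question: " + question + "\n"
                "Proposed answer: " + answer + "\n"
                "Is the proposed answer accurate and truthful? Reply YES or NO."
            )
            if verdict.strip().upper().startswith("YES"):
                kept.append((question, answer))
        return kept
    ```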

  • @sadaisystems • 5 months ago

    Thanks for the video! What software do you use to create these beautiful presentations?

  • @TomM-p3o • 5 months ago

    Is anybody using LLMs to process original source data, preparing/optimizing it for input?

  • @kevon217 • 5 months ago

    little grasshopper* LLM