Evaluate LLMs with Language Model Evaluation Harness

  • Published 11 May 2024
  • In this tutorial, I walk through evaluating large language models (LLMs) with the versatile Evaluation Harness tool. Explore how to rigorously test LLMs across diverse datasets and benchmarks, including HellaSwag, TruthfulQA, Winogrande, and more. This video features Meta AI's Llama 3 model and demonstrates step by step how to run evaluations directly in a Colab notebook, offering practical insights into AI model assessment.
    Don't forget to like, comment, and subscribe for more insights into the world of AI!
    GitHub Repo: github.com/AIAnytime/Eval-LLMs
    Join this channel to get access to perks:
    / @aianytime
    To further support the channel, you can contribute via the following methods:
    Bitcoin Address: 32zhmo5T9jvu8gJDGW3LTuKBM1KPMHoCsW
    UPI: sonu1000raw@ybl
    #openai #llm #ai
  • Science & Technology
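Under the hood, the harness scores multiple-choice benchmarks such as HellaSwag and Winogrande by asking the model for the log-likelihood of each candidate continuation given the context and picking the highest-scoring one. Below is a minimal sketch of that selection step, with a toy scorer standing in for a real model (`toy_loglikelihood` and `pick_answer` are illustrative names, not harness APIs):

```python
def toy_loglikelihood(context, continuation):
    """Stand-in for a real model: favors continuations that reuse context words.

    A real harness run would instead sum the model's per-token log-probs
    for the continuation conditioned on the context.
    """
    ctx_words = set(context.lower().split())
    cont_words = continuation.lower().split()
    overlap = sum(1 for w in cont_words if w in ctx_words)
    # Longer continuations accumulate more negative log-prob; overlap raises the score.
    return -len(cont_words) + 2.0 * overlap

def pick_answer(context, choices):
    """Harness-style multiple-choice scoring: argmax of log-likelihood over choices."""
    scores = [toy_loglikelihood(context, c) for c in choices]
    return max(range(len(choices)), key=lambda i: scores[i])

context = "The chef put the cake in the oven"
choices = ["and baked the cake", "and flew to the moon", "then sang loudly"]
print(pick_answer(context, choices))  # -> 0 ("and baked the cake")
```

In practice the harness is typically driven through its `lm_eval` command-line entry point (e.g. `lm_eval --model hf --tasks hellaswag`), which is the workflow shown in the video.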

Comments • 8

  • @Techonsapevole • 17 days ago

    Thanks, great LLM tips

  • @bdoriandasilva • 5 days ago

    nice! thank you for the video!

  • @TheIITianExplorer • 24 days ago +2

    I love you man, ❤
    You are awesome, keep uploading 😊

  • @joserfjunior8940 • 24 days ago

    I LIKE THIS... nice job man !

  • @muhammedajmalg6426 • 23 days ago

    nice work

  • @krishnapriya9881 • 23 days ago

    PackageNotFoundError: No package metadata was found for bitsandbytes. I am getting this error even though bitsandbytes is installed, and my CUDA version is 12.1. Please help me with this.
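    The `PackageNotFoundError` above is raised by Python's `importlib.metadata` when a distribution's metadata is not visible to the running interpreter — in Colab this commonly happens when a package is installed into a different environment than the one the kernel is using, or before a runtime restart. A minimal sketch for diagnosing this (`has_metadata` is an illustrative helper, not part of bitsandbytes):

    ```python
    from importlib import metadata

    def has_metadata(pkg):
        """Return True if the package's distribution metadata is visible
        to the current interpreter."""
        try:
            metadata.version(pkg)
            return True
        except metadata.PackageNotFoundError:
            return False

    # If this prints False for "bitsandbytes", reinstall it in the same
    # environment as the kernel and restart the runtime.
    print(has_metadata("bitsandbytes"))
    ```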

  • @saumyajaiswal6585 • 22 days ago

    What about LangSmith? It does the same thing, right?

  • @araara2142 • 24 days ago +1

    I need rag chatbot part 2 video, please release, my exam is coming