How to RUN GEMMA with LANGCHAIN and OLLAMA Locally

  • Published: 29 Aug 2024
  • In this video, I'll show you how to use Gemma with LangChain and Ollama. First, we'll take a look at Ollama. Next, we'll learn how to use an Ollama model with LangChain. Finally, we'll cover how to work with an Ollama chat model. A minimal code sketch is included after this description.
    00:01 Intro
    00:50 Installing Ollama
    02:34 LangChain & Ollama
    04:31 Working with LLMs
    06:00 Working with Chat Models
    🔗 Notebook: github.com/Tir...
    🚀 Medium: / tirendazacademy
    🚀 X: x.com/tirendaz...
    🚀 Linkedin: / tirendaz-academy
    ▶️ LangChain Tutorials:
    • LangChain Tutorials
    ▶️ Generative AI Tutorials:
    • Generative AI Tutorials
    ▶️ LLMs Tutorials:
    • LLMs Tutorials
    ▶️ HuggingFace Tutorials:
    • HuggingFace Tutorials ...
    🔥 Thanks for watching. Don't forget to subscribe, like the video, and leave a comment.
    #ai #gemma #generativeai
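
Below is a minimal sketch of the workflow covered in the video. It assumes the langchain-ollama integration package (pip install langchain-ollama) and that the Gemma model has already been pulled locally (ollama pull gemma:2b) with the Ollama server running; the notebook linked above may use different imports or model names.

```python
# Minimal sketch: using Gemma via Ollama with LangChain.
# Assumes `pip install langchain-ollama` and `ollama pull gemma:2b` were run,
# and that the local Ollama server is up.
from langchain_ollama import OllamaLLM, ChatOllama
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.prompts import ChatPromptTemplate

# LLM interface: pass a string, get a string back.
llm = OllamaLLM(model="gemma:2b")
print(llm.invoke("Explain what a large language model is in one sentence."))

# Chat model interface: pass a list of messages, get a message back.
chat = ChatOllama(model="gemma:2b")
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="Give three use cases for Gemma."),
]
print(chat.invoke(messages).content)

# Chat models also compose with prompt templates into chains.
prompt = ChatPromptTemplate.from_messages(
    [("system", "You answer concisely."), ("human", "{question}")]
)
chain = prompt | chat
print(chain.invoke({"question": "What is LangChain?"}).content)
```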

Comments • 5

  • @amoahs7779 • 2 months ago

    Thanks so much for this informative tutorial. What keyboard are you using? It sounds very nice 😊

  • @sagurockstar5633 • 6 months ago • +2

    Very informative, friend, thank you.

  • @MavrickMania • 6 months ago

    Hi, I have an error while running ollama run gemma:2b --> Error: error loading model. TIA!

    • @TirendazAI • 6 months ago

      Hi, before loading the model, you need to start Ollama on your computer.
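
A quick way to verify that the local Ollama server is up before loading a model (a hedged sketch; Ollama's HTTP API listens on localhost:11434 by default):

```python
# Check that the Ollama server is reachable before running `ollama run gemma:2b`.
# If this fails, start the server first (e.g. run `ollama serve` in a terminal,
# or launch the Ollama desktop app), then retry the model command.
import urllib.request

try:
    with urllib.request.urlopen("http://localhost:11434", timeout=3) as resp:
        print(resp.read().decode())  # typically prints "Ollama is running"
except OSError:
    print("Ollama server is not reachable -- start it and try again.")
```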