Getting Started with Ollama - the Docker of AI!

  • Uploaded 11 Sep 2024
  • Chris explores how Ollama could be the Docker of AI. In this video he gives a tutorial on getting started with Ollama and running models locally, such as mistral-7b and llama-2-7b. He looks at how Ollama operates and how closely it mirrors Docker, including the concept of the model library. Chris also shows how to create customized models, how to interact with the built-in API server, and how to use the JavaScript ollama library to work with the models from Node.js and Bun. By the end of this tutorial you'll have a solid understanding of Ollama and its importance in AI engineering.
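The description mentions interacting with Ollama's built-in server from JavaScript. As a rough sketch (the video's exact code isn't reproduced here), Ollama serves a REST API on `localhost:11434` by default; `buildGenerateRequest` below is a hypothetical helper, not something from the video, that assembles a `fetch()` call against the `/api/generate` endpoint:

```javascript
// Hypothetical helper (for illustration only): builds a fetch() request
// for Ollama's local REST API, which listens on port 11434 by default.
function buildGenerateRequest(model, prompt) {
  return {
    url: "http://localhost:11434/api/generate",
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      // stream: false asks the server for one complete JSON response
      // instead of a stream of partial chunks.
      body: JSON.stringify({ model, prompt, stream: false }),
    },
  };
}

// Usage, assuming `ollama serve` is running and the model has been pulled:
//   const { url, options } = buildGenerateRequest("mistral", "Why is the sky blue?");
//   const res = await fetch(url, options);
//   const { response } = await res.json();
```

The same call can also go through the ollama npm package shown in the video, which wraps this API for Node.js and Bun.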

Comments • 20

  • @bharatarora9036
    @bharatarora9036 7 months ago +3

    Thank you @Chris for sharing this. Very informative.

  • @sbudaland
    @sbudaland 6 months ago +2

    You are a great teacher and you speak tech very well in such a way that it encourages one to watch the whole video

  • @NicolaDeCoppi
    @NicolaDeCoppi 7 months ago +5

    Great video Chris! You're one of the smartest people I know!!!

    • @chrishayuk
      @chrishayuk  7 months ago +2

      Too kind and right back atcha

  • @sollywelch
    @sollywelch 7 months ago +3

    Great video, really enjoyed this! Thanks Chris

    • @chrishayuk
      @chrishayuk  7 months ago +2

      Thank you! It wasn't the video I intended to record that day; glad it worked well and you enjoyed it.

  • @mechwarrior83
    @mechwarrior83 6 months ago +1

    What a great little underrated channel. I love how you present information in such a clear manner. Instant subscribe!

    • @chrishayuk
      @chrishayuk  6 months ago +1

      Thank you, glad you enjoyed it. Underrated is perfectly fine with me; the channel is really about organising my thoughts, and I just feel lucky other people find it useful.

  • @crabbypaddy5549
    @crabbypaddy5549 6 months ago

    I installed llama2:70b. Wow, it is super good, but it is heavy on my machine. It uses up 50 GB of RAM and runs my 5090X at 70 percent, and still it nearly uses up all of my 3090 GPU. It is a bit slower than the 7b, but the answers are so much more complex and nuanced. I'm blown away.

  • @zscoder
    @zscoder 7 months ago +1

    Curious how we could set up a use case for project context prompts?
    Thanks for this awesome video, subbed 🙌

  • @iamdaddy962
    @iamdaddy962 7 months ago +6

    Really wish your channel got more attention compared to the L4 "influencers"... seems like YouTube "programmers" prefer entry-level sensationalist memelords )):

    • @chrishayuk
      @chrishayuk  7 months ago +4

      I’m okay with the level of attention it gets, the channel is my tech therapy. I just feel very lucky that other people don’t mind watching my therapy sessions

    • @iamdaddy962
      @iamdaddy962 7 months ago +4

      @@chrishayuk I appreciate all the REAL senior-level wisdom you've bestowed on the internet!! Thinking about how the techlead still gets hundreds of thousands of views sometimes makes me have an aneurysm haha

    • @chrishayuk
      @chrishayuk  7 months ago +2

      Very very kind of you

  • @jocool7370
    @jocool7370 2 months ago

    Thanks for making this video. I've just tried Ollama. It gave wrong answers to 3 of my first 4 (and only) prompts. Uninstalled it.

    • @chrishayuk
      @chrishayuk  1 month ago

      These things depend on the query and the model.

    • @jocool7370
      @jocool7370 1 month ago

      @@chrishayuk Then they're useless?

    • @chrishayuk
      @chrishayuk  1 month ago +1

      @@jocool7370 Nope, you've learned something that model can't do. That's the true path to knowledge and understanding. Now go learn more things it can and can't do, and compare with other models.