Using Ollama To Build a FULLY LOCAL "ChatGPT Clone"

  • Published 29 May 2024
  • In this video, I show you how to use Ollama to build an entirely local, open-source version of ChatGPT from scratch. Plus, you can run many models simultaneously using Ollama, which opens up a world of possibilities.
    Enjoy :)
    Join My Newsletter for Regular AI Updates 👇🏼
    www.matthewberman.com
    Need AI Consulting? ✅
    forwardfuture.ai/
    Rent a GPU (MassedCompute) 🚀
    bit.ly/matthew-berman-youtube
    USE CODE "MatthewBerman" for 50% discount
    My Links 🔗
    👉🏻 Subscribe: / @matthew_berman
    👉🏻 Twitter: / matthewberman
    👉🏻 Discord: / discord
    👉🏻 Patreon: / matthewberman
    Media/Sponsorship Inquiries 📈
    bit.ly/44TC45V
    Links:
    Code From Video - gist.github.com/mberman84/a12...
    Ollama - ollama.ai/
  • Science & Technology

Comments • 390

  • @MakilHeru
    @MakilHeru 6 months ago +14

    This is awesome! I'd love to see more. I feel like this can become something pretty robust with enough time.

  • @xdasdaasdasd4787
    @xdasdaasdasd4787 6 months ago +13

    Ollama series! This was a great starting video❤ thank you for all your hard work

  • @rakly3473
    @rakly3473 6 months ago +13

    Every time I need something, you present a tool doing exactly that. Thanks!

  • @avi7278
    @avi7278 6 months ago +195

    I'm building my own personal AI assistant but every time I start something a week later something better drops. My god, this is impossible. I've got to think better about my abstractions to make some of this stuff more drop-in ready. That might be an interesting video (or series of videos) for you Matthew, if not likely a bit advanced for your audience.

    • @LeonardLay
      @LeonardLay 6 months ago +32

      I'm in the same boat. The tech changes so quickly, my ideas become antiquated as soon as I get something working 😆

    • @matthew_berman
      @matthew_berman 6 months ago +17

      The nice thing is if you stick with using OpenAI API, that seems to be the standard

    • @LeonardLay
      @LeonardLay 6 months ago +3

      @@matthew_berman I have an Azure account and I'm trying to use it to act as a server for the different models rather than hosting them locally. I'm having so much trouble doing that because the models that are included with Azure aren't the ones I want to try out. Do you have any advice?

    • @DihelsonMendonca
      @DihelsonMendonca 6 months ago +3

      You're lucky. I still have to learn Python. But since ChatGPT is developing too fast, when I learn, my knowledge would be obsolete, because just now we can create a personal assistant using GPTs very easily, do you agree ? 🙏👍

    • @free_thinker4958
      @free_thinker4958 6 months ago +3

      @@DihelsonMendonca Me too; once I focus on something, I later find that something else exists with higher quality than the previous one hhhh

  • @snuffinperl8059
    @snuffinperl8059 2 months ago +1

    You created an incredible video, precise, concise, and I couldn't have asked for more!

  • @mossonthetree
    @mossonthetree 3 months ago

    This is so cool! And the fact that they give you a REST endpoint running on a port on the machine is great.

  • @aldoyh
    @aldoyh 6 months ago +15

    Thank you so much Matthew, this is so incredible!

  • @user-kw3sp7lb5c
    @user-kw3sp7lb5c 4 months ago

    Ollama is incredible! It runs LLMs fast. And I saw the videos on your channel about AutoGen and agent building, and found what I was looking for. I love your channel and your teaching manner. Thanks Matthew!

  • @free_thinker4958
    @free_thinker4958 6 months ago +4

    This is the type of straightforward high quality content ❤

  • @DB-Barrelmaker
    @DB-Barrelmaker 6 months ago +1

    This was done so perfectly! Every part swollen with meaning

  • @zef3k
    @zef3k 6 months ago +4

    Wow, this makes it so extremely accessible. Your video also shows how accessible interacting with these AIs is in general. I haven't programmed much since I was younger, but have been wanting to, and this seems like a great jumping-off point! Now I just need to wait until the Windows version comes out.

  • @elierh442
    @elierh442 6 months ago +60

    😮 Please create a video integrating Ollama with autogen!

  • @mashleyelliott4668
    @mashleyelliott4668 5 months ago

    Thanks! This concise video is exactly what I was looking for to help me take next steps with Ollama!

  • @agntdrake
    @agntdrake 6 months ago +3

    Really great video! The easiest way to get history is to take the `context` which was given in the response and just pass it back as the 'context' field in the request.
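The `context` trick described above can be sketched as follows. This is a minimal, hedged sketch against Ollama's `/api/generate` endpoint; the helper names `next_request` and `generate` and the default localhost URL are illustrative assumptions, not code from the video.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def next_request(model, prompt, previous_response=None):
    """Build a /api/generate payload, carrying forward the `context`
    token array from the previous response so the model keeps history."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    if previous_response and "context" in previous_response:
        payload["context"] = previous_response["context"]
    return payload

def generate(payload):
    """POST the payload to the local Ollama server and decode the reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return json.loads(urllib.request.urlopen(req).read())

# Usage (requires a running `ollama serve` with the model pulled):
# first = generate(next_request("mistral", "My name is Sam."))
# follow = generate(next_request("mistral", "What is my name?", first))
```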

  • @dustincoker5233
    @dustincoker5233 6 months ago +1

    This is so cool! I'd love to see a deeper dive.

  • @donaldparkerii
    @donaldparkerii 6 months ago

    Another great video. I was able to achieve the same thing in LM Studio on Mac, running multiple models by spawning instances from the CLI and incrementing the port, then passing different llm_config objects to the specific assistant agents in my autogen app.

  • @fenix20075
    @fenix20075 6 months ago +5

    About PrivateGPT: I found the accuracy can be improved if the database is changed from DuckDB to Elasticsearch.

  • @xdasdaasdasd4787
    @xdasdaasdasd4787 6 months ago

    You are a godsend. Thank you.
    I've been using it through WSL on Windows.

  • @taeyangoh7305
    @taeyangoh7305 6 months ago +16

    Yes! It would be really interesting to see how autogen + Ollama goes!😍

    • @BibopGresta1
      @BibopGresta1 6 months ago +2

      I'm interested, too! I wonder if Autogen is obsolete now that OpenAI unleashed the kraken with the GPTs! What do you think?

    • @alextrebek5237
      @alextrebek5237 6 months ago

      @@BibopGresta1 I think you have yourself a popular follow-up video, given the comments asking about autogen 😉

    • @Gatrehs
      @Gatrehs 6 months ago

      @@BibopGresta1 Unlikely. GPTs are more of a single custom agent instead of a set of agents working together.

  • @takione5991
    @takione5991 6 months ago

    Great video! Simple, clear, and concise. Thanks for that. An idea for a continuation (from a complete novice on AI): how to start simple training on the model to keep improving it on some topic we'd like?

  • @srikanthg_in
    @srikanthg_in 1 month ago

    Wow. That's the best 10 minutes I have spent today. Great learning.

  • @wurstelei1356
    @wurstelei1356 6 months ago +3

    Thanks for this nice video. I would like to see a video about MemGPT implementing the history function instead of just pasting everything in front of a new prompt.
    A good idea could be: PrivateGPT, loaded with Hugging Face model cards, is passed the prompt with the task of picking the best model for that prompt. Then the prompt is passed via Ollama to that model, with MemGPT on top of each model. That might actually be the most powerful local solution right now.

  • @AlGordon
    @AlGordon 6 months ago +5

    Nice video! You definitely picked up a new subscriber here. I’d be interested in seeing how to build out a RAG solution with Ollama, and also how to make it run in parallel for multiple concurrent requests.

  • @nickdnj
    @nickdnj 6 months ago +2

    Great video. Thank you! I would love to see a deep dive into using Ollama with AutoGen, having each agent use its own model.

  • @bersace
    @bersace 6 months ago

    You are so passionate. And you are right to be. Thanks!

  • @greeffer
    @greeffer 6 months ago

    Great content bro, you're my new favorite YouTuber!

  • @WaefreBeorn
    @WaefreBeorn 6 months ago +8

    This will let us build with open-source models fast. I love the simultaneous part; please make more tutorials on this once it hits Windows without WSL.

    • @AaronTurnerBlessed
      @AaronTurnerBlessed 6 months ago

      Agree... this Ollama really looks promising, Matthew!! Lightweight and simple. More plz!!

    • @chrismachabee3128
      @chrismachabee3128 6 months ago

      I am at WSL now; join me. WSL = Windows Subsystem for Linux. It is at Microsoft Ignite; the title is "How to install Linux on Windows with WSL". So, you are on your own now. I have several computers requiring updating. Good luck.

    • @WaefreBeorn
      @WaefreBeorn 6 months ago

      @@chrismachabee3128 you are an AI generated comment. Please follow the terms of service on YouTube for automated accounts, creator of this bot.

  • @MrAcarlo
    @MrAcarlo 6 months ago

    The video on Ollama is really beautiful. Among other things, I would also start doing benchmarks on the various text-generation user interfaces. Ollama allows me, for example, to use my laptop with a small GTX 1060 and Dolphin at incredible speed; the same laptop struggles with Oobabooga. However, after some interactions the model goes into "overload", as if the RAM is no longer enough. In short, this comment is a too-long thank you for your excellent work. And a hope for more videos about Ollama and local models.

  • @michaelwallace4757
    @michaelwallace4757 6 months ago

    Integrating Ollama and Canopy would be a great video. Having that local retrieval would have many use cases.

  • @prof969chaos
    @prof969chaos 6 months ago +3

    Very interesting, would love to see how well it works with autogen or any of the other multi-agent libraries. Looks like you can import any gguf as well.

  • @scitechtalktv9742
    @scitechtalktv9742 6 months ago +31

    Building an AutoGen application using Ollama would be wonderful! Example: one of the agents is a coder, implemented by an LLM specialized in coding, etc.

    • @SushilSingh2005
      @SushilSingh2005 6 months ago +4

      I was about to write this myself.

    • @27dhan
      @27dhan 6 months ago +1

      haha me too!

    • @EduardsRuzga
      @EduardsRuzga 6 months ago +1

      I started writing the same comment, and then saw yours :D

    • @MungeParty
      @MungeParty 6 months ago +3

      I'm an autogen application using ollama, I was going to write this comment too.

    • @EduardsRuzga
      @EduardsRuzga 6 months ago

      @@MungeParty Oh, nice to meet you! Why is an autogen ollama app interested in this? :D

  • @gbengaomoyeni4
    @gbengaomoyeni4 6 months ago

    @Matthew_berman: You are very brilliant! I have been watching Ollama videos but none of them taught how to use it with an API or structured it the way you did. Keep it coming bro. Thank you so much. God bless!

  • @chorton53
    @chorton53 1 month ago

    This was a fantastic video ! Cheers for that !

  • @the.flatlander
    @the.flatlander 6 months ago +5

    This is just great and easy as well! Could you show us how to train these models with PDFs and Websites?

  • @MrBravano
    @MrBravano 4 months ago

    Love your videos, much respect and appreciation for all the work you do. I do have one humble suggestion: if you could hide your image just enough to see what you have typed, for instance at 8:49, it would be great. I know that most YouTube instructors do this, not sure why, but please take that into consideration. Either way, thank you for all you bring.

  • @GutenTagLP
    @GutenTagLP 6 months ago +4

    Great video, just a quick note: you actually do not need to send all the previous messages and responses as the prompt. The API response contains an array of numbers called the context; just send that in the data of the next request.

  • @photorealm
    @photorealm 2 months ago

    Awesome video. They have a Windows version now (3-30-24), and it installed and ran perfectly.

  • @jeanfrancoisponcet9537
    @jeanfrancoisponcet9537 6 months ago

    I commented about it a few weeks ago on one of your videos! Indeed, very useful for AutoGen (but also for LangChain).

  • @avosc5316
    @avosc5316 18 days ago +1

    DUDE! This was an awesome tutorial!

  • @ubranch
    @ubranch 6 months ago +8

    00:01 Building Open-Source ChatGPT using Ollama
    01:27 Ollama and Mistral enable running multiple models simultaneously with blazing fast speed.
    02:50 Running multiple models simultaneously with Open-Source ChatGPT is mind-blowing.
    04:14 Building Open-Source ChatGPT From Scratch
    05:40 Creating a new Python file called main.py to generate a completion.
    07:00 Adjusting the code to get the desired response and adding a Gradio front end.
    08:35 Built an open-source ChatGPT from scratch using Mistral
    09:56 The conversation history is appended to the prompt in order to generate a response.
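For reference, the history handling described at 09:56 can be sketched as a small helper that replays prior turns ahead of the new message. This is a hedged reconstruction, not the exact code from the video (see the gist linked in the description); the function name `build_prompt` is illustrative.

```python
def build_prompt(history, new_message):
    """Naively replay prior (user, assistant) turns ahead of the new
    message, so the model sees the whole conversation each time."""
    lines = []
    for user_msg, assistant_msg in history:
        lines.append(f"User: {user_msg}")
        lines.append(f"Assistant: {assistant_msg}")
    lines.append(f"User: {new_message}")
    lines.append("Assistant:")
    return "\n".join(lines)

# Each reply grows the prompt, so long chats eventually overflow the
# model's context window; Ollama's `context` field is the leaner option.
```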

  • @LerrodSmalls
    @LerrodSmalls 6 months ago +5

    This was so Dope! - I have been using Ollama for a while, testing multiple models, and because of my lack of coding expertise, I had no understanding that it could be coded this way. I would like to see if you can use Ollama, memGPT, and Autogen, all working together 100% locally to choose the best model for a problem or question, call the model and get the result, and then permanently remember what is important about the conversation... I Double Dare You. ;)

  • @PeterPain
    @PeterPain 6 months ago +1

    Absolutely the best video yet. Ollama looks amazing.
    Now show me what options there are for doing similar things in Android apps :)

  • @pedroverde1674
    @pedroverde1674 3 months ago

    Many thanks. It's really useful and really easy, because you explain extremely well.

  • @Jose-cd1eg
    @Jose-cd1eg 6 months ago

    Amazing job!!! Everyone wants more!!

  • @jkbullitt8986
    @jkbullitt8986 6 months ago

    Awesome work!!!

  • @chenle02
    @chenle02 6 months ago

    So mind blowing~! Thanks Dude~!

  • @rogerbruce2896
    @rogerbruce2896 6 months ago

    Another cool video! I hope they come up with a Windows version soon :) Definitely want the deeper dive. Ty

  • @carrolte1
    @carrolte1 6 months ago +2

    I think the only thing it needs now is the ability to monitor a project folder so you can reference a set of documents. Then I could ask it to help with my specific project and not waste time and tokens feeding it code.

  • @gru8299
    @gru8299 2 months ago

    Thank you very much! 🤝

  • @michaelbrown8289
    @michaelbrown8289 6 months ago

    This is so over my head! But I'm following! Very cool!

  • @crobinso2010
    @crobinso2010 1 month ago +1

    Hi Matt, as someone who watches every video, I'm feeling overwhelmed and am wondering if you could do a "take a step back" episode every once in a while -- where you go over previous content from a broader perspective. For example, what is the difference between LM Studio, Ollama, Jan, AnythingLLM etc and where should someone start? Or go over the "gotchas" and frustrations in the comment sections to highlight those little errors and solutions commentators found but may have been missed by the casual viewer. It would be a review of old content, but with updated fixes, comparisons, and general perspective/advice. Thanks!

  • @kumargupta7149
    @kumargupta7149 24 days ago

    Thanks, I found it. Great help.

  • @Artificialintelligenceo
    @Artificialintelligenceo 6 months ago

    Great vid!

  • @slavrgo
    @slavrgo 6 months ago +2

    Please make a guide on setting it up on a virtual machine and creating an API so we can use it in our apps (even with Make, for example).

  • @BillyBobDingledorf
    @BillyBobDingledorf 2 months ago

    The Orca2 language model got the killers question right. When you first ask the question you may disagree with its answer, but it justifies itself and does correctly answer the question as asked.

  • @chrisBruner
    @chrisBruner 6 months ago +1

    Wow! Jaw dropping video!

  • @finnews_
    @finnews_ 2 months ago

    I am not a coder, but somehow I achieved this build. Million thanks!!
    It's a bit slow, but good enough to showcase to friends.
    By any chance can we host this live? If yes, then how? Kindly make a video on that!!!
    Million thanks again😀🙏

  • @EffortlessEthan
    @EffortlessEthan 6 months ago

    I hope this works as well when they release it for Windows! Switching between models so fast like that is crazy!

  • @NOTNOTJON
    @NOTNOTJON 6 months ago

    And boom goes the dynamite.
    I'll bet integrating this with autogen isn't hard. Heck, you could just ask autogen to rewrite its own interaction settings to use the various models.
    The interesting bit here would be asking autogen or the main dispatch model to find the best model to answer with, based on the context of the prompt.
    As always, great vid!

  • @tintin_teaches
    @tintin_teaches 6 months ago

    Please make more videos on these topics in detail.

  • @vadud3
    @vadud3 6 months ago +3

    This is amazing. I live in the terminal and I do Python. Perfect!

  • @thecoffeejesus
    @thecoffeejesus 6 months ago +1

    This is it. This is officially the beginning of Open Source AGI

  • @yngeneer
    @yngeneer 6 months ago +1

    Super video! If you could make something deeper about memory management, it would be lovely.

  • @mfah2
    @mfah2 6 months ago +1

    Also remarkable: on a cell phone, Ollama runs in UserLAnd (Linux under Android)!! At least it performs OK on a phone with 12GB RAM (Galaxy S20 5G).

  • @mbrochh82
    @mbrochh82 6 months ago

    Loved this, Matthew! Right to the point, super hands-on. This looks like an awesome project!

  • @modolief
    @modolief 6 months ago

    Thanks for talking about fully local engines. Do you have a video with hardware recommendations for this?

  • @tanmayjuneja6128
    @tanmayjuneja6128 6 months ago +1

    Hey Matthew!
    Great video. Please help me with this: would hosting fine-tuned open-source models on SageMaker cost less than the GPT-4 API? Is there a comparison anywhere on any forum, Reddit, etc.? I want to fine-tune a model on my data, and I am thinking of going with GPT-3.5-turbo fine-tuning, but it's really expensive at scale. I want to know how fine-tuned open-source models compare on price (assuming we get good efficiency at the desired task after fine-tuning).
    Would really appreciate any thoughts on this. Thanks a lot!

  • @alamjim6117
    @alamjim6117 2 months ago

    Great. Thank you very much.

  • @renierdelacruz4652
    @renierdelacruz4652 6 months ago

    Oh my god, what an amazing video.

  • @user-hd7wd4nu1o
    @user-hd7wd4nu1o 6 months ago +1

    Thanks!

  • @urglik
    @urglik 1 month ago

    On a related note, I'm using Ollama to run TinyDolphin on my Dell E7240, and I think that's cool AF. But that's not why I decided to write a message. I just found out that if you press the Windows key and H there's a built-in speech-to-text engine in Windows 10 and 11, and it even works in the command line. So using TinyDolphin I can at least talk to the AI, though it can't talk back to me, and that's OK.

  • @ujjwalchetan4907
    @ujjwalchetan4907 6 months ago

    This video is awesome❤

  • @shuntera
    @shuntera 4 months ago +1

    So many models, we need a model to recommend which model to use in a given situation.

  • @abdulazizalmass
    @abdulazizalmass 6 months ago +1

    Thank you for the info. Kindly let us know the specs of your PC. I get a very slow response on my MacBook Air with 8GB memory and an M1 CPU.

  • @padonker
    @padonker 6 months ago +4

    Can we combine this with fine-tuning where we first add a number of our own documents and then ask questions? NB I'd like to add the documents just once so that between sessions I can ask the model about these documents.

    • @AlperYilmaz1
      @AlperYilmaz1 6 months ago +2

      Probably you meant RAG. And this should be performed with a Modelfile: just describe the location of your files, create a new model with "ollama create", and then run it with "ollama run".
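For readers following this thread: a Modelfile customizes a base model (system prompt, sampling parameters), but it does not by itself index local documents; retrieval over your own files needs a separate RAG pipeline. A minimal sketch of the `ollama create`/`ollama run` flow, where the model name `mynotes` and the settings are illustrative assumptions:

```shell
# Modelfile: bake a system prompt and parameters into a custom model.
cat > Modelfile <<'EOF'
FROM mistral
PARAMETER temperature 0.2
SYSTEM You are an assistant that answers questions about my project notes.
EOF

ollama create mynotes -f Modelfile   # build the custom model
ollama run mynotes                   # chat with it
```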

    • @jason6569
      @jason6569 6 months ago +1

      Yeah, this is also what I want to do, but I'm on day 2 of googling after a friend asked a question about AI. I went down the rabbit hole and found these videos. I don't know what this means or how to structure documents. Very interesting stuff though, and a series on this would be great!

  • @ChuckBaggett
    @ChuckBaggett 2 months ago

    I question the funniness of "What do you get when you mix hot water and salt? A boiling solution."

  • @piyushlamsoge6007
    @piyushlamsoge6007 6 months ago +1

    Hi Matthew,
    You are doing amazing work teaching everyone about the real power of AI with LLMs.
    I have a question: what should we do if we want to build something that works with any kind of document, like the models in this video do? Is that possible, and if we build it, is there a way to deploy it to production as a website or application?
    If there is, please make a video on it.
    I'm looking forward to it.
    Thank you!!!!!

  • @ryutenchi
    @ryutenchi 6 months ago +1

    Can you take a deep dive into using Modelfiles to make your own model for specialty tasks? Where can we find things like token limits?

  • @WesTheWizard
    @WesTheWizard 6 months ago +1

    Are the models that you can pull quantized or should we still get our models from TheBloke?

  • @Techonsapevole
    @Techonsapevole 6 months ago

    Wow, fantastic. Open-source models and the ecosystem get more powerful every day.

  • @dr.mikeybee
    @dr.mikeybee 5 months ago

    Nice. Now I understand why chatbots only allow a few prompts before they start over: they fill up their context window. BTW, it would be great to add RAG with documents and Google search. There's also a way to access Ollama from Siri. That would be ideal.

  • @JinKee
    @JinKee 6 months ago

    4:50 get him to say "It's-a me! Mario!"

  • @hy3na-xyz
    @hy3na-xyz 6 months ago

    Can't wait for the autogen expert video!!!

  • @renierdelacruz4652
    @renierdelacruz4652 6 months ago

    Like other subscribers, I think you could create a video integrating Ollama and AutoGen where the conversation is stored in a database, and another video creating an AI personal assistant.

  • @orkutmuratyilmaz
    @orkutmuratyilmaz 6 months ago +2

    Ollama FTW! ✌

  • @Equilibrier
    @Equilibrier 6 months ago

    Hi, what are the minimal specs for some of the most popular models? Is there any model that can run on 4GB RAM and a slower 2-core CPU, like an i3?

  • @jayfraxtea
    @jayfraxtea 6 months ago +1

    Boy, Matthew is so inspiring. Thank you for ruining my weekend plan. I'd be interested in the same matter as @padonker: how can we train with our own data?

  • @michalchik
    @michalchik 6 months ago +1

    I'm starting out at this. Are these models only runnable with their set pretraining, or can we pre-train them on our own material? I have documents and old textbooks that I would like the models to absorb into their parameters, so I can emphasize certain types of knowledge relevant to the research I want to do.

    • @JinKee
      @JinKee 6 months ago

      GPT4All has "LocalDocs" support to train on your documents

  • @quebono100
    @quebono100 6 months ago +1

    R.I.P. OpenAI. I tested out Ollama before your video; I was also amazed by it.

  • @ikjb8561
    @ikjb8561 6 months ago

    Ollama is cool if you are looking to build a personal assistant on your own PC. If you try to hit a model with multiple requests, be prepared to wait in line.

  • @fungilation
    @fungilation 6 months ago +3

    Since Ollama doesn't run on Windows 11 yet, would LM Studio be the best alternative? How do the two compare? For example, does LM Studio also do hot-swapping between models and queue pending requests to multiple models sequentially?

  • @_Apep_
    @_Apep_ 3 months ago

    Congratulations, great video. I wonder if I could install a model similar to Claude 2 (obviously, if there's a similar one I could install on Ollama) and train it with documents (doc or pdf in Spanish) to create a web chat for questions and answers.

  • @BetterThanTV888
    @BetterThanTV888 6 months ago

    Thanks for making it approachable. How would this work with Docker? And a portable NVMe drive?

  • @mordordew5706
    @mordordew5706 6 months ago +1

    Regarding the memory issue, can you integrate this with MemGPT? Could you please make a video on that?

  • @carrolte1
    @carrolte1 6 months ago

    @4:56, I am just gonna call that a fail. The response should have been, "Itsa me! Mario!"

  • @samarbid13
    @samarbid13 6 months ago +1

    More of Ollama!

  • @jawadmansoor6064
    @jawadmansoor6064 6 months ago +1

    Is there an API endpoint that I can use as a replacement for OpenAI's API?
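On this question: newer Ollama builds expose OpenAI-compatible routes under `/v1`, so existing OpenAI clients can be pointed at `http://localhost:11434/v1`. A minimal standard-library sketch; the helper name `chat_request` is illustrative, and the `/v1` routes assume a reasonably recent Ollama release.

```python
import json
import urllib.request

OLLAMA_OPENAI_URL = "http://localhost:11434/v1/chat/completions"

def chat_request(model, messages):
    """Build an OpenAI-style chat-completions request aimed at local Ollama."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        OLLAMA_OPENAI_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer ollama",  # any token; Ollama ignores it
        },
    )

# Usage (requires a running `ollama serve`):
# req = chat_request("mistral", [{"role": "user", "content": "Hello!"}])
# reply = json.loads(urllib.request.urlopen(req).read())
# print(reply["choices"][0]["message"]["content"])
```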

  • @renierdelacruz4652
    @renierdelacruz4652 6 months ago

    For Linux users: I had an issue running the script directly from VS Code, so I ran it in a terminal and it's working now. The command is "python main.py".

  • @HyperUpscale
    @HyperUpscale 6 months ago

    Finally, a good video🥳

  • @udaynj
    @udaynj 4 months ago

    Do most AI models depend on OpenAI GPT behind the scenes? Or are they completely independent, built and trained separately? It seems to me that a lot of the new open-source LLMs actually use OpenAI GPT behind the scenes and depend on OpenAI. Is there any open-source model that is completely independent of OpenAI, LLaMA, etc.?