Easily Run LOCAL Open-Source LLMs for Free

  • Published 11 Jul 2024
  • Run locally hosted open-source LLMs for free.
    LM Studio helps you download and run private models from Hugging Face in a no-code environment; it's a solid free ChatGPT alternative.
    _______ 👇 Links 👇 _______
    lmstudio.ai/
    🤝 Discord: / discord
    💼 LinkedIn: / reda-marzouk-rpa
    📸 Instagram: / redamarzouk.rpa
    🤖 YouTube: / @redamarzouk
    www.automation-campus.com/
    _______ 👇 Content👇 _______
    00:00 Introduction to LM Studio Update
    00:11 Features of LM Studio
    00:41 New Version 0.2 Features
    01:25 Getting Started with LM Studio
    01:55 Model Exploration and Download
    04:17 Model Management
    04:42 Using AI Chat in LM Studio
    05:44 Advanced Configurations and Parameters
    07:37 Model Inspector and Practical Use Cases
    08:00 Playground: Multi-Model Comparison
    10:11 JSON Mode Explanation
    12:20 Local Server and API Usage (see the sketch after this list)
    13:56 Conclusion and Future Content
  • Science & Technology
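As a companion to the "Local Server and API Usage" chapter above: LM Studio can expose whatever model you load through a local, OpenAI-compatible HTTP server (by default at http://localhost:1234/v1). Below is a minimal sketch of calling it from Python; the model name is a placeholder, and the port assumes LM Studio's default.

```python
import requests

# LM Studio's local server speaks the OpenAI chat-completions format.
# Assumes you started the server in LM Studio and it listens on the
# default port 1234; adjust the URL if you configured it differently.
URL = "http://localhost:1234/v1/chat/completions"

payload = {
    # "local-model" is a placeholder; with a single model loaded the
    # server typically accepts any name, otherwise use the identifier
    # shown in the LM Studio app.
    "model": "local-model",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what LM Studio does in one sentence."},
    ],
    "temperature": 0.7,
}

response = requests.post(URL, json=payload, timeout=120)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```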

Comments • 19

  • @AJ5 • 3 months ago +1

    Is LM Studio portable? Can I copy it to a USB disk and run it on any machine?

    • @redamarzouk • 3 months ago

      I think you can download it onto any USB disk and install it on any machine, as long as you have admin rights over that machine.

    • @chipcookie707 • 3 months ago

      What sketchy reasons do people have for wanting a completely private LLM?

    • @redamarzouk • 3 months ago +1

      @@chipcookie707 Nothing sketchy about privacy. A private LLM guarantees your data won't be used to train new models (say you're building an app and want to keep the code private).
      You will have more control over the output (more configuration options).
      Better performance with the right hardware.
      A specialised LLM will yield better results (there are LLMs just for coding, for example).
      And data security: if those companies are breached, your data won't be there.
      I could go on about how good private LLMs are, but I'll stop here.

    • @AJ5 • 3 months ago +1

      Wow Reda, that is such a long, comprehensive list of reasons. It honestly deserves its own video instead of being buried in this reply section.

  • @GrantLylick • 3 months ago +1

    Loading multiple models is nice, but you need a video card with more than 8 GB of VRAM. You can load two smaller models, but that obviously won't be as good as loading two 7-billion-parameter models.

    • @redamarzouk • 3 months ago

      That is true; decent hardware is the main consideration.
      But with a good enough NVIDIA card, not only will you be able to run really good models like Mistral 0.2 for daily use, you'll also be able to benchmark good models. I see that as an investment, tbh.
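To put rough numbers on the VRAM point above, here is a back-of-envelope sketch. It is illustrative only: the overhead factor is an assumption, and real usage also depends on context length and KV cache.

```python
# Rough VRAM estimate for holding a quantized model's weights locally.
# Illustrative only: the 1.2 overhead factor is an assumption, and real
# usage also grows with context length and the KV cache.

def vram_estimate_gb(params_billion: float, bits_per_weight: float,
                     overhead: float = 1.2) -> float:
    """Approximate VRAM (GB) needed to hold the model weights."""
    bytes_for_weights = params_billion * 1e9 * bits_per_weight / 8
    return bytes_for_weights * overhead / 1e9

# A 7B model at 4-bit quantization comes out to ~4.2 GB by this estimate,
# so two of them already exceed 8 GB before the KV cache is counted.
print(f"One 7B @ 4-bit: ~{vram_estimate_gb(7, 4):.1f} GB")
print(f"Two 7B @ 4-bit: ~{2 * vram_estimate_gb(7, 4):.1f} GB")
```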

  • @ArisimoV • a month ago +1

    Can you use this for a self-operating PC? Thanks

    • @redamarzouk • a month ago

      Believe me, I tried, but my NVIDIA RTX 3050 (4 GB) simply can't handle filming and running Llava at the same time.
      Hopefully I'll upgrade my setup soon and be able to do it.

    • @ArisimoV • a month ago

      So it is possible; it's just a matter of programming and PC specs.

  • @handler007 • 3 months ago +2

    Too bad it doesn't let you reference documents in its responses.

    • @redamarzouk • 3 months ago

      You're right about that; I thought about it while I was filming.
      But they seem to have a good team in place, so maybe they'll introduce it in the next release.

    • @GrantLylick • 3 months ago

      I thought it had embedding. I will have to recheck that.

    • @GrantLylick • 3 months ago +1

      Yes, it does. I loaded Llava and asked it to describe a picture.

    • @GrantLylick • 3 months ago

      If you want something different, you could use AnythingLLM. It has an easier embedding setup that even lets you load a web page for it to look at. So use whatever you want to load and serve a model, then just use AnythingLLM to chat with it.

    • @redamarzouk • 3 months ago +1

      @@GrantLylick Thank you for pointing that out.
      I have to try that myself; I always go through Hugging Face Spaces to see how good Llava is getting.
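Since the thread above mentions asking Llava to describe a picture through LM Studio, here is a hedged sketch of how a vision request is usually made against an OpenAI-compatible endpoint: the image goes in as a base64 data URL. It assumes the local server is running on the default port with a vision-capable model loaded; "llava" is a placeholder identifier.

```python
import base64
import requests

# Sketch: ask a locally served Llava-style model to describe an image.
# Assumes LM Studio's server is on the default port with a vision-capable
# model loaded; "llava" is a placeholder for the name your server reports.
with open("picture.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "model": "llava",  # placeholder identifier
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this picture."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
}

r = requests.post("http://localhost:1234/v1/chat/completions",
                  json=payload, timeout=300)
r.raise_for_status()
print(r.json()["choices"][0]["message"]["content"])
```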

  • @gregisaacson660 • 3 months ago +1

    I'm using Oobabooga; is this better? What would you say? I will try it out.

    • @redamarzouk • 3 months ago

      Oobabooga is a really good option for controlling every configuration of your model.
      But I find LM Studio more user-friendly, and you can also start a server for multiple models at the same time, which helps you create highly specialized agents for specific fields (see the sketch below).
      So I prefer LM Studio now.
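To illustrate the multi-model server point above, a minimal sketch that sends the same prompt to two models through one local endpoint, in the spirit of the Playground comparison from the video. It assumes both models are loaded in LM Studio and that the server routes requests by the "model" field; the identifiers are placeholders.

```python
import requests

URL = "http://localhost:1234/v1/chat/completions"

# Placeholder identifiers: use the names your LM Studio instance reports
# for the models you have loaded side by side.
MODELS = ["mistral-7b-instruct", "codellama-7b-instruct"]

question = "Write a one-line Python function that reverses a string."

# Send the same prompt to each loaded model and compare the answers.
for model in MODELS:
    resp = requests.post(URL, json={
        "model": model,
        "messages": [{"role": "user", "content": question}],
    }, timeout=120)
    resp.raise_for_status()
    print(f"--- {model} ---")
    print(resp.json()["choices"][0]["message"]["content"])
```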