Llama 3 FULLY LOCAL on your Machine | Run Llama3 locally

  • Uploaded: 11 Jul 2024
  • FULLY Local Llama 3, on your machine.
    Run Llama 3-8B in a local server and integrate it inside your AI Agent project.
    _______ 👇 Links 👇 _______
    🤝 Discord: / discord
    💼 𝗟𝗶𝗻𝗸𝗲𝗱𝗜𝗻: / reda-marzouk-rpa
    📸 𝗜𝗻𝘀𝘁𝗮𝗴𝗿𝗮𝗺: / redamarzouk.rpa
    🤖 𝗬𝗼𝘂𝗧𝘂𝗯𝗲: / @redamarzouk
    LMStudio: lmstudio.ai/
    Ollama: ollama.com/
    www.automation-campus.com/
    Introduction to Llama 3: 00:00
    Run Ollama: 00:36
    LM Studio Llama 3: 01:25
    Run Llama 3 Local Server: 03:05
  • Science & Technology
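Once LM Studio or Ollama is serving a model (the "Run Llama 3 Local Server" step), it exposes an OpenAI-compatible chat endpoint over HTTP that an AI agent project can call. A minimal sketch, using only the standard library — the base URL, model name, and helper names here are assumptions (LM Studio defaults to port 1234, Ollama to 11434); adjust them for your own setup:

```python
import json
import urllib.request

# Assumed base URL: LM Studio's default local server; for Ollama's
# OpenAI-compatible endpoint, use http://localhost:11434/v1 instead.
BASE_URL = "http://localhost:1234/v1"

def build_chat_payload(prompt, model="llama3", temperature=0.7):
    """Build an OpenAI-style chat-completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def ask_llama(prompt):
    """Send the prompt to the local server and return the reply text."""
    req = urllib.request.Request(
        BASE_URL + "/chat/completions",
        data=json.dumps(build_chat_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the endpoint mimics the OpenAI API shape, the same payload works against either backend, so an agent framework can be pointed at the local server just by swapping the base URL.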

Comments • 7

  • @user-es3rp4lz6m
    2 months ago +1

    Could you provide a tutorial on how to use Llama 3 to extract specific information from invoices? 😊

    • @redamarzouk
      2 months ago +1

      I'm sure Llama 3 70B will be able to do so, but I'm not so sure about the locally runnable Llama 3-8B. I'll try both and add the video to the backlog.

    • @user-es3rp4lz6m
      2 months ago

      @redamarzouk Thank you, this is going to be incredibly helpful! 😊

  • @rheavictor7
    2 months ago +1

    That's amazing, learning a lot from you, man!
    Do you have any AutoGen/CrewAI project that works as a PDF researcher/extractor?
    I'm trying to build one, but I'm having a hard time coordinating things in both AutoGen and CrewAI...

    • @redamarzouk
      2 months ago

      Thank you!
      An extractor/researcher would make a great video, I'll add it to the backlog!

  • @spiffingbooks2903
    2 months ago

    Yes, but what kind of machine specs does one need for this to be a practical possibility?

    • @redamarzouk
      2 months ago

      I don't know the exact minimum required to run it, but at least 16 GB of RAM is recommended if you don't have a GPU.
      With a GPU that has 4 to 8 GB of VRAM, many small models become easily runnable, including Llama 3.
      One last thing: you can always find Q1 to Q4 quantized versions that are less resource-demanding and can run on almost any modern machine.
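A rough back-of-the-envelope sketch of why quantization helps with the specs question above: weight memory is approximately parameter count times bits per weight. This ignores the KV cache and runtime overhead, so treat the numbers as a lower bound, not exact requirements.

```python
def approx_weight_size_gb(params_billion: float, bits_per_weight: int) -> float:
    """Weight-only memory estimate in GB; ignores KV cache and runtime overhead."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# Llama 3-8B at 16-bit weights: about 16 GB; at 4-bit (Q4): about 4 GB,
# which is why Q4 variants fit on machines with modest RAM or VRAM.
print(approx_weight_size_gb(8, 16))  # 16.0
print(approx_weight_size_gb(8, 4))   # 4.0
```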