Make an Offline GPT Voice Assistant in Python

  • Uploaded 27 Jun 2024
  • We make our own offline (local) virtual voice assistant using Python that lets you control your computer and ask it anything!
    This is yet another great example of how open source software can be incredible for anyone. Without having to rely on any API or sending our data to any servers we can make a pretty solid offline virtual voice assistant for free!
    Windows File Path: C:\Users\{username}\.cache\whisper
    Mac File Path: /Users/{username}/.cache/whisper
    Commands:
    curl -o encoder.json openaipublic.blob.core.window...
    curl -o vocab.pbe openaipublic.blob.core.window...
    GPT4All: gpt4all.io/index.html
    Code: github.com/Jalsemgeest/Python...
    Join my Discord at / discord
    Thanks for watching! ❤️
    Timestamps:
    0:00 Intro
    0:39 Speech Recognition
    3:30 Offline OpenAI Whisper
    12:00 Text to Speech
    14:20 Local LLM
    23:04 Outro
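The assistant described above chains speech recognition (Whisper), a local LLM (GPT4All), and text-to-speech, then routes some phrases to computer-control actions. The dispatch step can be sketched as below; the phrases and action names are hypothetical examples, not the actual ones from the linked repo:

```python
# Minimal sketch of the assistant's dispatch logic: decide whether a
# transcribed phrase is a computer-control command or a question to send
# to the local LLM. Phrases and action names are illustrative only.

COMMANDS = {
    "open browser": "launch_browser",
    "take screenshot": "screenshot",
    "volume up": "volume_up",
}

def route(transcript: str):
    """Return ('command', action) for a known phrase, else ('chat', text)."""
    text = transcript.lower().strip().rstrip(".!?")
    for phrase, action in COMMANDS.items():
        if phrase in text:
            return ("command", action)
    # Anything unrecognized would be handed to the local LLM (e.g. GPT4All).
    return ("chat", text)

if __name__ == "__main__":
    print(route("Please take screenshot"))  # ('command', 'screenshot')
    print(route("What is the capital of France?"))
```

In the full pipeline, the `transcript` argument would come from Whisper's transcription of microphone audio, and the `("chat", text)` branch would feed the prompt to the local model.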

Comments • 40

  • @iyas5398 · 19 days ago

    If you had a problem downloading the vocab file: it's vocab.bpe, not vocab.pbe. Just change this in the curl command and it should work just fine.

    • @jakeeh · 19 days ago

      Thanks for the comment!

  • @joshuashepherd7189 · 4 months ago +1

    Heyo! Awesome video! Thanks so much for doing this, man. So insightful.

    • @jakeeh · 4 months ago

      Appreciate it! Really happy you enjoyed it :)

  • @mohanpremathilake915 · 2 months ago +1

    Thank you for the great content

    • @jakeeh · 2 months ago +1

      Thank you! ❤️ / Jake

  • @EduGuti9000 · 4 months ago +1

    Awesome video! I am mainly a GNU/Linux user and have recently also been using MS Windows, so maybe this is a silly question: are you running this in WSL2? If so, is it easy to use the microphone and speakers with Python in WSL2?

    • @jakeeh · 4 months ago +1

      Thank you!
      I'm running this on Windows. You might need to tinker around a bit more on GNU/Linux to get microphone input working, but it shouldn't be too bad. I've seen a number of cases where Linux users were using microphone input.
      Happy coding :)

  • @jacklee4691 · a month ago +1

    Thanks for the awesome video! Just curious: if I want to make the offline Python text-to-speech more realistic with a model (like one from Hugging Face), is that possible?

    • @jakeeh · a month ago

      Yeah, it should be possible! There are some great Ollama models available now too :)

  • @MyStuffWH · 4 months ago +2

    Just out of interest: do you have a GPU in your machine (laptop/desktop)? That would give some context to the performance you are getting.

    • @jakeeh · 4 months ago +2

      Great question! I have an AMD Radeon RX 6800, so certainly not top of the line. Also, in my experience a lot of GPU-accelerated things have only worked with NVIDIA, with AMD being a 'TODO' on the developers' side :)

  • @vidadeperros9763 · 2 months ago +1

    Hi Jake. Where do you import pyautogui from?

    • @jakeeh · 2 months ago +1

      Hey, you need to install it using pip.
      Run “python -m pip install pyautogui”, then you can just import pyautogui in your file.
      Make sure you use the same Python when running your file as when you install with pip.
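The "same Python" advice above can be checked from inside the interpreter itself; this is a generic snippet, not code from the video:

```python
import sys

# sys.executable is the path of the interpreter running this script.
# Passing that exact path to pip guarantees the package is installed
# where this interpreter can import it, e.g.:
#   /path/to/python -m pip install pyautogui
print(sys.executable)
```

If `python` and `pip` on your PATH point at different installations, `python -m pip install …` (rather than bare `pip install …`) avoids the mismatch entirely.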

  • @joshuashepherd7189 · 4 months ago +1

    4:36 I think it's Video RAM: basically the RAM available on whichever GPU you're using for inference.

    • @jakeeh · 4 months ago +1

      Yeah I think you're right. Thanks! :)

  • @inout3394 · 2 months ago

    Thx

    • @jakeeh · 2 months ago

      Thanks for your comment!

  • @adish233 · 4 months ago +1

    As part of my engineering project, I want to make a similar voice assistant specifically for agriculture that answers farmers' queries and also gives crop suggestions based on local conditions. Can you please guide me through the project?

    • @jakeeh · 4 months ago +4

      Wow! That sounds like a great project! I'm not sure I could guide you through the project, but you may want to try to find a machine learning model that is more specialized on plants and agriculture. You could even look into making one yourself if you have enough training data! :)

    • @joannezhu101 · a month ago

      @jakeeh I am so curious to know how to train a domain-knowledge-only model; that would be brilliant. There must be a way of doing it. I am also learning AI for fun outside of my day job.

  • @snapfacts41 · 19 days ago

    I think Gemma could be a better option than this because I don't think it would have the token restrictions that GPT4All had, and it's pretty easy to install using Ollama. Even with an integrated GPU from five years ago, I was able to get a comfortable experience with the LLM.

    • @jakeeh · 14 days ago +1

      Yeah, at the time Ollama wasn't easily available for Windows. There are definitely some better options available now.

  • @wethraccoon9480 · a month ago

    Please do more advanced versions of this. I am a web dev and would love to start integrating my own voice assistant; I'm just a bit of a newbie to AI.

    • @jakeeh · a month ago

      Thanks for the comment! Yeah, I'd be happy to do some more stuff on this. I think the new version would use Ollama, although I'd like to go over how to train your own model too.

  • @MyStuffWH · 4 months ago +1

    It is (very) clear you do not have a technical AI background, but you inspired me to try and make my own local assistant. Thanks!

    • @jakeeh · 4 months ago +1

      Oh, absolutely! I certainly have a technical background, but I am far from having much experience beyond scratching the surface of AI. Happy you felt inspired to give things a shot! You've got to start somewhere! :)

  • @lucygelz · a month ago

    Is this possible on Linux, and if so, can you make a tutorial or link a text guide to something similar?

    • @jakeeh · a month ago +1

      Yes, absolutely, you should be able to do this on Linux as well. You could take a look at medium.com/@vndee.huynh/build-your-own-voice-assistant-and-run-it-locally-whisper-ollama-bark-c80e6f815cba, which also uses Ollama, probably a great option nowadays too :)

    • @lucygelz · a month ago

      @jakeeh thank you

  • @yashikant5819 · 3 months ago

    Can you combine it with a frontend?

    • @jakeeh · 3 months ago

      You could absolutely make a front end for this so you could interact with it through a GUI and/or voice.

  • @prabhatadvait6171 · a month ago

    I'm using Linux. Can you tell me how to do it on Linux?

    • @jakeeh · a month ago

      Which part are you having trouble with in Linux? :)

  • @fnice1971 · 2 months ago

    I mostly used LM Studio, as it loads multiple models; with 3x 24 GB GPUs (70 GB of VRAM) you can run something like 10 models at the same time. It's more polished than GPT4All, but both work and are free.

    • @jakeeh · 2 months ago +2

      Oh that sounds great! I certainly don't have those specs, but that does sound great nonetheless.
      Thanks for your comment! :)

    • @joannezhu101 · a month ago

      @jakeeh I wonder if it's worth comparing those (like Ollama, LM Studio, or just the way you've shown in the video), though I don't quite get how Ollama or LM Studio works (I thought GGUF was the only way to run models locally offline; I didn't know what is inside Ollama). Do they really help speed things up?

    • @jakeeh · a month ago

      I think it really depends on the hardware of your machine. If they can utilize your GPU then they can likely greatly improve the performance. Although I'm not an expert on them :)
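For context on the Ollama question above: Ollama runs a local server (default http://localhost:11434) and exposes an HTTP API, so your script talks to it over localhost rather than loading a GGUF file directly. The snippet below only builds the JSON request body for its `/api/generate` endpoint; actually sending it requires a running Ollama server, and the model name is just an example:

```python
import json

# Build the JSON body for Ollama's /api/generate endpoint. Sending it
# (e.g. with urllib or requests) requires an Ollama server running
# locally, so only the payload construction is shown here.
def ollama_payload(model: str, prompt: str) -> str:
    return json.dumps({
        "model": model,    # a model pulled beforehand, e.g. via `ollama pull gemma`
        "prompt": prompt,
        "stream": False,   # ask for one complete response instead of chunks
    })

body = ollama_payload("gemma", "What is a GGUF file?")
print(body)
```

As for speed: Ollama and LM Studio still run GGUF-style quantized models under the hood; the gains mostly come from GPU offloading and model management, which matches the point about hardware in the reply above.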