Fastest speech to text transcription, 100% offline - Whisper.cpp | Zero latency
- added 25 May 2024
- Today we will see how to download and use Whisper offline.
Whisper from openai: github.com/openai/whisper
Whisper.cpp: github.com/ggerganov/whisper.cpp
Models: github.com/ggerganov/whisper....
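For reference, the basic flow covered in the video can be sketched as the following config-like setup commands, based on the standard whisper.cpp README steps (the exact model and sample file shown on screen may differ):

```shell
# Clone and enter the whisper.cpp repository.
git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp

# Download a ggml model; base.en is a reasonable starting point
# (larger models are more accurate but slower).
bash ./models/download-ggml-model.sh base.en

# Build the project (requires make and a C/C++ toolchain).
make

# Transcribe a 16 kHz mono WAV file, fully offline.
./main -m models/ggml-base.en.bin -f samples/jfk.wav
```

On Apple Silicon Macs the build can use Metal for GPU acceleration, which is what gives the near-real-time speeds shown in the video.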
- - - - - - - - - - - - - - - - - - - - - -
Follow us on social networks:
Instagram: / codewithbro_
---
Support us on patreon: / codewithbro
#whisper #openai #whispercpp #speechtotext #transcription #programming #softwaredeveloper #developer - Science & Technology
If you have any questions please feel free to drop them below!
Please don't forget to like and subscribe for more interesting content like this🔥
Great video bro. Keep it up 👍
Thanks, really appreciate it 🙌🏾
thank you for the amazing content!
Always a pleasure🎉
love it !!!
Glad you love it... Please, don't forget to like and subscribe for more interesting content like this one🔥😎
Amazing! What GPU are you running? Or is it on CPU?
Running on an M1 Mac with an 8-core GPU; I believe whisper.cpp makes use of Metal on macOS.
Your English is excellent. May I make a suggestion: Python is not pronounced pie-ton but pie-thon, with the 'th' being the same as the 'th' in 'this'.
Appreciate the correction!
Can you pair this offline Whisper with a local LLM, say Phi-3, to get replies based on the transcription? I mean, let's see how fast it can actually output the LLM's reply. That way you could make an AI assistant with no latency in responses and 100% local.
I am actually working on something like this; check out my recent videos on Jarvis. I am building Jarvis so you don't have to.
@@codewithbro95 Cool, nice job, keep it up. Can you also add a way to use the Phi-3 LLM with phidata for local RAG, plus options for reading CSV, PDF, and Word documents as well? This will get you a lot of views too; we are talking about a real use of an AI assistant with these abilities!!!
@@gnosisdg8497 Definitely something I am looking to work on, stay tuned!!!
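The pipeline discussed in this thread (whisper.cpp transcript fed into a local LLM) can be sketched roughly as below. The binary/model paths and the Ollama endpoint are assumptions for illustration, not what the channel actually uses:

```python
import json
import subprocess
import urllib.request

# Assumed paths -- adjust to wherever you built whisper.cpp and put a model.
WHISPER_BIN = "./main"
WHISPER_MODEL = "models/ggml-base.en.bin"
# Default endpoint of a locally running Ollama server (one way to serve phi3).
OLLAMA_URL = "http://localhost:11434/api/generate"


def transcribe(wav_path: str) -> str:
    """Run whisper.cpp on a 16 kHz mono WAV file and return the transcript."""
    out = subprocess.run(
        [WHISPER_BIN, "-m", WHISPER_MODEL, "-f", wav_path, "--no-timestamps"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()


def build_prompt(transcript: str) -> str:
    """Wrap the raw transcript in a simple assistant prompt."""
    return (
        "You are a helpful offline assistant. "
        f"User said: {transcript}\nReply briefly."
    )


def ask_llm(prompt: str, model: str = "phi3") -> str:
    """Send the prompt to the local LLM server and return its reply."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Usage sketch (requires the whisper.cpp binary and a running LLM server):
#   text = transcribe("question.wav")
#   print(ask_llm(build_prompt(text)))
```

Since both stages run locally, the end-to-end latency is dominated by model inference, not network round-trips, which is the whole point of the offline-assistant idea raised above.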
I'm waiting for TTS (text to speech) at the same speed; it would be great to have.
Not sure I understand what you mean!
@@codewithbro95 We have the option to recognize speech to text in real time, but text to speech is really slow right now.
@@snatvb Definitely agree with you; TTS inference is very slow at the moment. I recently stumbled on a really promising project called ChatTTS, which is apparently being built specifically for this purpose. I haven't tried it yet, but maybe I will and make a video on it.
@@codewithbro95 Yep, I've seen it recently. I tried Bark from Suno and it works pretty slowly (I have an RTX 3070), and sometimes it voices the LLM's hallucinated text instead of what I gave it :D
Hi, noob here... Trying to figure out how to get `make` working from the VSCode terminal on Windows. So far I installed MSYS2 and added C:\msys64\usr\bin and C:\msys64\mingw64\bin to the PATH environment variable, but it still says the command is not recognized.
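One likely cause, sketched below as config-like setup commands: a fresh MSYS2 install does not include `make` itself; it has to be installed from an MSYS2 shell first. The package names are the standard MSYS2 groups:

```shell
# Run inside an MSYS2 shell (not plain cmd or the VSCode terminal).
# base-devel provides /usr/bin/make; the mingw-w64 toolchain provides
# mingw32-make.exe and gcc for native Windows builds.
pacman -S --needed base-devel mingw-w64-x86_64-toolchain

# Then restart the VSCode terminal so the updated PATH is picked up, and try:
make            # works if C:\msys64\usr\bin is on PATH
mingw32-make    # works if only C:\msys64\mingw64\bin is on PATH
```

If the command is still not recognized, verify the PATH entries actually took effect in the new terminal with `where make` (cmd) or `Get-Command make` (PowerShell).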