Free and Local AI in Home Assistant using Ollama

  • Published 16 Apr 2024
  • ► MY HOME ASSISTANT INSTALLATION METHODS FREE WEBINAR - automatelike.pro/webinar
    ► DOWNLOAD MY FREE SMART HOME GLOSSARY - automatelike.pro/glossary
    ► MY RECORDING GEAR
    MAIN CAMERA: amzn.to/3Ln8qzb
    MAIN & 2ND ANGLE LENS: amzn.to/48bhxMZ
    2ND ANGLE CAMERA: amzn.to/44RjRWs
    SD CARDS: amzn.to/3sT7fRy & amzn.to/3sS0wHu
    MICROPHONE: amzn.to/466Kxne
    BACKUP MIC: amzn.to/468BSkb
    EDITING MACHINE: amzn.to/45LWdvS
    ► SUPPORT MY WORK
    Paypal - www.paypal.me/kpeyanski
    Patreon - / kpeyanski
    Bitcoin - 1GnUtPEXaeCUVWdJxCfDaKkvcwf247akva
    Revolut - revolut.me/kiriltk3x
    Join this channel to get access to perks - / @kpeyanski
    ✅ Don't Forget to like 👍 comment ✍ and subscribe to my channel!
    ► MY ARTICLE ABOUT THAT TOPIC - peyanski.com/home-assistant-o...
    ► DISCLAIMER
    Some of the links above are affiliate links. If you click on these links and purchase an item I will earn a small commission with no additional cost for you. Of course, you don’t have to do so in case you don’t want to support my work!
  • How-to & Style

Comments • 63

  • @KPeyanski
    @KPeyanski  1 month ago +1

    Are you going to try this Home Assistant Ollama Integration? And if yes, on what kind of device are you going to install the Ollama software?

  • @bugsub
    @bugsub 1 month ago +1

    Wow! Fantastic tutorial! Really appreciate your channel!

    • @KPeyanski
      @KPeyanski  1 month ago

      Glad it was helpful and thanks for the kind words!

  • @RocketBoom1966
    @RocketBoom1966 1 month ago +3

    Thank you, excellent content as usual. I have set up Ollama running in a Docker container on my Unraid server. The server has a low-power Nvidia GPU which I make use of to speed up responses.
    Another fun thing to try is to modify the end of the prompt template with something like this:
    Answer the user's questions using the information about this smart home.
    Keep your answers brief and do not apologize. Speak in the style of Captain Picard from Star Trek.
    Yes, my assistant will respond with answers in the style of Captain Picard.

    • @KPeyanski
      @KPeyanski  1 month ago

      Oh that is very interesting, thanks for the info, but how do you make the HA Ollama integration answer with voice?

    • @RocketBoom1966
      @RocketBoom1966 1 month ago

      @@KPeyanski I have seen it done, however I have struggled to make it work. My modified prompt template only responds in text form as you explained in your video. Things are moving so fast with these AI integrations, I imagine it won't be long until Home Assistant includes powerful AI tools by default. Exciting times.

    • @KPeyanski
      @KPeyanski  1 month ago

      exciting times indeed :)
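
For anyone who wants to reproduce the Docker setup from the top of this thread, here is a rough sketch (assuming Docker and, for the GPU speed-up, the NVIDIA Container Toolkit are already installed; the image name, volume, and port follow the official ollama/ollama image):

```shell
# Run the official Ollama image detached, with GPU access and the default API port.
docker run -d --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama ollama/ollama

# Pull a model inside the container so the Home Assistant integration can select it.
docker exec -it ollama ollama pull llama2
```

Omit --gpus=all to run on the CPU only; the rest stays the same.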

  • @FrankGraffagnino
    @FrankGraffagnino 1 month ago +1

    I _REALLY_ appreciate a tutorial that shows how to do this with a local LLM... very cool. Thanks!

    • @KPeyanski
      @KPeyanski  1 month ago

      You're very welcome! Are you going to try it and on what device?

    • @FrankGraffagnino
      @FrankGraffagnino 1 month ago +1

      @@KPeyanski probably not yet. But I just love when consumers can be better educated about local control. Thanks!

    • @KPeyanski
      @KPeyanski  1 month ago

      Yes, I also prefer local. Unfortunately it is not always an option.

  • @joeking5211
    @joeking5211 23 days ago

    Looks like a fantastic vid. Will keep an eye open for the Windows tutorial and come back then.

    • @KPeyanski
      @KPeyanski  21 days ago

      It is almost the same for Windows. You just have to install the Ollama Windows version, and everything else is the same.
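
To spell out the Windows/Linux point above: on Linux the install is a one-liner from the official docs, while on Windows you run the installer from ollama.com; afterwards the model commands are identical (a sketch, not a full walkthrough):

```shell
# Linux install script from the official Ollama site.
curl -fsSL https://ollama.com/install.sh | sh

# The same on both systems once Ollama is installed:
ollama pull llama2          # download the model
ollama run llama2 "hello"   # quick interactive sanity check
```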

  • @AlonsoVPR
    @AlonsoVPR 1 month ago +3

    I was waiting for someone to make a video about this! thank you sir!!

    • @KPeyanski
      @KPeyanski  1 month ago

      Glad it was helpful! On what kind of device are you going to install the Ollama software?

    • @AlonsoVPR
      @AlonsoVPR 1 month ago +1

      @@KPeyanski I don't have enough horsepower for this at the moment; I'm into low power consumption, but I'm thinking of getting a Proxmox server with a dedicated GPU. At the moment my whole house runs on a 2012 i5 Mac mini with 8 GB of RAM, also using Proxmox

    • @KPeyanski
      @KPeyanski  1 month ago +1

      I understand, low power consumption is important, but an i5 is not that bad and you can try Ollama on it. If it is not OK, just delete/uninstall it!

    • @AlonsoVPR
      @AlonsoVPR 1 month ago

      @@KPeyanski Maybe when I get a better server with more RAM :P Sadly my old Mac mini has 8 GB of RAM soldered to the motherboard, and all my services are using about 72% of the RAM at the moment :P
      Now I'm struggling to find a good Zigbee mmWave sensor that doesn't spam the network :/ Any recommendations?
      I have tried the TUYA-M100 and the MTG275-ZB-RL. Although the MTG275-ZB-RL is way better than the TUYA, it's still spamming my Zigbee network several times per second

    • @ecotts
      @ecotts 1 month ago

      I'm waiting for someone to make a video about all the data that META stole from your system as a result of the installation and then sold on to some random companies.

  • @miguelcid1965
    @miguelcid1965 1 month ago

    With Llama, is it able to turn on lights or entities in general? I read on the Hassio integration page that with the Ollama integration it isn't possible, but maybe that was before? Thanks.

  • @BrettVilnis
    @BrettVilnis 1 month ago

    Thanks, excellent video.

    • @KPeyanski
      @KPeyanski  1 month ago

      Glad you enjoyed it! Are you going to try it?

    • @BrettVilnis
      @BrettVilnis 1 month ago

      @@KPeyanski When voice is working

    • @KPeyanski
      @KPeyanski  1 month ago

      no idea, hopefully soon

  • @Palleri
    @Palleri 1 month ago +3

    Could you share the prompt template you are using?

  • @floor18fdb
    @floor18fdb 1 month ago

    So for Ollama I need a second device that is always on? Is it possible to install it directly on a Home Assistant server?

    • @KPeyanski
      @KPeyanski  1 month ago +1

      No, with this integration this is not possible. At least for now...

  • @michaelthompson657
    @michaelthompson657 1 month ago

    I'm assuming, since it can be installed on Linux, that you could have this on a separate Pi running Raspberry Pi OS Lite and connect it to your other Pi running HA? I have HA on a Pi 4 and a spare Pi 3; just wondering if the Pi 3 would be powerful enough to run Ollama?

    • @KPeyanski
      @KPeyanski  1 month ago

      This is interesting indeed, but I guess you have to try it out. It will be best if you share the result!

    • @michaelthompson657
      @michaelthompson657 1 month ago

      @@KPeyanski Do you think I could install it on Raspberry Pi OS Lite? I'm very inexperienced with Pi OS

    • @KPeyanski
      @KPeyanski  1 month ago

      I don't know, you can try...

    • @michaelthompson657
      @michaelthompson657 1 month ago

      @@KPeyanski I’m not that good 🤣

  • @PauloAbreu
    @PauloAbreu 1 month ago

    Great tutorial! Thanks. Is English the only language available?

    • @KPeyanski
      @KPeyanski  1 month ago

      not sure about that, but I think yes!

  • @danninoash
    @danninoash 1 month ago

    Hi, great video first of all, THANKS!!
    What I'm missing is the BT proxy... how do I configure it? Is it a must? Why isn't this part mentioned in the video? :(

    • @KPeyanski
      @KPeyanski  1 month ago +1

      BT proxy is not needed at all here. The communication between Home Assistant and Ollama is over the IP network, so just follow the steps from the video and you will have it; nothing additional is needed

    • @danninoash
      @danninoash 1 month ago

      @@KPeyanski SORRY!! I confused my question with another video of yours - the one about creating an Apple Watch as a device in HA LOL :))

    • @danninoash
      @danninoash 1 month ago

      @@KPeyanski What I wanted to ask here actually is - will I have to have a machine that is turned on 24/7? (whether it's Win/Linux/macOS)
      I didn't fully understand what I should do with it after I connect my HA with the Ollama integration.
      Question #2 please - does it somehow interfere with my Alexa, or does it work alongside it?
      THANKS!!

    • @danninoash
      @danninoash 1 month ago

      ???
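
On the networking point above: a quick way to verify that Home Assistant's host can reach Ollama over the IP network (192.168.1.50 is a placeholder for the address of the machine running Ollama):

```shell
# Ollama's root endpoint returns a short status string when the server is up.
curl http://192.168.1.50:11434/
# A reachable server answers: Ollama is running
```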

  • @jacquesdupontd
    @jacquesdupontd 22 days ago

    Thanks for the very good video. I know that you can now make a pretty good integration of GPT in HA and have trigger and speech exchanges. I imagine it's gonna be even easier and more polished (and creepier at the same time) with GPT-4o. I'm sure we'll be able to control devices and have speech and triggers soon with Ollama. I subscribed to your channel

    • @KPeyanski
      @KPeyanski  20 days ago +1

      Thanks for subscribing! Yes, integrating GPT into Home Assistant is becoming increasingly seamless, and GPT-4 will likely make it even more intuitive and powerful. It's exciting (and a bit creepy) to think about how advanced and interactive our smart homes can become soon. Stay tuned for more updates!

    • @jacquesdupontd
      @jacquesdupontd 20 days ago

      @@KPeyanski I'm doing the research to build some kind of Amazon Echo with a local LLM and maybe a screen. A bit like the ESP32-S3-BOX but better. Not for commercialisation for now (I'm sure there are tons of projects like that being developed). I'm still not sure what device to use to handle the local LLM. A GPU is a huge plus but takes up too much space. The best would be a Mac Mini M1; Ollama LLMs work wonders on it. I have to check how well Asahi Linux works and if I can pack everything into it (personal home server, Home Assistant, Ollama, voice assistant)

  • @sirmax91
    @sirmax91 1 month ago

    Can you make it run on a Raspberry Pi 5 and link it to Home Assistant?

    • @KPeyanski
      @KPeyanski  1 month ago

      I think yes, but I guess you have to try it.

  • @markrgriffin
    @markrgriffin 1 month ago

    Probably a dumb question, but how do I expose Ollama on my network if I install it on Windows? The instructions are not very specific

    • @KPeyanski
      @KPeyanski  1 month ago

      Follow the instructions from the Ollama documentation and add the Ollama IP to your OLLAMA_HOST variable. These are the steps:
      On Windows, Ollama inherits your user and system environment variables.
      First, quit Ollama by clicking on it in the taskbar
      Edit system environment variables from the Control Panel
      Edit or create new variable(s) for your user account for OLLAMA_HOST, OLLAMA_MODELS, etc.
      Click OK/Apply to save
      Run ollama from a new terminal window

    • @markrgriffin
      @markrgriffin 1 month ago

      @KPeyanski thanks for the reply. So just add the two variable names? With no values? That's where I'm stuck unfortunately. Do I not need to add a path to OLLAMA_MODELS and an IP for the host as values?
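
Regarding the variable values asked about above: on Linux/macOS the equivalent of those steps is two environment variables set before starting the server. The values below are illustrative, not mandatory; 0.0.0.0 makes Ollama listen on all interfaces so Home Assistant can reach it from another machine:

```shell
# Listen on all interfaces instead of the default 127.0.0.1.
export OLLAMA_HOST=0.0.0.0:11434

# Optional: where downloaded models are stored (example path).
export OLLAMA_MODELS="$HOME/.ollama/models"

# Start the server with these settings.
ollama serve
```

On Windows you would enter the same names and values in the environment-variables dialog rather than using export.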

  • @MichaelDomer
    @MichaelDomer 1 month ago +1

    Get rid of that Llama 2; version 3, which was just released, completely destroys it.

    • @KPeyanski
      @KPeyanski  1 month ago

      sounds good, are you using it already? And for what exactly?

  • @danl6734
    @danl6734 1 month ago +6

    Under NO CIRCUMSTANCES is anything facebook related going ANYWHERE near my network, offline/local or not.

    • @KPeyanski
      @KPeyanski  1 month ago +1

      no problem, you can select another model that has nothing in common with Meta & Facebook

    • @andrewtfluck
      @andrewtfluck 1 month ago +2

      Ollama, the tool, is separate from Facebook/Meta. You can run Llama on it, but you have a variety of other LLMs to choose from.

    • @danl6734
      @danl6734 1 month ago

      @@andrewtfluck WhatsApp WAS a separate tool from Facebook... not any more.
      Ollama was developed by Meta (Facebook) and I'm 99% sure there are 'call home' beacons in the code somewhere. Also, just on principle, I will not use anything Facebook related.

    • @Busy_Paws
      @Busy_Paws 7 days ago

      Paranoia

  • @OrlandoPaco
    @OrlandoPaco 1 month ago

    Add voice!

    • @KPeyanski
      @KPeyanski  1 month ago

      Yes, voice is needed here... Maybe in the next release!

  • @ecotts
    @ecotts 1 month ago +3

    I will never in my life add anything META related intentionally on any of my systems. Hell No!! 😂

  • @rude_people_die_young
    @rude_people_die_young 1 month ago

    Shouldn’t be hard to do function calling hey

    • @KPeyanski
      @KPeyanski  1 month ago

      you mean the voice function, or something else?

    • @rude_people_die_young
      @rude_people_die_young 1 month ago

      @@KPeyanski I mean where the LLM emits valid JSON that can be used in commands or API calls. It’s a confusing AI term.
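
In other words, the model is prompted to answer only with machine-readable JSON, which an automation can then parse. A toy sketch with a canned reply (the reply string and field names here are hypothetical; in a real setup this text would come back from Ollama's /api/generate endpoint when the request sets "format": "json"):

```shell
# Stand-in for what a local model might return when asked for JSON only.
model_reply='{"service": "light.turn_on", "entity_id": "light.kitchen"}'

# Extract the fields so they could drive a service/API call (jq would also work).
service=$(printf '%s' "$model_reply" | sed -n 's/.*"service": *"\([^"]*\)".*/\1/p')
entity=$(printf '%s' "$model_reply" | sed -n 's/.*"entity_id": *"\([^"]*\)".*/\1/p')

echo "$service -> $entity"   # prints: light.turn_on -> light.kitchen
```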