
AI on Mac Made Easy: How to run LLMs locally with OLLAMA in Swift/SwiftUI

  • Published 16. 08. 2024
  • Running AI on Mac in your Xcode projects just got simpler! Join me as I guide you through running local Large Language Models (LLMs) like Llama 3 on your Mac with OLLAMA. In this video, I’ll walk you through installing OLLAMA, an open-source platform, and demonstrate how you can integrate AI directly into your Swift and SwiftUI apps.
    Whether you’re dealing with code, creating new applications, or simply curious about AI capabilities on macOS, this tutorial covers everything from basic setup to advanced model configurations. Dive into the world of local AI and enhance your apps’ functionality without the need for extensive cloud computing resources. Let’s explore the potential of local AI together, ensuring your development environment is powerful and efficient. Watch now to transform how you interact with AI on your Mac!
    👨‍💻 What You’ll Learn:
    - How to install and set up OLLAMA on your Mac.
    - Practical demonstrations of OLLAMA in action within the terminal and alongside SwiftUI (see the short Swift sketch after the description).
    - Detailed guidance on optimizing OLLAMA for various system configurations.
    - Customizing AI models
    🔍 Why Watch?
    - Understand the benefits of running AI locally versus cloud-based solutions.
    - Watch real-time tests of AI models on a MacBook Pro with an M1 chip, showcasing the power and limitations.
    - Gain insights into modifying AI responses to fit your specific coding style and project requirements.
    If you liked what you learned and you want to see more, check out one of my courses!
    👨‍💻 my macOS development course learn.swiftypl...
    👨‍💻 my Core Data and SwiftUI course learn.swiftypl...
    👩🏻‍💻 my SwiftUI layout course learn.swiftypl...
    ⬇️ Download ollama: ollama.com/
    ⬇️ Download ollamac project files: github.com/kev...
    #SwiftUI #ai #macos
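    A minimal Swift sketch of what the SwiftUI integration boils down to (an illustration, not code from the video): Ollama serves a local HTTP API on port 11434, so a Swift app can post a prompt to its /api/generate endpoint with URLSession. The model name "llama3" and the helper name askOllama are assumptions for the example, not names from the project files.

    import Foundation

    // Request/response shapes for Ollama's /api/generate endpoint.
    struct OllamaGenerateRequest: Encodable {
        let model: String
        let prompt: String
        let stream: Bool
    }

    struct OllamaGenerateResponse: Decodable {
        let response: String
    }

    // Sends a prompt to the locally running Ollama server and returns the full reply.
    func askOllama(_ prompt: String) async throws -> String {
        let url = URL(string: "http://localhost:11434/api/generate")!
        var request = URLRequest(url: url)
        request.httpMethod = "POST"
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        request.httpBody = try JSONEncoder().encode(
            OllamaGenerateRequest(model: "llama3", prompt: prompt, stream: false)
        )
        let (data, _) = try await URLSession.shared.data(for: request)
        return try JSONDecoder().decode(OllamaGenerateResponse.self, from: data).response
    }

    In a SwiftUI view you would call askOllama inside a Task and bind the returned string to a Text.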

Comments • 18

  • @LinuxH2O
    @LinuxH2O 2 days ago

    Really informative, something I was kind of in need of. Thanks for showing things off.

  • @khermawan
    @khermawan 26 days ago +3

    Ollamac and OllamaKit creator here! 👋🏼 Great video, Karin!! ❤

  • @Another0neTime
    @Another0neTime A month ago +1

    Thank you for the video, and sharing your knowledge.

  • @andrelabbe5050
    @andrelabbe5050 A month ago

    I enjoyed the video. Easy to understand and, most importantly, it shows what you can do without too much hassle on a not-too-powerful MacBook. From the video I believe I have the same model as the one you used. I do like the idea of setting presets for the 'engine'. I also use the Copilot apps, so I can check how both perform on the same question. I have just tested deepseek-coder-v2 with the same questions as you... Funny thing, it is not exactly the same answer. Also, on my 16 GB Mac, the memory pressure gets a nice yellow colour. Sadly, contrary to the Mac in the video, I have more stuff running in the background, like Dropbox, etc., which I cannot really kill just for the sake of it.

  • @KD-SRE
    @KD-SRE 10 days ago

    I use '/bye' to exit out of the Ollama CLI.

  • @juliocuesta
    @juliocuesta A month ago

    If I understood correctly, the idea could be to create a macOS app that includes some feature requiring an LLM. The app is distributed without the LLM, and the user is notified that the feature will only be available if they download the model. This could be implemented in a View with a button that downloads the file and configures the macOS app to start using it.
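    A minimal SwiftUI sketch of that idea, assuming the app talks to a locally running Ollama server and uses its /api/pull endpoint to fetch a model on demand; the view name ModelSetupView and the model "llama3" are illustrative, not taken from the video:

    import Foundation
    import SwiftUI

    // Body for Ollama's /api/pull request.
    struct OllamaPullRequest: Encodable {
        let name: String
        let stream: Bool
    }

    struct ModelSetupView: View {
        @State private var isDownloading = false
        @State private var status = "This feature needs a local AI model."

        var body: some View {
            VStack(spacing: 12) {
                Text(status)
                Button("Download model") {
                    Task { await downloadModel() }
                }
                .disabled(isDownloading)
            }
            .padding()
        }

        // Asks the local Ollama server to pull the model and reports the result.
        private func downloadModel() async {
            isDownloading = true
            defer { isDownloading = false }
            do {
                var request = URLRequest(url: URL(string: "http://localhost:11434/api/pull")!)
                request.httpMethod = "POST"
                request.setValue("application/json", forHTTPHeaderField: "Content-Type")
                request.httpBody = try JSONEncoder().encode(OllamaPullRequest(name: "llama3", stream: false))
                let (_, response) = try await URLSession.shared.data(for: request)
                status = (response as? HTTPURLResponse)?.statusCode == 200 ? "Model ready." : "Download failed."
            } catch {
                status = "Download failed: \(error.localizedDescription)"
            }
        }
    }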

  • @tsalVlog
    @tsalVlog A month ago

    Great video!

  • @guitaripod
    @guitaripod A month ago +1

    Wondering what it'd take to get something running on iOS. Even with a 2B model it might prove useful.

  • @kamertonaudiophileplayer847

    Awesome video!

  • @officialcreatisoft
    @officialcreatisoft A month ago

    I've tried using LLMs locally, but I only have 8 GB of RAM. Great video!

    • @SwiftyPlace
      @SwiftyPlace A month ago +1

      Unfortunately, Apple made the base models with 8 GB of RAM. A lot of people have the same problem as you.

    • @jayadky5983
      @jayadky5983 A month ago +1

      I feel like you can still run the Phi3 model on your device.

  • @ericwilliams4554
    @ericwilliams4554 A month ago

    Great video. Thank you. I am interested to know if any developers are using this in their iOS apps.

    • @SwiftyPlace
      @SwiftyPlace A month ago +1

      This is not working for iOS. If you want to run an LLM on an iPhone, you will need to use a smaller model, which usually doesn't perform so well. Most iPhones have less than 8 GB of RAM. That is also why Apple Intelligence will process more advanced, complex tasks in the cloud.

  • @mindrivers
    @mindrivers A month ago

    Dear Karin, Could you please advise on how to put my entire Xcode project into a context window and ask the model about my entire codebase?

  • @midnightcoder
    @midnightcoder A month ago +2

    Any way of running it on iOS?

    • @EsquireR
      @EsquireR A month ago

      Only watchOS, sorry.

  • @bobgodwinx
    @bobgodwinx A month ago

    LLMs have a long way to go. 4 GB to run a simple question is a no-go. They have to reduce it to 20 MB, and then people will start paying attention.