Comments •

  •  4 months ago

    Code › github.com/nonoesp/live/tree/main/0113/google-gemma
    ↓ Timestamps!
    00:00 Introduction
    01:05 Find Models in Hugging Face
    01:28 Terms
    01:57 Install the Hugging Face CLI
    02:21 Login
    02:55 Download Models
    03:51 Download a Single File
    04:50 Download a Single File as a Symlink
    05:25 Download All Files
    06:32 Hugging Face Cache
    07:00 Recap
    07:29 Using Gemma
    08:02 Python Environment
    08:47 Run Gemma 2B on the CPU
    12:13 Run Gemma 7B on the CPU
    13:07 CPU Usage and Generating Code
    17:24 List Apple Silicon GPU Devices with PyTorch
    18:59 Run Gemma on Apple Silicon GPUs
    23:52 Recap
    24:25 Outro
    Thanks for watching!
    Subscribe to this Luma calendar for future live events! lu.ma/nono
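    For quick reference, here's a minimal sketch of the CPU run from the video using transformers (the prompt is illustrative; google/gemma-2b-it is a gated repo, so accept the terms on Hugging Face and log in with the CLI first):
    # pip install torch transformers accelerate
    from transformers import AutoTokenizer, AutoModelForCausalLM
    model_id = "google/gemma-2b-it"  # gated repo: accept the license on Hugging Face first
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)  # loads on the CPU by default
    inputs = tokenizer("Write a haiku about the ocean.", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=100)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    For the Apple Silicon GPU section, the same script moves the model and the input tensors to the "mps" device.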

  • @tigery1016
    @tigery1016 4 months ago +2

    Thank you so much! I was able to run gemma-2b-it. Great model. Love how Google is releasing this open source rather than closed-source (unlike ClosedAI's ChatGPT).

    •  4 months ago

      Nice! Happy to hear you were able to run Gemma. =)

    • @tigery1016
      @tigery1016 4 months ago +1

      I'm still on Monterey, so the GPU doesn't work.
      Yeah, can't wait to update to Sonoma and use the full power of the M1 Pro.

  • @nadiiaheckman4213
    @nadiiaheckman4213 2 months ago +1

    Awesome stuff! Thank you, Nono.

    •  2 months ago

      Thank you, Nadiia! Glad you found this useful. =)

  • @bao5806
    @bao5806 3 months ago +3

    "Why is it that when I run 2B it's very slow on my Mac Air M2, usually taking over 5 minutes to generate a response? But on Ollama, it's very smooth?"🤨

    •  3 months ago

      Hey! It's likely because they're running the models with C++ (llama.cpp or gemma.cpp) instead of Python, which is much faster. I have yet to try gemma.cpp myself. Let us know if you experiment with this!
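      If you want to try the C++ route from Python, here's a minimal sketch using llama-cpp-python with a GGUF build of Gemma (the file name is just an example; any quantized Gemma GGUF should work):
      # pip install llama-cpp-python  (and download a GGUF-quantized Gemma file first)
      from llama_cpp import Llama
      llm = Llama(model_path="./gemma-2b-it.Q4_K_M.gguf")  # example file name
      result = llm("Explain what a symlink is in one sentence.", max_tokens=64)
      print(result["choices"][0]["text"])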
      Nono

    • @nhinged
      @nhinged 3 months ago +1

      Can you link gemma.cpp? I haven't looked on Google yet, but it would be nice if you can.

    •  3 months ago

      github.com/google/gemma.cpp

  • @cloudby-priyank
    @cloudby-priyank 4 months ago +1

    Hi, I am getting this error:
    Traceback (most recent call last):
      File "C:\Users\Priyank Pawar\gemma\.env\Scripts\run_cpu.py", line 2, in <module>
        from transformers import AutoTokenizer, AutoModelForCausalLM
    ModuleNotFoundError: No module named 'transformers'
    I installed transformers but am still getting the error.

    •  4 months ago

      Hey, Priyank! Did you try exiting the Python environment and activating it again after installing transformers?
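      As a quick sanity check, you can run this minimal sketch from the same environment you use for run_cpu.py to confirm which interpreter is active and that transformers is importable there:
      import sys
      print(sys.executable)  # path of the Python interpreter actually in use
      import transformers    # raises ModuleNotFoundError if not installed in this environment
      print(transformers.__version__)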

  • @LudoviKush
    @LudoviKush 4 months ago +1

    Great Content!

    •  4 months ago

      Hi Ludovico!
      Thanks so much for letting me know.
      I'm glad you found the content useful. =)
      Cheers!
      Nono

  • @ferhateryuksel4888
    @ferhateryuksel4888 3 months ago +1

    Hi, I'm trying to make an order-delivery chatbot. I made it with GPT by providing APIs, but I think it will cost too much. That's why I want to train a model. What do you suggest?

    •  3 months ago +1

      Hey!
      I would recommend you try all the open LLMs available at the moment and assess which one works best for you in terms of the cost of running it locally, inference speed, and performance. Ollama is a great resource because you can try many of them in one app. Gemma is a great option, but you should also look at Llama 2, Mistral, Falcon, and other open models.
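      If it helps, here's a minimal sketch of comparing a few local models with the ollama Python client (it assumes Ollama is installed and each model has been pulled; the model tags are illustrative):
      # pip install ollama  (and e.g. `ollama pull gemma:2b` for each model)
      import ollama
      prompt = "A customer asks: where is my order?"
      for model in ["gemma:2b", "llama2:7b", "mistral:7b"]:  # illustrative tags
          response = ollama.chat(model=model, messages=[{"role": "user", "content": prompt}])
          print(model, "->", response["message"]["content"][:200])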
      I hope this helps!
      Nono

    • @ferhateryuksel4888
      @ferhateryuksel4888 3 months ago +1

      Thanks a lot!

  • @alkiviadispananakakis4697
    @alkiviadispananakakis4697 4 months ago +2

    I have followed everything. I get this error when trying to run on GPU:
    RuntimeError: User specified an unsupported autocast device_type 'mps'
    I have confirmed that MPS is available and have reinstalled everything.

    •  4 months ago

      Hey! If you've confirmed mps is available, you must be running on Apple Silicon, right? If you are, and you've set up the Python environment as explained, can you share what machine and configuration you're using? I've only tested this on an M3 Max MacBook Pro.

    •  4 months ago

      Other people have mentioned the GPU not being available in macOS versions prior to Sonoma. Are you on the latest update?
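      You can check with this minimal sketch using PyTorch's MPS flags:
      import torch
      # True only on Apple Silicon with a recent macOS and a PyTorch build that includes MPS
      print("MPS built:    ", torch.backends.mps.is_built())
      print("MPS available:", torch.backends.mps.is_available())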

    • @alkiviadispananakakis4697
      @alkiviadispananakakis4697 4 months ago

      Hello! Thank you for your response. Yes, I have an M1 Max with Sonoma 14.3.1. I also tried all the models in case there was an issue with the number of parameters.

    • @drjonnyt
      @drjonnyt 4 months ago +2

      @alkiviadispananakakis4697 I had the same issue on an M2 Pro. I just fixed it by downgrading transformers to 4.38.1. Now my only problem is that it's unbelievably slow to run!

    •  4 months ago

      Nice! The only thing that may be faster is running gemma.cpp.

  • @maxyan2572
    @maxyan2572 4 months ago +1

    What is your machine?

    •  4 months ago +1

      Hi, Maxyan!
      I'm using a MacBook Pro M3 Max (14-inch, 2023) with 1TB SSD, 64GB Unified Memory, and 16 cores (12 performance and 4 efficiency).
      Nono

    • @maxyan2572
      @maxyan2572 4 months ago +1

      😍 Thanks! That is very detailed.

  • @AamirShroff
    @AamirShroff 2 months ago

    I am trying to run on the CPU. I am getting this error:
    Gemma's activation function should be approximate GeLU and not exact GeLU.
    Changing the activation function to `gelu_pytorch_tanh`. If you want to use the legacy `gelu`, edit the `model.config` to set `hidden_activation=gelu` instead of `hidden_act`. See github.com/huggingface/transformers/pull/29402 for more details.
    Loading checkpoint shards: 0%| | 0/2 [00:00

    •  2 months ago

      Hey, Aamir! What machine are you running on?

    • @AamirShroff
      @AamirShroff 2 months ago +1

      I am using an Intel Core Ultra 5 (14th gen).

    •  2 months ago

      I've only run Gemma on Apple Silicon so I can't guide you too much. Hmm.

  • @jobinjose3917
    @jobinjose3917 4 months ago

    Can you provide the code for when symlinks are not used and the model is just downloaded as a repo? How do I add that repo in the code?
    I have copied the folder into the environment and just added:
    tokenizer = AutoTokenizer.from_pretrained("./gemma-7b-it")

    •  4 months ago

      Whether you symlink or download the files to a folder, you should be able to load them in the same way.
      To download the files without symlinks, add the flag --local-dir LOCAL_PATH_HERE and don't use the --local-dir-use-symlinks flag.
      Note that even when you don't use symlinks, the large files, i.e., the model weights, will still be symlinked because they are often huge files.
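      For reference, here's a minimal sketch of the equivalent download in Python with huggingface_hub (the repo id and local path are examples; in recent huggingface_hub versions the local_dir_use_symlinks argument is deprecated and files are copied by default):
      from huggingface_hub import snapshot_download
      from transformers import AutoTokenizer, AutoModelForCausalLM
      # Download the whole repo into a local folder, copying files instead of symlinking to the cache
      local_path = snapshot_download(
          repo_id="google/gemma-7b-it",
          local_dir="./gemma-7b-it",
          local_dir_use_symlinks=False,
      )
      tokenizer = AutoTokenizer.from_pretrained(local_path)
      model = AutoModelForCausalLM.from_pretrained(local_path)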
      I hope that helps! =)
      Nono