Thank you so much! I was able to run gemma-2b-it. Great model. I love how Google is releasing this open source rather than closed source (unlike ClosedAI's ChatGPT).
I'm still on Monterey, so the GPU doesn't work.
Yeah, can't wait to update to Sonoma and use the full power of the M1 Pro!
Awesome stuff! Thank you, Nono.
Hi, I am getting this error:
Traceback (most recent call last):
  File "C:\Users\Priyank Pawar\gemma\.env\Scripts\run_cpu.py", line 2, in <module>
    from transformers import AutoTokenizer, AutoModelForCausalLM
ModuleNotFoundError: No module named 'transformers'
I installed transformers but I'm still getting the error.
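A common cause of this is installing transformers with a different Python than the one inside the .env virtual environment. A minimal diagnostic sketch to check which interpreter is running and whether the package is importable from it:

# Sketch: confirm the active interpreter and whether transformers is importable.
import sys

print("Running with:", sys.executable)  # should point inside the .env virtual environment

try:
    import transformers
    print("transformers version:", transformers.__version__)
except ModuleNotFoundError:
    # If this prints, install the package with this same interpreter, e.g.:
    #   python -m pip install transformers
    print("transformers is not installed for this interpreter")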
Great Content!
Hi, I'm trying to make an order-delivery chatbot. I made one with GPT by calling its API, but I think it will cost too much. That's why I want to run my own model. What do you suggest?
Hey!
I would recommend you try all the open LLMs available at the moment and assess which one works best for you in terms of the cost of running it locally, inference speed, and performance. Ollama is a great resource because in one app you can try many of them. Gemma is a great option, but you should also look at Llama 2, Mistral, Falcon, and other open models.
I hope this helps!
Nono
I have followed everything. I get this error when trying to run on GPU:
RuntimeError: User specified an unsupported autocast device_type 'mps'
I have confirmed that MPS is available and have reinstalled everything.
Hello! Thank you for your response. Yes, I have an M1 Max with Sonoma 14.3.1. I also tried all the models in case there was an issue with the number of parameters.
@alkiviadispananakakis4697 I had the same issue on an M2 Pro. I fixed it by downgrading transformers to 4.38.1. Now my only problem is that it's unbelievably slow to run!
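One workaround that avoids autocast entirely is to move the model and the inputs to the MPS device explicitly. A minimal sketch, assuming gemma-2b-it is already downloaded and torch/transformers are installed (this is not the exact script from the video):

# Sketch: run Gemma on the Apple Silicon GPU by placing the model and inputs
# on the "mps" device directly, instead of relying on torch.autocast.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "google/gemma-2b-it"  # assumes you have accepted the terms and are logged in
device = "mps" if torch.backends.mps.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(device)

inputs = tokenizer("Write a haiku about the M1 Pro.", return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))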
I am trying to run on the CPU. I am getting this error:
Gemma's activation function should be approximate GeLU and not exact GeLU.
Changing the activation function to `gelu_pytorch_tanh`. If you want to use the legacy `gelu`, edit the `model.config` to set `hidden_activation=gelu` instead of `hidden_act`. See github.com/huggingface/transformers/pull/29402 for more details.
Loading checkpoint shards: 0%| | 0/2 [00:00
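For what it's worth, the activation-function message above is a warning rather than a fatal error; transformers switches to `gelu_pytorch_tanh` on its own. If you do want to set it explicitly, a minimal sketch based on the warning text (`google/gemma-2b-it` here is just an example model ID):

# Sketch: load the config first, set the activation explicitly, then pass the
# config to from_pretrained. Only needed if you care about which GeLU is used.
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("google/gemma-2b-it")
config.hidden_activation = "gelu_pytorch_tanh"  # or "gelu" for the legacy exact GeLU
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", config=config)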
Can you provide the code where symlinks are not used? I just downloaded it as a repo. How do I add that repo in the code?
I have copied the folder into the environment and just added this:
tokenizer = AutoTokenizer.from_pretrained("./gemma-7b-it")
No matter whether you symlink or download the files to the folder, you should be able to load the files in the same way.
To download the files (without symlinks) you can add the flag --local-dir LOCAL_PATH_HERE and not use the --local-dir-use-symlinks flag.
Note that even when you don't symlink, the large files, i.e., the models, will still be symlinked because they are often huge files.
I hope that helps! =)
Nono
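For reference, a rough Python-side equivalent of downloading without the CLI, as a sketch ("./gemma-7b-it" is a placeholder directory, and the gated repo assumes you have accepted the terms and are logged in):

# Sketch: download the repo into a local folder with huggingface_hub,
# then load the tokenizer and model from that local path.
from huggingface_hub import snapshot_download
from transformers import AutoTokenizer, AutoModelForCausalLM

local_path = snapshot_download("google/gemma-7b-it", local_dir="./gemma-7b-it")
tokenizer = AutoTokenizer.from_pretrained(local_path)
model = AutoModelForCausalLM.from_pretrained(local_path)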
Code › github.com/nonoesp/live/tree/main/0113/google-gemma
↓ Timestamps!
00:00 Introduction
01:05 Find Models in Hugging Face
01:28 Terms
01:57 Install the Hugging Face CLI
02:21 Login
02:55 Download Models
03:51 Download a Single File
04:50 Download a Single File as a Symlink
05:25 Download All Files
06:32 Hugging Face Cache
07:00 Recap
07:29 Using Gemma
08:02 Python Environment
08:47 Run Gemma 2B on the CPU
12:13 Run Gemma 7B on the CPU
13:07 CPU Usage and Generating Code
17:24 List Apple Silicon GPU Devices with PyTorch
18:59 Run Gemma on Apple Silicon GPUs
23:52 Recap
24:25 Outro
Thanks for watching!
Subscribe to this Luma calendar for future live events! lu.ma/nono