The Secret Behind Ollama's Magic: Revealed!
- Published Feb 18, 2024
- Ollama is amazing and lets you run LLMs locally on your machine. But how does it work? What are the pieces you need to use? This video covers it all.
Be sure to sign up to my monthly newsletter at technovangelist.com/newsletter
And if you're interested in supporting me, sign up for my Patreon at / technovangelist - Science & Technology
that 10 second pause at the end of the video was like expecting a post-credits scene from a Marvel movie :D
Really enjoyed this video. I especially appreciated the section on creating derivative models. Very informative. Thanks!
Appreciate you taking the time to put this video together!
Excellent content and explanation, your videos get my like when they start, just to be sure I don't forget it.
Awesome and calm explanation. By far the best channel about this topic.
Sorry but you are so cool, I'm really enjoying your videos and they're saving me months of reading the docs. This AI world is moving so fast for an old person like me but it's making me feel young again, though it feels like we are still at the 14.4k modem stage of development. Love what you're doing.
Great job compiling and explaining this information. Fantastically useful video - keep up the great work brother
Thanks for the in depth review. Good stuff sir!
Hey, Matt! Thank you for such a clear explanation! Cheers!
Thanks! Good architecture video. Amazing.
And I thought I knew everything. Until I found your video(s).
Thanks again!!
Thanks for this. This helped it click for me that the model is not the big thing, but rather the weights are. I find that a bit confusing because people talk about training a model, not training the weights.
awesome tutorial
Great job. Thanks for going deep into how the architecture works and where the possible break points are. One question: do you have an architecture diagram showing each block of Ollama and how they interact with each other?
The explanation that Ollama doesn't upload things was the most interesting for me and it got answered in that video. It's no wonder given where Ollama comes from.
It is still unclear to me how Ollama really works, was hoping for a deeper dive in that topic.
Hi Matt, great content. I think it would be great if you had some kind of cookbook with your experiments, similar to Hugging Face. On another note, it would be great to have videos on RAG + open source and running agents; they seem to be hot topics right now.
Thanks!
I do have an "anything else" question : it seems like Ollama has its own custom place and way of storing models. Does this mean that if I want to do other stuff with the models, say for instance using EleutherAI's benchmarking stuff, I'll need a copy of all the models? I currently have lots and lots of models, almost 3T worth, so I'd rather not have to have 6T if I can help it...
Another great video. I just need to implement RAG so I can remember it.
Thanks for the details. Can you create a video on running 70B (or larger models) that doesn't fit entirely in GPU memory?
Amazing presentation. Is there a paper or post discussing the dockeresque style of ollama's manifest?
I don’t think so
Great presentation @Matt. I do have one question as it relates to the saving option: if you save once and then continue asking questions, will it continue to save, or will you need to do this after or during each session?
That's not really the purpose of the save. Save creates a new model based on the current state. Once you set the system prompt and parameters, save it so that you can use it later.
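For example, a rough sketch of that flow in the interactive prompt (the slash commands can be checked with /? inside ollama run; llama2 and the names here are just placeholders):

    ollama run llama2
    >>> /set system You are a terse assistant who answers in one sentence.
    >>> /set parameter temperature 0.3
    >>> /save terse-llama
    >>> /bye
    ollama run terse-llama

After the save, the new name shows up in ollama list and keeps the system prompt and parameters, so you don't have to re-enter them each session.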
Great video, can you make a video on autogen-studio + ollama? Is there a local model able to run the playground examples?
Yes. Been meaning to cover that
Thanks for this... just what I needed 👍 What would really help would be a list of example prompts, or sequences of prompts, that produce workable code. So far for me there have been almost no good results for anything more than a very simple question (which I likely would never ask)... for example I asked for a popup list that, when a list item is clicked, dismisses the popup and returns the item. What I got was a non-modal contact form with a select list. Many iterations later, still nothing useful. Cheers...
Excellent
This is super helpful as a reference for my efforts to get the Security people to stop freaking out about what demons might be infecting people's computers if they allow something like Ollama to be distributed. Can you comment further on what happens with those "memory" files? I presume they stay local even if someone pushes the model that they're associated with. Also, I'd like to see a short video (or maybe there is one) on the concept of someone "customizing a model and pushing it to the Library". What customizations are available? If customizing doesn't change the weights, why would someone do it? How is a "model" different than "the weights"? These are all things that Security is going to ask.
Hey! Big fan and user of Ollama, thank you for everything you do!
Just out of curiosity, how does Ollama work on M1/M2 Macs which lack NVIDIA gpus?
When I try to run Mixtral through PyTorch, I get an error about not having NVIDIA CUDA. How does Ollama bypass this problem?
The team runs on Apple Silicon Macs, so that came first. Metal and Apple Silicon are super powerful. Ollama doesn't use PyTorch or Python, which avoids a lot of problems.
@@technovangelist So it’s PyTorch that’s dependent on CUDA, not the LLM model itself?
Are there any frameworks besides Ollama that work with Apple Silicon? Seems like TensorFlow doesn't work on Apple Silicon either?
Asking because I’m trying to fine-tune models locally. Wishing for the day Ollama adds local fine-tuning as a feature!
Hi Matt! Thanks for your videos - you inspired me to get into the subject.
How can I use my own data in Ollama? I would like to build a search engine that finds products according to some given attributes. I have product title, SKU, and description. It would be good if I could get the product detail (including product number) when posting a query like "mineral oil, 5W40" or so.
How can I start with that?
If you have structured data and you want to search for things that match then more traditional db lookup is better and faster.
@@technovangelist Well, it is not so structured 🙂 Very often product descriptions are quite chaotic and users don't type exactly what's inside the descriptions. Anyway, whatever it is - how do I use my own data?
Why do none of the models work on my end? When I try mistral or llama 2, I get a message stating those terms are considered hate speech. Is there a workaround for this?
For those on Arch Linux, don't make the mistake I did if you have Nvidia: I installed the ollama package from the AUR, but never noticed the ollama-cuda package, which is sooo much faster.
Can someone explain how Ollama manages to run large models on my server when I can't even run smaller ones? (Ollama can run 34b versions of models, but without it I run out of memory even on 7b versions.)
I assume without Ollama you are trying to work with unquantized models? A 7b unquantized model will take 32GB of VRAM at least because it's 4 bytes per parameter. Quantization reduces the size of the parameters with almost no impact on precision, so it can be reduced to needing about 3.5GB for best performance, or as little as under 2. A 34b model could fit in as little as 9GB.
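Rough back-of-the-envelope numbers behind that (approximate, ignoring context and runtime overhead):

    7B  params x 4 bytes (fp32)    ≈ 28 GB
    7B  params x 0.5 bytes (4-bit) ≈ 3.5 GB
    34B params x 0.5 bytes (4-bit) ≈ 17 GB
    34B params x ~2 bits           ≈ 9 GB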
Question about Ollama:
I have 8GB VRAM.
For the first request I ask it to run mistral 7b, for example.
For the second request I ask it to run llama2.
Does it automatically swap the model or what? Thanks. Is it able to do what I said? Because if yes, it's very useful for small VRAM like mine :D
Yeah. With smaller models this is closer to magic. It dumps the first model from memory and then loads the second and answers the question. With larger models you feel the time taken to load and unload models.
@@technovangelist Whoa, why am I only just finding out about this? Thank you so much. This is exactly what I need later for my personal project.
How can I point Ollama to my own registry where, say, I'm hosting models on my own on-prem hardware?
That will come soon. I think the person on the team who wrote the registry was able to reuse his code from Docker Hub when he created it, but a lot had to change to support Ollama. Docker images are minuscule compared to LLMs.
If you "export OLLAMA_HOST = YOUR_IP_HERE ollama serve" to serve ollama on another IP like inside your tailnet like i am, the ollama instance you get cannot use the models downloaded form the "normal" "ollama run" instance. you have to download every model again and manage them in two (or more) places. is that a bug or is there a meaning with doing it like that?
Also programs like autogenStudio does not set the "num_ctx" so it defaults to 2048, is there a way to set that on a per model basis like just set it for "mistral:7b", i guess i could create a model file maybe? but then i have to do it for each model/new model i have/get. would be so much easier if it just pulled that from the ollama site as a model specific default value.
ALSO, love your videos just subbed!
Yes, if you set up environment variables the wrong way as you showed, then it's a different user accessing them, so the models aren't where they are expected to be. I have a couple of videos on here that show how to set environment variables for Mac, Linux, and Windows. When you set them properly you won't have any issues.
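For example, on a Linux install where Ollama runs as a systemd service, the usual pattern is roughly this (the values shown are placeholders; check the Ollama docs for the exact variable names and default paths):

    sudo systemctl edit ollama.service
    # then add under [Service]:
    #   Environment="OLLAMA_HOST=0.0.0.0"
    #   Environment="OLLAMA_MODELS=/path/to/your/models"
    sudo systemctl daemon-reload
    sudo systemctl restart ollama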
For the num_ctx question, that’s where a modelfile comes in. You can set that parameter and get the larger context, but be warned, large context takes a huge amount of memory
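As a sketch of what that modelfile could look like (assuming mistral:7b as the base and 8192 as the desired context; the new model name is just an example):

    # Modelfile
    FROM mistral:7b
    PARAMETER num_ctx 8192

    ollama create mistral-8k -f Modelfile

Then point autogenStudio (or anything else) at mistral-8k instead of mistral:7b.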
@@technovangelist thank you, I will go find those videos.
actually, you are right. I should do another that shows that scenario a bit more. thanks
@@technovangelist Can you please provide more context on how to do it, for the num_ctx question
Do I have to duplicate the model to add my permanent system prompt?
Creating a new model based on another with a new system prompt will mean an additional few kb. It will use the same weights layer.
Great @@technovangelist, but how do I do this?
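A minimal sketch of that, assuming llama2 as the base and a made-up system prompt:

    # Modelfile
    FROM llama2
    SYSTEM You always answer like a polite pirate.

    ollama create pirate-llama -f Modelfile
    ollama run pirate-llama

The new model only adds a small text layer on top; the multi-gigabyte weights layer is shared with llama2, which is why it only costs a few kb.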
For Linux you can just Ctrl-C out of ollama serve and that's good enough to free the GPU memory.
Yes. Do that and wait 5 minutes.
@technovangelist doesn't it force the GPU to clear? On my machine at least it seems to do that.
Because when you kill the ollama parent process the OS clears the GPU memory automatically.
Ahh sorry, you are running ollama serve at the CLI rather than the recommended way of running it as a service. Got it. I think most use it the normal way and that's what I cover in the video.
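For reference, when it is running as a service on Linux, the equivalent is roughly this (assuming the default systemd unit name from the installer):

    sudo systemctl stop ollama    # kills the server process; the OS frees the GPU memory
    sudo systemctl start ollama   # bring it back when you need it again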
Ollama vs LM Studio: which is better and what is the difference?
The way I have heard most describe it is that LM Studio is great to start with, but you quickly hit walls that are difficult to deal with. Ollama will let you do far more with better speed and efficiency. Plus Ollama is open source.
Can you help us out with a detailed video on how to add a new layer to an Ollama model? 6:05
are you asking about doing the fine tuning, or adding the fine tune adapter to ollama?
@@technovangelist I meant doing the Fine tuning
@@technovangelist And also I'd be very thankful if you could provide more context on adding the fine-tune adapter to Ollama (I'm new to all these concepts, appreciate your advice 😃🫡)
Waiting on your knowledge 😉 😀 @@technovangelist
Discord is Chaos, and I don't like Chaos, but I really like your videos, so thanks.
I think the Discord is lining up nicely with what the maintainer team is wanting. It's not chaos, it's activity.
You should cover whether it's running in a container runtime. I think it is.
If what's in a container runtime? Ollama? No, Ollama by default does not use Docker. That said, some choose to run it in Docker, and members of the Ollama team created Docker Desktop and Docker Hub when they worked at Docker.
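For anyone who does want the container route, the commonly cited pattern looks roughly like this (the --gpus flag assumes the NVIDIA container toolkit is installed; drop it for CPU-only):

    docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
    docker exec -it ollama ollama run llama2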
@@technovangelist then I’m curious how the nvidia drivers work and are isolated and don’t conflict with my system ones.
It is using the system ones
Brilliant, thanks, Matt, but you do look a bit thirsty after that video; better keep your bottle closer.
That water bottle lights up when it’s time to drink and I still forget.
@@technovangelist I have two good suggestions for a new video for you:
1) Show a way to use the models downloaded by Ollama from both Windows and Linux (WSL2) on the same machine, avoiding downloading the same models twice.
2) What's the best hardware spec to run Ollama from a fast and reliable flash drive using USB 3.2?
I'm unable to do the first. I don't have a Windows machine... just 6 Macs in this house, plus a few Proxmox machines mostly running Kubernetes. And since I can't get WSL on any cloud instances, I can't get it working. If you're asking me for the best hardware spec to run Ollama... well, based on what I said above... hard to beat a Mac Studio.
lol at surfin' the waves xD
☀🌊🏄♀
Ollama doesn't support parallel processing, so it can't be used for commercial purposes.
Those are two completely unrelated statements.
A well architected solution will get you everything you want. But plenty are using as is today for commercial use.
The issue with Ollama is it does not scale beyond the single user using it. Requests get queued.
Not everyone has to like it. You don't like that it does what it's designed to do: serve one user really, really well rather than slowing it down for everyone.
How does it not run on Windows?
Huh? It does run on Windows. Last week they released the native app, and for months before that it worked with WSL.
Insert a palworld joke here
hmm, never heard of palworld, and looking it up i don't see the connection. Can you clue me in?
Problem is that even Mixtral sucks compared to GPT-4.
There are some things that GPT does better and others that open source models do better. Even some of the smallest models are on par with what ChatGPT offers but far faster, which is a pretty big deal. Now if you just look at benchmarks it might look like ChatGPT wins everywhere, but I haven't seen a benchmark that reflects reality yet.