Unlock Ollama's Modelfile | How to Upgrade your Model's Brain using the Modelfile
- Published 31 May 2024
- In this video, we analyse Ollama's Modelfile and how we can change the brain of the models in Ollama.
A model file is the blueprint to create and share models with Ollama.
The Modelfile supports the following instructions: FROM, PARAMETER, TEMPLATE, SYSTEM, ADAPTER, LICENSE, and MESSAGE.
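As a minimal sketch of how those instructions fit together (the model name, parameter value, and messages here are illustrative, not from the video):

```
# Start from an existing base model pulled into Ollama
FROM llama3

# Tune a sampling parameter
PARAMETER temperature 0.7

# Set the system prompt (the model's "brain")
SYSTEM "You are a concise, helpful assistant."

# Seed the conversation history
MESSAGE user Hello
MESSAGE assistant Hi! How can I help you today?
```

You would then build a custom model from this file with `ollama create mymodel -f Modelfile` and run it with `ollama run mymodel`.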
Link: github.com/ollama/ollama/blob...
Let’s do this!
Join the AI Revolution!
#ollama #modelfile #milestone #AGI #openai #autogen #windows #ai #llm_selector #auto_llm_selector #localllms #github #streamlit #langchain #qstar #webui #python #llm #largelanguagemodels
CHANNEL LINKS:
🕵️♀️ Join my Patreon: / promptengineer975
☕ Buy me a coffee: ko-fi.com/promptengineer
📞 Get on a Call with me - at $125 Calendly: calendly.com/prompt-engineer4...
❤️ Subscribe: / @promptengineer48
💀 GitHub Profile: github.com/PromptEngineer48
🔖 Twitter Profile: / prompt48
TIME STAMPS:
0:00 Intro
0:30 Download Ollama
1:15 Startup Ollama
4:10 Introducing the Modelfile
5:15 Modelfile in Depth
7:46 System in Modelfile
8:20 Construct Custom Model from Modelfile
9:18 Test the new Custom Model
10:43 Messages in Modelfile
12:57 Next Video Conclusion
🎁Subscribe to my channel: / @promptengineer48
If you have any questions, comments or suggestions, feel free to comment below.
🔔 Don't forget to hit the bell icon to stay updated on our latest innovations and exciting developments in the world of AI! - Science & Technology
Thanks for taking up the request ... 😊
🤗 Welcome
I use the web UI, and I feel it's much easier to manage the Modelfiles, plus you get the obvious history tracking of the chat, etc.
Great Video, Thanks very much.
You are welcome!
Great 👍
Thank you! Cheers!
great thanks
You are welcome!
Nice one. My question was: how do we use mistral_prompt for production purposes, or send it to a client?
Yes. You can push this to your Ollama account under your models. Then anyone will be able to pull the model with something like `ollama pull promptengineer48/mistral_prompt`. I will show the process in the next video on Ollama for sure.
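As a sketch of that sharing workflow using the standard Ollama CLI (the model name comes from the comment above; pushing assumes you are signed in to your ollama.com account):

```
# build the custom model locally from your Modelfile
ollama create promptengineer48/mistral_prompt -f Modelfile

# publish it under your account on ollama.com
ollama push promptengineer48/mistral_prompt

# anyone can then fetch and run it
ollama pull promptengineer48/mistral_prompt
ollama run promptengineer48/mistral_prompt
```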
@PromptEngineer48 Appreciated, mate.
What exactly does this do that koboldcpp or sillytavern does not already do in a much simpler way?
Basically, if I can get the models running on Ollama, we open another door of integration.
do you have a video showing how to use crewai and ollama together?
czcams.com/video/GKr5URJvNDQ/video.html
How does modelfile not have a file extension? This keeps me up at night not understanding how that works :)
I will find the reason and give you back a good night's sleep.
❯ ollama run mistral
>>> does a computer filename must have a extension?
A computer file name does not strictly have to have an extension, but it is a common convention in many computing systems, including
popular operating systems like Windows and macOS. An extension provides additional information about the type or format of the data
contained within the file. For instance, a file named "example.txt" with no extension would still be considered a valid file, but the
system might not recognize it as a text file and may not open it with the default text editor. In contrast, if the same file is saved
with the ".txt" extension, the system is more likely to open it using the appropriate text editor.
One popular file without an extension, like `Modelfile`, is `Dockerfile`. I think the developers named it after that one...
How do I find out when the model was actually updated? When was it filled with data, and how outdated is that data?
You will have to put a different name for the model...
@PromptEngineer48 Thank you, but I asked how to find out how current the data is when I download someone else's model, not when I make my own.
If you run the `ollama list` command in cmd, you will see the list of all models on your system.
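For what it's worth, the standard CLI offers a couple of relevant commands here; note that the modified time shown is when the model was pulled or created locally, not the training-data cutoff of the underlying weights:

```
# lists each local model with its size and local modified time
ollama list

# prints details for one model; --modelfile dumps its Modelfile
ollama show mistral --modelfile
```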
This is great info. One thing I have wanted to do is migrate all my local models to another drive. On Win11 I was using WSL2 with Linux Ollama, then I installed Windows Ollama and lost the reference to the local models. I'd rather not download the models again. In addition, it would be nice to be able to migrate models to another SSD and have Ollama reference the alternate model path.
OLLAMA_MODELS on Windows works, but only for downloading new models. When I copied models from the original WSL2 location to the new location, Ollama would not recognize the models in the list command.
Curious if anyone has needed to relocate a large number of models to a new location and gotten Ollama to reference the new model path.
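One possible sketch of that migration on Windows, assuming the default store location and an illustrative target path (`D:\ollama\models`). A likely cause of `ollama list` losing track is copying only the blobs without the accompanying `manifests` directory, so copy the whole tree:

```
:: point Ollama at the new models directory (takes effect after restarting Ollama)
setx OLLAMA_MODELS "D:\ollama\models"

:: copy the entire models tree (blobs AND manifests) to the new drive
robocopy "%USERPROFILE%\.ollama\models" "D:\ollama\models" /E
```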
Got it
Is it possible to prepare a model with RAG and then save it as a new model?
To prepare a model for RAG, we would need to fine-tune the model separately using other tools, get the .bin or GGUF file, then convert it for Ollama integration.
@PromptEngineer48 Thanks, I will try to take a deeper look into that, but something tells me I won't have enough memory for that :(
Try it on RunPod.
Stupid question. Does this create a new model file, or does it just create an instruction file for the base model to follow?
New Model File
@PromptEngineer48 So the size on disk gets duplicated...? I mean, 4 GB for llama3 plus an extra 4 GB for whatever copy we make?
@JavierCamacho No, the old one is not used, just the new one.
@@PromptEngineer48 thanks
🙄👍