Mosleh Mahamud
Fine Tuning Qwen 2 with Custom Data
Have questions or ideas, or want to meet similar people?
Join the Discord: discord.gg/R3dPsd2E
Don't fall behind the AI revolution. I can help integrate machine learning/AI into your company:
mosleh587084.typeform.com/to/HSBXCGvX
Notebook links:
Why Fine Tune?
Fine-tuning Qwen 2, a large language model (LLM), is key to getting optimal performance and customization out of it. It improves accuracy and efficiency on specific tasks like customer support and content creation. Tailoring the model to industry-specific needs enhances its understanding of specialized terminology and context. Fine-tuning can also help reduce biases and support ethical compliance, producing fairer and more appropriate responses. Regular updates keep the model relevant as new data and trends emerge. Additionally, it improves interpretability and control, aiding in debugging and continuous improvement. Ultimately, fine-tuning Qwen 2 offers superior user experiences, strategic business advantages, and cost efficiency.
What is Qwen 2?
Qwen 2 is a series of large language models developed by Alibaba Cloud, designed to excel in various AI tasks. The Qwen 2 models range in size from 0.5 billion to 72 billion parameters, making them versatile for applications such as language understanding, generation, multilingual tasks, coding, and mathematics.
The Qwen 2 series boasts significant improvements in performance and efficiency. Leveraging advanced techniques like Group Query Attention, these models deliver faster processing with reduced memory usage. They support extended context lengths up to 128K tokens, enhancing their capability to manage long-form content.
Trained on data in 29 languages, including English, Chinese, German, Italian, Arabic, Persian, and Hebrew, Qwen 2 models excel in multilingual tasks. They have demonstrated superior performance on various benchmarks, surpassing other leading open-source models in language understanding and generation tasks.
Qwen 2 models are also designed with responsible AI principles in mind, incorporating human feedback to align better with human values and safety standards. They perform well in safety benchmarks, effectively handling unsafe multilingual queries to prevent misuse related to illegal activities.
These models are available on platforms like Hugging Face and Alibaba Cloud’s ModelScope, facilitating easy deployment for both commercial and research purposes.
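As a quick illustration of that availability, here is a minimal sketch of loading a Qwen 2 chat model from Hugging Face with the transformers library; the 0.5B instruct checkpoint is an assumption chosen only to keep memory requirements low.

```python
# Minimal sketch: loading a Qwen 2 chat model from Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-0.5B-Instruct"  # smallest instruct variant, chosen for low memory use
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize what fine-tuning is in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```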
Views: 9

Video

NV-Embed-v1: Best Embeddings Model To Use 2024
101 views · 2 hours ago
Have questions or ideas, or want to meet similar people? Join the Discord: discord.gg/R3dPsd2E Don't fall behind the AI revolution. I can help integrate machine learning/AI into your company: mosleh587084.typeform.com/to/HSBXCGvX Timestamps: Intro 0:00 MTEB Leaderboard 0:27 Extracting embeddings 1:27 Different embedding methods 3:15 NV-Embed-v1 by NVIDIA: NV-Embed-v1 is a generalist embedding model that r...
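For readers skimming, a hedged sketch of the embedding-extraction step the timestamps point to, assuming the sentence-transformers loading path shown on the NV-Embed-v1 model card (the trust_remote_code flag and exact model ID are taken on that assumption):

```python
# Hedged sketch: sentence embeddings from NV-Embed-v1 via sentence-transformers.
from sentence_transformers import SentenceTransformer

# trust_remote_code is assumed to be required, since the model ships custom code.
model = SentenceTransformer("nvidia/NV-Embed-v1", trust_remote_code=True)

sentences = ["Fine-tuning adapts a model to a domain.", "The cat sat on the mat."]
embeddings = model.encode(sentences)  # one vector per sentence
print(embeddings.shape)
```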
Deploying Qwen 2 Model With AWS
58 views · 4 hours ago
This video shows different deployment strategies for Qwen 2, covering one cost-effective and one expensive method. Qwen 2, regardless of size, can be deployed on AWS, GCP, or Azure. Have questions or ideas, or want to meet similar people? Join the Discord: discord.gg/R3dPsd2E Don't fall behind the AI revolution. I can help integrate machine learning/AI into your company: mosleh587...
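The preview doesn't include code, but as a rough sketch, one common AWS route is a SageMaker real-time endpoint; the role ARN below is a placeholder, and the container versions and instance type are assumptions to check against current SageMaker documentation:

```python
# Hedged sketch: deploying a Hugging Face LLM as a SageMaker endpoint.
from sagemaker.huggingface import HuggingFaceModel

role = "arn:aws:iam::123456789012:role/SageMakerRole"  # hypothetical role ARN
hub = {"HF_MODEL_ID": "Qwen/Qwen2-7B-Instruct", "HF_TASK": "text-generation"}

model = HuggingFaceModel(
    env=hub,
    role=role,
    transformers_version="4.37",  # assumed supported combination;
    pytorch_version="2.1",        # verify against SageMaker docs
    py_version="py310",
)
predictor = model.deploy(initial_instance_count=1, instance_type="ml.g5.2xlarge")
print(predictor.predict({"inputs": "Hello, Qwen!"}))
```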
Building RAG With Qwen 2
863 views · 7 hours ago
Have questions or ideas? Join the Discord: discord.gg/R3dPsd2E Don't fall behind the AI revolution. I can help integrate machine learning/AI into your company: mosleh587084.typeform.com/to/HSBXCGvX Notebook: github.com/mosh98/RAG_With_Models/blob/main/Simple RAG/Qwen2_Lanchain_RAG_DEMO.ipynb Hugging Face model card: huggingface.co/Qwen/Qwen2-72B Ollama repo: ollama.com/library/qwen2 The Qwen 2 m...
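A minimal sketch of that LangChain + Ollama RAG wiring, assuming a local Ollama server with the qwen2 model pulled (`ollama pull qwen2`); the documents and FAISS store are stand-ins, not the notebook's exact code:

```python
# Hedged sketch: a tiny RAG pipeline over Qwen 2 served by Ollama.
from langchain_community.llms import Ollama
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.chains import RetrievalQA

docs = [
    "Qwen 2 supports context lengths up to 128K tokens.",
    "Qwen 2 was trained on data in 29 languages.",
]
# Index the documents with embeddings pulled from the same local model.
vectorstore = FAISS.from_texts(docs, OllamaEmbeddings(model="qwen2"))

qa = RetrievalQA.from_chain_type(
    llm=Ollama(model="qwen2"),
    retriever=vectorstore.as_retriever(),
)
print(qa.invoke({"query": "How long a context does Qwen 2 support?"}))
```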
Nvidia Nim: Deploy Open Source LLMs with 1 click
102 views · 9 hours ago
Have questions or ideas? Join the Discord: discord.gg/R3dPsd2E NVIDIA NIM (NVIDIA Inference Microservices) offers numerous benefits for businesses deploying AI models at scale. First, it leverages optimized inference engines tailored to specific models and hardware, enhancing latency and throughput while reducing operational costs and improving user experiences. NIM is part of the NVIDIA AI En...
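Since NIM containers expose an OpenAI-compatible API, a hedged sketch of querying one looks like this; the base URL, port, and served model name are assumptions that depend on the container you run:

```python
# Hedged sketch: querying a local NIM container through its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")  # assumed default port
resp = client.chat.completions.create(
    model="meta/llama3-8b-instruct",  # hypothetical served model name; check your container
    messages=[{"role": "user", "content": "Say hello."}],
)
print(resp.choices[0].message.content)
```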
Classifying Sentences Using Nomic Embed Text
72 views · 14 hours ago
Don't fall behind the AI revolution. I can help integrate machine learning/AI into your company. AI Freelancing: mosleh587084.typeform.com/to/HSBXCGvX Have questions or ideas? Join the Discord: discord.gg/R3dPsd2E This video shows how to get embeddings using nomic-embed-text locally, using Sentence Transformers, and how to classify them using statistical models from sklearn. Model card: huggingface.c...
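A hedged sketch of that embed-then-classify workflow, assuming the nomic-ai/nomic-embed-text-v1 checkpoint loaded with trust_remote_code (the model card also recommends task prefixes, omitted here for brevity); the toy texts and labels are illustrative:

```python
# Hedged sketch: embed sentences with nomic-embed-text, classify with sklearn.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

model = SentenceTransformer("nomic-ai/nomic-embed-text-v1", trust_remote_code=True)

texts = ["great product, loved it", "terrible, broke in a day",
         "works as advertised", "total waste of money"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

X = model.encode(texts)                      # one embedding per text
clf = LogisticRegression().fit(X, labels)    # simple statistical classifier
print(clf.predict(model.encode(["really happy with this"])))
```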
Fine tuning Embeddings Model
379 views · 1 day ago
Fine-tuning with the new Sentence Transformers v3.0. Notebook: github.com/mosh98/RAG_With_Models/blob/main/Fine-Tune/Fine_tuing_embeddings_model_DEMO.ipynb I can help integrate machine learning/AI into your company: mosleh587084.typeform.com/to/HSBXCGvX Have questions or ideas? Join the Discord: discord.gg/R3dPsd2E In this video you will learn: 1. Fine-tuning an embeddings model 2. What types of data...
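A minimal sketch of the Sentence Transformers v3.0 Trainer API the video covers; the tiny inline dataset and the base model are stand-ins for your own training pairs:

```python
# Hedged sketch: fine-tuning an embeddings model with Sentence Transformers v3.
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed base model for illustration

# (anchor, positive) pairs; a real run would load many more from your data.
train_dataset = Dataset.from_dict({
    "anchor": ["What is RAG?", "How do I deploy Qwen 2?"],
    "positive": [
        "Retrieval-augmented generation combines retrieval with an LLM.",
        "Qwen 2 can be deployed on AWS, GCP, or Azure.",
    ],
})

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=train_dataset,
    loss=MultipleNegativesRankingLoss(model),  # treats other in-batch positives as negatives
)
trainer.train()
```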
Fine-Tuning PaliGemma With Custom Data
189 views · 1 day ago
I can help integrate machine learning/AI into your company: mosleh587084.typeform.com/to/HSBXCGvX Notebook found here: github.com/mosh98/RAG_With_Models/blob/main/Fine-Tune/Fine_tune_PaliGemma_Demo.ipynb This video is about fine-tuning PaliGemma with a VQA dataset from Hugging Face. Unlock the power of AI with PaliGemma, Google's state-of-the-art vision-language model! This video dives deep into th...
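As context for the fine-tune, a hedged inference sketch with the transformers PaliGemma classes; the checkpoint name and image URL are illustrative assumptions, not the notebook's exact setup:

```python
# Hedged sketch: VQA-style inference with PaliGemma via transformers.
import requests
from PIL import Image
from transformers import PaliGemmaForConditionalGeneration, PaliGemmaProcessor

model_id = "google/paligemma-3b-mix-224"  # assumed checkpoint
processor = PaliGemmaProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)

image = Image.open(requests.get("https://example.com/cat.jpg", stream=True).raw)  # hypothetical URL
inputs = processor(
    text="answer en What animal is in the picture?", images=image, return_tensors="pt"
)
outputs = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(outputs[0], skip_special_tokens=True))
```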
Fine Tuning Mistral v3.0 With Custom Data
2.2K views · 14 days ago
Don't fall behind the AI revolution! I can help integrate machine learning/AI into your company: mosleh587084.typeform.com/to/HSBXCGvX Mistral fine-tuning: github.com/mistralai/mistral-finetune Have questions or ideas? Join the Discord: discord.gg/R3dPsd2E This video is about fine-tuning the Mistral v3 model with custom data. Mistral v3 is a new model with many benefits. The Mistral v3.0...
Building RAG with Mistral v0.3
996 views · 14 days ago
Don't fall behind the AI revolution! I can help integrate machine learning/AI into your company: mosleh587084.typeform.com/to/HSBXCGvX Code: github.com/mosh98/RAG_With_Models/blob/main/Simple RAG/Mistralv3_Lanchain_Ollama_RAG.ipynb This video is about getting embeddings from the Mistral v3 model using Ollama. Mistral v3 is a new model with many benefits. The Mistral v3.0 model brings sign...
Get Embeddings From Mistral v0.3 Locally
377 views · 14 days ago
Notebook: github.com/mosh98/RAG_With_Models/blob/main/Simple RAG/Mistralv3_Lanchain_Ollama_RAG.ipynb This video is about getting embeddings from the Mistral v3 model using Ollama. Mistral v3 is a new model with many benefits. The Mistral v3.0 model brings significant advancements in AI technology with its new architectural features, including Sliding Window Attention and Grouped Query Att...
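A minimal sketch of that embedding call through the Ollama Python client, assuming `ollama pull mistral` has been run locally:

```python
# Hedged sketch: pulling an embedding from a local Mistral model via Ollama.
import ollama

resp = ollama.embeddings(model="mistral", prompt="Sliding Window Attention in one line.")
vector = resp["embedding"]  # a plain list of floats
print(len(vector))          # embedding dimensionality
```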
Building AI Assistant for My SEO Work: Crew AI
328 views · 14 days ago
Let me know if I could improve my SEO agent; I'm still working on it. Don't fall behind the LLM revolution. I can help integrate machine learning/AI into your company. AI Freelancing: mosleh587084.typeform.com/to/HSBXCGvX What is Crew AI? CrewAI is a powerful framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI enables agents to w...
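A hedged single-agent sketch of the CrewAI pattern described here; the role, goal, and task strings are illustrative, not the video's actual SEO configuration, and CrewAI will need an LLM API key configured in the environment:

```python
# Hedged sketch: a one-agent, one-task CrewAI setup.
from crewai import Agent, Task, Crew

seo_agent = Agent(
    role="SEO analyst",
    goal="Suggest keyword improvements for a blog post",
    backstory="An assistant specialized in on-page SEO.",
)

task = Task(
    description="Review the post draft and propose five target keywords.",
    expected_output="A list of five keywords with a one-line rationale each.",
    agent=seo_agent,
)

crew = Crew(agents=[seo_agent], tasks=[task])
print(crew.kickoff())  # runs the task and returns the agent's output
```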
Text Classification Using Llama 3
828 views · 21 days ago
Notebook: github.com/mosh98/Embedding_Classification/blob/main/Llama3_Embeddings_classify DEMO.ipynb Don't fall behind the AI revolution. I can help integrate machine learning/AI into your company. AI Freelancing: mosleh587084.typeform.com/to/HSBXCGvX This video shows how to get embeddings using Llama 3 locally, using Ollama, and how to classify them using statistical models from sklearn. Why use Llama...
Get Embeddings From Falcon 2
137 views · 21 days ago
Don't fall behind the LLM revolution. I can help integrate machine learning/AI into your company. AI Freelancing: mosleh587084.typeform.com/to/HSBXCGvX Code: github.com/mosh98/RAG_With_Models/blob/main/GPT4o_Lanchain_RAG.ipynb Falcon 2 is an 11B-parameter model that supposedly outperforms Llama 3. The Falcon-11B model was developed by the Technology Innovation Institute (TII). This state-of-the-art langu...
Advanced RAG: Ensemble Retrieval
1.6K views · 21 days ago
Don't fall behind the LLM revolution. I can help integrate machine learning/AI into your company. AI Freelancing: mosleh587084.typeform.com/to/HSBXCGvX Code: github.com/mosh98/RAG_With_Models/blob/main/Simple RAG/GPT4o_Lanchain_RAG.ipynb When building RAG (Retrieval-Augmented Generation) applications, choosing the right retrieval parameters and strategies is crucial. Options range from chunk si...
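One such strategy is ensemble retrieval, sketched below with LangChain's EnsembleRetriever combining a BM25 keyword retriever with a dense FAISS retriever; the embedding model and the 50/50 weights are assumptions, and BM25Retriever needs the rank_bm25 package installed:

```python
# Hedged sketch: ensemble retrieval mixing keyword (BM25) and dense (FAISS) results.
from langchain.retrievers import EnsembleRetriever
from langchain_community.retrievers import BM25Retriever
from langchain_community.vectorstores import FAISS
from langchain_community.embeddings import OllamaEmbeddings  # assumes a local Ollama server

texts = [
    "RAG combines retrieval with generation.",
    "Chunk size affects retrieval quality.",
    "BM25 is a classic keyword-matching algorithm.",
]

bm25 = BM25Retriever.from_texts(texts)
dense = FAISS.from_texts(texts, OllamaEmbeddings(model="llama3")).as_retriever()

# Reciprocal-rank-fuse the two result lists with equal weight.
ensemble = EnsembleRetriever(retrievers=[bm25, dense], weights=[0.5, 0.5])
print(ensemble.invoke("What affects retrieval quality?"))
```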
Building RAG with GPT4o
2.3K views · 28 days ago
Generate RAGAS Testset
178 views · 28 days ago
Get Embeddings From Phi 3
560 views · 1 month ago
Evaluating RAG using Llama 3
823 views · 1 month ago
Get Embeddings From Llama3
3.5K views · 1 month ago
Building RAG with Llama 3 using LlamaIndex
679 views · 1 month ago
Building RAG with Phi 3 Locally
1.6K views · 1 month ago
Comparing Phi-3 with Llama3 and More
446 views · 1 month ago
Building RAG with Llama3 Locally
1.4K views · 1 month ago
Llama 3 vs Claude 3 Benchmark Comparison
601 views · 1 month ago
Use Claude 3 with Langchain
227 views · 2 months ago
Chat Interface on Langchain
105 views · 2 months ago
Advanced RAG Techniques
2.3K views · 4 months ago
Simply Explained: Retrieval-Augmented Generation
286 views · 6 months ago
Chat with your PDF using GPT-4
252 views · 9 months ago

Comments

  • @hackedbyBLAGH · 4 hours ago

    How about confidential data?

  • @TraVoltage · 12 hours ago

    Disliked, Jokic is the GOAT

  • @dacol9075 · 2 days ago

    Thanks. Do you know how to use PaliGemma without Hugging Face? For example, if I download the PaliGemma models to my PC and need to run inference on my GPU with no internet connection.

  • @ddhruvarora · 2 days ago

    Hi, thanks for a great video. It worked well, but how do I save those embeddings? I am using static data and it won't change.

  • @Atsolok · 2 days ago

    Why would I use this if I can just copy the link, insert it into ChatGPT, and ask the same question?

    • @moslehmahamud9574 · 2 days ago

      You're probably right. This is just a toy example. But what if you want to use various complex data sources and care about data security, would you still use ChatGPT...

  • @naveenpandey9016 · 2 days ago

    Please make more RAG applications with advanced techniques and larger-context models.

  • @natzzu9569 · 4 days ago

    Very nice tutorial. I was actually looking into NIM and found your video. I have a few questions I want to ask you, can I get your Discord?

    • @moslehmahamud9574 · 4 days ago

      Hi, glad you liked the videos. A lot of people have been asking about my Discord, so I made a new Discord server. You'll find me there: discord.gg/R3dPsd2E

  • @RedWhiteBlue209 · 7 days ago

    Could you please post your code used in this tutorial?

  • @farazfitness · 9 days ago

    And what if my data is not in that format? I have a few law judgments and it's not possible to format the data in that way.

  • @user-ty2gg8vv6m · 9 days ago

    Thanks for the video. But what models, with what configuration, could be trained on a free-tier GPU? Maybe Phi-3 mini?

    • @moslehmahamud9574 · 9 days ago

      I'll take a look, good idea. But Colab is the cheapest alternative on the market right now.

    • @user-ty2gg8vv6m · 9 days ago

      @@moslehmahamud9574 hmm, okay, thx!

  • @wilfredomartel7781 · 10 days ago

    Great video! Is it only for English?

    • @moslehmahamud9574 · 9 days ago

      Thanks! You can train on other languages too; just make sure to pick a multilingual model.

  • @jennilthiyam1261 · 10 days ago

    Hi. Thank you for this video. I am using a server, and we are not allowed to install anything on the host system; we have to create Docker containers. I installed Ollama in Docker with `docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama`, and I am able to run Llama 3 with `docker exec -it ollama ollama run llama3`. Now, could you please tell me how I can follow your approach to use Ollama for embeddings? I want to use Llama 3 from Ollama as the embedding model like you did in the video.

  • @HashtagTiluda · 11 days ago

    Create a video on how to fine-tune multimodal LLMs using custom image datasets.

  • @jeandujean7720 · 13 days ago

    Hi Mosleh, I tried to get an appointment with you but your link doesn't work.

    • @moslehmahamud9574 · 13 days ago

      Hi Jean, just opened a slot for you, the link should work now.

  • @RabeeQasem · 14 days ago

    thank you

  • @user-we1ph2dw2o · 15 days ago

    Good video. LBJ is far from GOAT though

    • @moslehmahamud9574 · 15 days ago

      Well, different folks, different strokes ;) Glad you liked the video

  • @Aditya_khedekar · 16 days ago

    Can you implement evaluation with Qdrant, RAGAS, or some other favorite framework: LangChain, Langfuse (an open-source alternative to LangSmith)?

    • @moslehmahamud9574 · 15 days ago

      Hi, thanks for the suggestions, I'll write them down. I did make some around RAGAS though; you can find them on the channel.

    • @Aditya_khedekar · 12 days ago

      @@moslehmahamud9574 Looking forward to it. Also check out Aporia for RAG hallucination; make a video on it if you can, as I don't have a company email to sign up.

  • @user-jk9gn5ox8v · 16 days ago

    You could have used a pre-trained embedding model for feature extraction. I believe the results would be better than those from a text-generation model such as Llama 3.

    • @moslehmahamud9574 · 15 days ago

      Great idea. I wanted to see if it would perform well with Llama 3.

    • @pranavsharma8281 · 14 days ago

      Could you suggest some pre-trained embedding models for feature extraction/classification?

    • @RedWhiteBlue209 · 7 days ago

      @@moslehmahamud9574 Any update on this?

    • @RedWhiteBlue209 · 7 days ago

      Could you please explain how to do it? I would like to test out this idea. Thanks!

    • @RedWhiteBlue209 · 7 days ago

      @@moslehmahamud9574 Do you have a video for this?

  • @aryanjain5535 · 16 days ago

    I couldn't see any day I can book; don't you have a Discord or something?

  • @hackedbyBLAGH · 17 days ago

    Thank you. Also, why use Mistral 3 vs. nomic-embed?

    • @moslehmahamud9574 · 16 days ago

      Good question, nomic-embed is a strong embeddings model. However, it could be useful to try the Mistral 3 model just to experiment. It could be better or worse for different use cases.

    • @xspydazx · 7 days ago

      If you have a pretrained model and you're using it, why would you use foreign embeddings from another model in your RAG system? Are you going to pay for something you already have? Or is your model not good enough to provide the embeddings it uses for prediction? Perhaps you should use the model's tokenizer as well? It really sounds silly when people have separate models for separate tasks when one model can handle the job. Crazy thinking!

    • @xspydazx · 7 days ago

      @@moslehmahamud9574 Using it for embeddings alone is not correct, but if this is the model you are using as your pretrained base, then it makes sense to use the same embeddings as the model itself, as they have been trained together. In fact, when you fine-tune your model you update these embeddings too, so you are training your embedding model as well. So if you have trained your model to handle code or other custom data, then obviously your embedding space is also trained for this, but the original base model may be very far from your fine-tuned model. Hence the embedding space is personalized to the model, especially for fine-tuned models of the same type; i.e., my Mistral and yours, both 7B Instruct, both trained on personal data, will have different embeddings!

    • @xspydazx · 7 days ago

      @@moslehmahamud9574 Nomic is a great embedding-only model... The question is how to replace your tokenizer with a sentence transformer created like this. Can this be the final tokenizer?

  • @RabeeQasem · 17 days ago

    Can you do a tutorial on how to fine-tune Falcon 2? There isn't much content on it

  • @jasonzhang6534 · 18 days ago

    Simple but awesome explanation.

  • @johnbarguti1025 · 19 days ago

    By specifying the model in the ollama.embeddings() call and in the OllamaEmbeddings class, what goes on behind the scenes and how is that model utilized in that scenario? Are there advantages to different models specified for the embedding process?

    • @moslehmahamud9574 · 18 days ago

      Very insightful question. I'm assuming the embeddings are extracted from the linear layer at the end of the Llama architecture (this is an assumption, of course). Regarding the advantage part, it depends on your use case, but using these embeddings can be an additional experiment. It could also be useful to check other embeddings models too.
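For reference, a minimal sketch contrasting the two calls the question mentions, assuming the ollama Python client and langchain-community are installed and a local Ollama server is running:

```python
# Both paths hit the same local Ollama server; OllamaEmbeddings is just
# LangChain's wrapper around the same embeddings endpoint.
import ollama
from langchain_community.embeddings import OllamaEmbeddings

# Direct client call:
vec1 = ollama.embeddings(model="llama3", prompt="hello world")["embedding"]

# LangChain wrapper, usable anywhere LangChain expects an Embeddings object:
vec2 = OllamaEmbeddings(model="llama3").embed_query("hello world")

print(len(vec1), len(vec2))
```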

  • @aryanjain5535 · 20 days ago

    Hey buddy, I've been following this channel for a while now. Can you help me improve my RAG? I have everything ready and working, I just want to optimize it.

    • @moslehmahamud9574 · 18 days ago

      Hi, I made some videos on optimizing RAG architecture. If you need help with something specific, feel free to book a meeting; the link is in the description. Will have some slots opening very soon.

  • @TesterOps09 · 20 days ago

    Hi Mosleh, thanks for this. I keep getting an error when trying this with Llama 3, though it worked perfectly with Llama 2. What could be the reason? I have both Llama 2 and Llama 3 installed. Actually, I tried first with Llama 3 and then installed and tried with Llama 2. With Llama 3 I keep getting an error that it cannot establish a connection, even though I can see that Llama 3 is running on port 11434.

    • @moslehmahamud9574 · 16 days ago

      Hmm, quite an unusual problem. Maybe try running Llama 3 on a different port?

  • @Tokyo_17 · 20 days ago

    Can you make a video of this in VS Code?

  • @user-yu4tp6gb4t · 21 days ago

    What tools do you use to generate the mind map?

  • @ehza · 21 days ago

    Thank you!

  • @amritsubramanian8384 · 22 days ago

    Great video, super useful :)

  • @alejandrogallardo1414

    You ran Llama 3 8B locally on a Mac?!

  • @WerexZenok · 26 days ago

    Thank you for your explanation. I will upload your video to ChatGPT so it can do the understanding part for me.

    • @moslehmahamud9574 · 25 days ago

      Thanks! Any part of the video that was not so easy to understand? Maybe I could improve it.

    • @WerexZenok · 25 days ago

      @@moslehmahamud9574 It was just a joke about overusing AI for everything. I don't even know what RAG is. :D

    • @moslehmahamud9574 · 25 days ago

      Yeah okay, I fell for that 😂

  • @ManthanNarang · 27 days ago

    Great video 🫡 Could I expect an advanced tutorial on building a chatbot using the GPT-4o API and implementing RAG?

    • @moslehmahamud9574 · 27 days ago

      Thanks! I'm working on an advanced RAG implementation as of writing. I did make an advanced RAG techniques video on my channel, though. Hope you find it useful.

    • @farazfitness · 26 days ago

      @@moslehmahamud9574 Do advanced RAGs search the internet if they can't find the answer in the data I've provided?

  • @johnbrandt5158 · 27 days ago

    Hey! What are your computer specs? Wondering how that may affect speed, either positively or negatively.

    • @moslehmahamud9574 · 27 days ago

      Hey! Using an M1 MacBook Pro (2020). It works decently for basic inference; training is a bit of a struggle, as expected. Let me know if you have any tips.

  • @techno-j5201 · 27 days ago

    How to solve it?

  • @techno-j5201 · 27 days ago

    Getting error: ReadTimeout

  • @wilfredomartel7781 · 29 days ago

    😊

  • @hackedbyBLAGH · 1 month ago

    Cool

  • @rohitutube928 · 1 month ago

    Hi man! Nice explanation. I was also trying to do that on my end, but I am getting some validation errors: `pydantic.v1.error_wrappers.ValidationError: 2 validation errors for LLMChain: llm instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)`. I don't know why it's happening. Can you please tell me how to resolve this?

  • @abhishekgoyal9397 · 1 month ago

    How is it different from VisualBERT?

  • @htrnhtrn6986 · 1 month ago

    Is it true that the embedding values of the three methods are different for the same sentence?

  • @LuigiBungaro · 1 month ago

    Thanks for sharing :) In the initial package installation I also had to run `pip install llama-index-embeddings-ollama` in order to run `from llama_index.embeddings.ollama import OllamaEmbedding`.

    • @moslehmahamud9574 · 1 month ago

      Thanks, that's correct! I'll add it to the notebook. Forgot that I had installed it before.

  • @Marcel.Hasslocher · 1 month ago

    In Brazilian Portuguese: czcams.com/video/0VtGC_N3Rvk/video.htmlsi=s5QDIeDhtLmYlZxb

  • @RedSky8 · 1 month ago

    What about without using GPT-4?

  • @vedforeal7835 · 1 month ago

    Hello, what if I wanted to use the RAG agent for a CRM website? How would that work?

  • @nfaza80 · 1 month ago

    I, too, have embarked upon the arduous journey of RAG, encountering a similar quandary wherein the retrieval mechanism, much to my chagrin, procures contextual information that bears little to no relevance to the query at hand. My hypothesis is that the embedding model, the very foundation upon which this edifice is built, is the root of this discrepancy. As my endeavors lie within the realm of the Indonesian language, I humbly beseech thee for any sagacious counsel or erudite suggestions that may illuminate the path towards a resolution.

    • @antonioreyes7296 · 1 month ago

      Phi-3 was specifically trained on a smaller quantity of higher-quality data than similarly performing models, which means there was a focus on English-language data in training.

    • @moslehmahamud9574 · 1 month ago

      As the other commenter mentioned, you could train the embeddings model on your domain, but also maybe test your retrieval using various evaluation metrics. Maybe RAGAS or something similar?

  • @Quizolo · 1 month ago

    Amazing habibi

  • @changtimwu · 1 month ago

    Hi! Thanks for the great example. Have you tried asking more questions, e.g. "Which team has the most 3-point shots?"

    • @moslehmahamud9574 · 1 month ago

      Hi, good question! I took the NBA postseason stats from ESPN and did multiple runs, but it unfortunately did not give a good response. Recall could be its weak spot.

  • @MarcBossYT · 1 month ago

    Nice

  • @path5940 · 2 months ago

    I was a little scared when your voice changed lol 2:18. But good video overall, thank you.

  • @mrrohitjadhav470 · 2 months ago

    Great ❤ Awesome lesson. Please look at fine-tuning an existing Ollama model with many documents; I looked everywhere and couldn't find a guide that doesn't use an API or LangChain.