RAG + Langchain Python Project: Easy AI/Chat For Your Docs

  • Published 29 May 2024
  • Learn how to build a "retrieval augmented generation" (RAG) app with Langchain and OpenAI in Python.
    You can use this to create chat-bots for your documents, books or files. You can also use it to build rich, interactive AI applications that use your data as a source.
    👉 Links
    🔗 Code: github.com/pixegami/langchain...
    📄 (Sample Data) AWS Docs: github.com/awsdocs/aws-lambda...
    📄 (Sample Data) Alice in Wonderland: www.gutenberg.org/ebooks/11
    📚 Chapters
    00:00 What is RAG?
    01:36 Preparing the Data
    05:05 Creating Chroma Database
    06:36 What are Vector Embeddings?
    09:38 Querying for Relevant Data
    12:47 Crafting a Great Response
    16:18 Wrapping Up
    #pixegami #python

Comments • 284

  • @mariannelacerdadutratheodo3141

    I am a Brazilian software engineering student, and I have so much to thank you for: all the time you have invested in this amazing content that has helped me so much!

    • @pixegami
      @pixegami  2 days ago

      Thank you! I’m very glad to hear it was helpful for you ☺️

  • @senthilkumarpalanisamy365

    Excellent video, very well explained in a very simple way. please do post more in Gen AI space.

  • @colegoddin9034
    @colegoddin9034 4 months ago +37

    Easily one of the best explained walk-throughs of LangChain RAG I’ve watched. Keep up the great content!

    • @pixegami
      @pixegami  4 months ago +1

      Thanks! Glad you enjoyed it :)

  • @MattSimmonsSysAdmin
    @MattSimmonsSysAdmin 5 months ago +3

    Absolutely epic video. I was able to follow along with no problems by watching the video and following the code. Really tremendous job, thank you so much! Definitely subscribing!

    • @pixegami
      @pixegami  5 months ago

      Thank you for your comment! I'm really glad to hear it was easy to follow - well done! Hope you build some cool stuff with it :)

  • @elijahparis3719
    @elijahparis3719 5 months ago +18

    I never comment on videos, but this was such an in-depth and easy to understand walkthrough! Keep it up!

    • @pixegami
      @pixegami  5 months ago

      Thank you :) I appreciate you commenting, and I'm glad you enjoyed it. Please go build something cool!

  • @wtcbd01
    @wtcbd01 2 months ago

    Thanks so much for this. Your teaching style is incredible and the subject is well explained.

  • @gustavojuantorena
    @gustavojuantorena 6 months ago +7

    Your channel is one of the best on YouTube. Thank you. Now I'll go watch the video.

  • @insan2080
    @insan2080 21 days ago +1

    This is what I was looking for! Thanks for the simplest explanation. There are some adjustments to the codebase due to updates, but it doesn't matter. Keep it up!

    • @pixegami
      @pixegami  18 days ago

      You're welcome, glad it helped! I try to keep the code accurate, but sometimes I think these libraries update/change really fast. I think I'll need to lock/freeze package versions in future videos so it doesn't drift.

  • @jim93m
    @jim93m 3 months ago +2

    Thank you, that was a great walk through very easy to understand with a great pace. Please make a video on LangGraph as well.

    • @pixegami
      @pixegami  3 months ago

      Thank you! Glad you enjoyed it. Thanks for the LangGraph suggestion. I hadn't noticed that feature before. Tech seems to move fast in 2024 :)

  • @StringOfMusic
    @StringOfMusic 21 days ago +1

    Fantastic, clear, concise and to the point. thanks so much for your efforts to share your knowledge with others.

    • @pixegami
      @pixegami  18 days ago

      Thank you, I'm glad you enjoyed it!

  • @lalalala99661
    @lalalala99661 25 days ago +1

    Clean, structured, easy-to-follow tutorial. Thank you for that!

    • @pixegami
      @pixegami  18 days ago

      Thank you! Glad you enjoyed it!

  • @michaeldimattia9015
    @michaeldimattia9015 5 months ago +2

    Great video! This was my first exposure to ChromaDB (worked flawlessly on a fairly large corpus of material). Looking forward to experimenting with other language models as well. This is a great stepping stone towards knowledge based expansions for LLMs. Nice work!

    • @pixegami
      @pixegami  5 months ago

      Really glad to hear you got it to work :) Thanks for sharing your experience with it as well - that's the whole reason I make these videos!

  • @narendaPS
    @narendaPS a month ago +1

    This is the best tutorial I have ever seen on this topic, thank you so much. Keep up the good work. Immediately subscribed.

    • @pixegami
      @pixegami  a month ago

      Glad you enjoyed it. Thanks for subscribing!

  • @geoffhirst5338
    @geoffhirst5338 2 months ago

    Great walkthrough. Now all that's needed is a revision to cope with the changes to the Langchain namespaces.

  • @stanTrX
    @stanTrX 5 days ago

    Thanks for this good beginner video: it covers the basics and is easy to follow (finally, someone) :))

  • @gustavstressemann7817
    @gustavstressemann7817 3 months ago +2

    Straight to the point. Awesome!

    • @pixegami
      @pixegami  3 months ago

      Thanks, I appreciate it!

  • @basicvisual7137
    @basicvisual7137 2 months ago +1

    Finally a good Langchain video that helps you understand it better. Do you have a video in mind on using a local LLM via Ollama with local embeddings to port the code?

  • @mao73a
    @mao73a 27 days ago +1

    This was so informative and well presented. Exactly what I was looking for. Thank you!

    • @pixegami
      @pixegami  18 days ago

      You're welcome, glad you liked it!

  • @jasonlucas3772
    @jasonlucas3772 a month ago +1

    This was excellent: easy to follow, has code, and very useful! Thank you.

    • @pixegami
      @pixegami  a month ago

      Thank you, I really appreciate it!

  • @MrValVet
    @MrValVet 6 months ago +3

    Thank you for this. Looking forward to tutorials on using the Assistants API.

    • @pixegami
      @pixegami  6 months ago +1

      You're welcome! And great idea for a new video :)

  • @kwongster
    @kwongster 3 months ago +1

    Awesome walkthrough, thanks for making this 🎉

    • @pixegami
      @pixegami  2 months ago

      Thank you! Glad you liked it.

  • @rikhavthakkar2015
    @rikhavthakkar2015 2 months ago +2

    Simply explained, with an engaging tone.
    I would also like to see a use case where the source of the vector data is a combination of files (PDF, DOCX, Excel, etc.) along with some database (an RDBMS or a file-based database).

    • @pixegami
      @pixegami  2 months ago

      Thanks! That's a good idea too. You can probably achieve that by detecting what type of file you are working with, and then using a different parser (document loader) for that type. Langchain should have custom document loaders for all the most common file types.

  • @theneumann7
    @theneumann7 2 months ago

    Perfectly explained 👌🏼

  • @RZOLTANM
    @RZOLTANM a month ago +1

    Really good. Thank you very much, sir. Articulated perfectly!

    • @pixegami
      @pixegami  a month ago

      Thank you! Glad you enjoyed it :)

  • @erikjohnson9112
    @erikjohnson9112 6 months ago +1

    I too am quite impressed with your videos (this is my 2nd one). I have now subscribed and I bet you'll be growing fast.

  • @mukundhachar303
    @mukundhachar303 2 days ago

    Thank you, this is an amazing video. I learned a lot of things from it.

  • @PoGGiE06
    @PoGGiE06 2 months ago +3

    Great explanation. Perhaps one criticism would be using OpenAI's embedding library: I would rather not be locked into their ecosystem, and I believe that free alternatives exist that are perfectly good! Would have loved a quick overview there, though.

    • @pixegami
      @pixegami  2 months ago +3

      Thanks for the feedback. I generally use OpenAI because I thought it was the easiest API for people to get started with. But I've received similar feedback from others who just want to use open-source (or their own) LLM engines.
      Feedback received, thank you :) Luckily, with something like Langchain, swapping out the LLM engine (e.g. the embedding functionality) is usually just a few lines of code.

    • @PoGGiE06
      @PoGGiE06 2 months ago

      @pixegami It's a pleasure :).
      Yes, everyone seems to be using OpenAI by default, because everyone is using ChatGPT. But there are lots of good reasons why one might not wish to get tied to OpenAI, Anthropic, or any other cloud-based provider, besides the mounting costs when developing applications with LLMs: data privacy/integrity, simplicity, reproducibility (ChatGPT is always changing, and that is out of your control), plus a general suspicion of non-open-source frameworks whose primary focus is often on wealth extraction rather than solution provision. There is not enough good material out there on how to create a basic RAG with vector storage using a local LLM, something that is very practical with smaller models, e.g. Mistral, DolphinCoder, Mixtral 8x7B, at least for putting together an MVP.
      Re: avoiding OpenAI:
      I've managed to use embed_model = OllamaEmbeddings(model="nomic-embed-text").
      I still get occasional OpenAI-related errors, but I gather that Ollama now supports mimicking the OpenAI API, including a 'fake' OpenAI key, so I am looking into that as a fix.
      ollama.com/blog/windows-preview
      I also gather that with llama-cpp one can specify model temperature and other configuration options, whereas with Ollama one is stuck with the configuration used in the modelfile when the Ollama-compatible model is made (if that is the correct terminology). So I may have to investigate that.
      I'm currently using llama-index because I am focused on RAG and don't need the flexibility of Langchain.
      Good tutorial in the llama-index docs: docs.llamaindex.ai/en/stable/examples/usecases/10k_sub_question/
      I'm also a bit sceptical that Langchain isn't another attempt to 'lock you in' to an ecosystem that can then be monetised, e.g. minimaxir.com/2023/07/langchain-problem/. I am still learning, so I don't have a real opinion yet. Very exciting stuff! Kind regards.

  • @ahmedamamou7221
    @ahmedamamou7221 a month ago +1

    Thanks a lot for this tutorial! Very well explained.

  • @thatoshebe5505
    @thatoshebe5505 3 months ago +1

    Thank you for sharing, this was the info I was looking for

  • @israeabdelbar8994
    @israeabdelbar8994 3 months ago +2

    Very helpful video! Keep going, you are the best!
    Thank you very much. I am looking forward to seeing a video about a virtual assistant performing actions by communicating with other applications via APIs.

  • @stevenla2314
    @stevenla2314 13 days ago

    Love your videos. I was able to follow along and build my own RAG. Can you expand more on this series and explain RAPTOR retrieval and how to implement it?

  • @chrisogonas
    @chrisogonas a month ago +1

    Well illustrated! Thanks

  • @bec_Divyansh
    @bec_Divyansh 11 days ago

    Great tutorial! Thanks!

  • @matthewlapinta7388
    @matthewlapinta7388 12 days ago +1

    This video was pure gold. Really grateful for the concise and excellent walkthrough. I have two additional questions regarding the metadata and the resulting chunk reference displayed. Can you return a screenshot of the referenced chunk/document, now that models are multimodal? Also, a document title or the ability to download the document would be a cool feature. Thanks so much in advance!

    • @pixegami
      @pixegami  7 days ago +1

      Glad you enjoyed it! I think if you want to display images, or link/share resources via the chunk, you can just embed it into the document metadata at chunk creation time.
      Upload your resource (e.g. an image) to something like Amazon S3, then put a download link into the metadata, for example.
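A minimal sketch of that suggestion. The record shape, field names, and S3-style URL below are placeholders, not the tutorial's actual schema:

```python
# Build a chunk record whose metadata carries a link back to the original
# resource (e.g. an image uploaded to object storage), so the link rides
# along with every search result. Field names and the URL are placeholders.
def make_chunk(text: str, source: str, image_url: str) -> dict:
    return {
        "text": text,
        "metadata": {
            "source": source,        # which document the chunk came from
            "image_url": image_url,  # surfaced alongside search results
        },
    }

chunk = make_chunk(
    "Down the rabbit hole...",
    source="alice_in_wonderland.md",
    image_url="https://my-bucket.s3.amazonaws.com/page-1.png",
)
```

When the chunk is returned as a search result, the app can render or link the resource straight from its metadata.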

  • @quengelbeard
    @quengelbeard 3 months ago +2

    Hi, by far the best video on Langchain + Chroma! :D
    Quick question: how would you update the Chroma database when you want to feed it new documents (while avoiding duplicates)?

    • @pixegami
      @pixegami  2 months ago

      Glad you liked it! Thank you. If you want to add to (or modify) the ChromaDB data, you should be able to do that after you've loaded up the DB:
      docs.trychroma.com/usage-guide#adding-data-to-a-collection

  • @elidumper52
    @elidumper52 2 months ago +1

    Super helpful, thank you!

  • @voulieav
    @voulieav 4 months ago +1

    Epic.
    Thank you for sharing this.

  • @lucasboscatti3584
    @lucasboscatti3584 4 months ago +1

    Huge class!!

  • @serafeiml1041
    @serafeiml1041 a month ago +1

    You got a new subscriber. Nice work!

  • @jianganghao1857
    @jianganghao1857 22 days ago +1

    Great tutorial, very clear

  • @MartinRodriguez-sx2tf
    @MartinRodriguez-sx2tf a month ago +1

    Very good, and looking forward to the next one 🎉

  • @williammariasoosai1153
    @williammariasoosai1153 4 months ago +1

    Very well done! Thanks

  • @aiden9990
    @aiden9990 4 months ago +1

    Perfect, thank you!

  • @theobelen-halimi2862
    @theobelen-halimi2862 4 months ago +2

    Very clear video and tutorial! Good job! Just one question: is it possible to use an open-source model rather than OpenAI?

    • @pixegami
      @pixegami  3 months ago +1

      Yes! Check out this video on how to use models other than OpenAI: czcams.com/video/HxOheqb6QmQ/video.html
      And here is the official documentation on how to use/implement different LLMs (including your own open-source one): python.langchain.com/docs/modules/model_io/llms/

  • @chandaman95
    @chandaman95 2 months ago +1

    Amazing video, thank you.

  • @litttlemooncream5049
    @litttlemooncream5049 3 months ago +1

    Helpful if I want to do analysis on properly organized documents.

    • @pixegami
      @pixegami  2 months ago

      Yup! I think it could be useful for searching through unorganised documents too.

  • @shapovalentine
    @shapovalentine 4 months ago +1

    Useful, Nice, Thank You 🤩🤩🤩

    • @pixegami
      @pixegami  3 months ago

      Glad to hear it was useful!

  • @tinghaowang-ei7kv
    @tinghaowang-ei7kv a month ago +1

    Nice, how pretty that is.

  • @kewalkkarki6284
    @kewalkkarki6284 5 months ago +1

    This is Amazing 🙌

    • @pixegami
      @pixegami  5 months ago

      Thank you! Glad you liked it :)

  • @shikharsaxena9989
    @shikharsaxena9989 28 days ago +1

    Best explanation of RAG.

  • @frederikklein1806
    @frederikklein1806 4 months ago +1

    This is a really good video, thank you so much! Out of curiosity, why do you use iterm2 as a terminal and how did you set it up to look that cool? 😍

    • @pixegami
      @pixegami  4 months ago +1

      I use iTerm2 for videos because it looks and feels familiar to my viewers. When I work on my own, I use Warp (my terminal setup and theme are explained here: czcams.com/video/ugwmH_xzkCA/video.html)
      And if you're using Ubuntu, I have a terminal setup video for that too: czcams.com/video/UvY5aFHNoEw/video.html

  • @bcippitelli
    @bcippitelli 6 months ago +1

    thanks dude!

  • @pojomcbooty
    @pojomcbooty 2 months ago +3

    VERY well explained. Thank you so much for releasing this level of education on YouTube!!

    • @pixegami
      @pixegami  2 months ago +1

      Glad you enjoyed it!

  • @mohanraman
    @mohanraman 2 months ago

    This is an awesome video. Thank you!!! I am curious how to leverage these technologies with structured data, like business data that's stored in tables. I'd appreciate any videos about that.

  • @AlejandroLopez-mm4sg

    Thanks!

  • @corbin0dallas
    @corbin0dallas a month ago +1

    Great tutorial, thanks! My only feedback is that any LLM already knows everything about Alice in Wonderland.

    • @SongforTin
      @SongforTin a month ago +1

      You can create custom apps for businesses using their own documents = a huge business opportunity, if it really works.

    • @pixegami
      @pixegami  18 days ago

      Yeah, that's a really good point. What I really needed was a data source that was easy to understand but would not appear in the base knowledge of any LLM (I've learnt that now for my future videos).

  • @xspydazx
    @xspydazx 27 days ago

    Question: once a vector store is loaded, how can we export a dataset from the store to be used as a fine-tuning object?

  • @pampaniyavijay007
    @pampaniyavijay007 25 days ago +1

    This is a very simple and useful video for me 🤟🤟🤟

    • @pixegami
      @pixegami  18 days ago

      Thank you! I'm glad to hear that.

  • @slipthetrap
    @slipthetrap 5 months ago +19

    As others have asked: could you show how to do it with an open-source LLM? Also, instead of Markdown (.md), can you show how to use PDFs? Thanks.

    • @pixegami
      @pixegami  5 months ago +9

      Thanks :) It seems to be a popular topic, so I've added it to my list of upcoming content.

    • @danishammar.official
      @danishammar.official 2 months ago

      If you make a video on the above request, kindly put the link in the description; it would be good for all users.

    • @raheesahmed56
      @raheesahmed56 2 months ago +3

      Instead of the .md extension you can simply use the .txt or .pdf extension; just replace the file extension.

    • @yl8908
      @yl8908 a month ago

      Yes, please share how to work with PDFs directly instead of .md files. Thanks!

  • @user-fj4ic9sq8e
    @user-fj4ic9sq8e 2 months ago

    Hello,
    thank you so much for this video.
    I have a question about summarization queries over documents with an LLM. For example, the vector database has thousands of documents with a date property, and I want to ask the model how many documents I received in the last week.

  • @Chisanloius
    @Chisanloius 27 days ago +2

    Great level of knowledge and detail.
    Please, where is your OpenAI key stored?

    • @pixegami
      @pixegami  18 days ago

      Thank you! I normally just store the OpenAI key in the environment variable `OPENAI_API_KEY`. See here for storage and safety tips: help.openai.com/en/articles/5112595-best-practices-for-api-key-safety
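A minimal sketch of reading the key that way (the `get_api_key` helper name is illustrative; `OPENAI_API_KEY` is the variable the OpenAI client libraries conventionally read):

```python
import os

# Read the key from the environment instead of hard-coding it in source.
# Fail fast with a clear message if it hasn't been set.
def get_api_key() -> str:
    key = os.environ.get("OPENAI_API_KEY", "")
    if not key:
        raise RuntimeError("Set OPENAI_API_KEY before running the app.")
    return key
```

Set it in the shell (`export OPENAI_API_KEY=...`) or a `.env` file that stays out of version control.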

  • @nachoeigu
    @nachoeigu a month ago +1

    You gained a new subscriber. Thank you, amazing content! Only one question: what about the cost associated with this software? How much does it consume per request?

    • @pixegami
      @pixegami  a month ago

      Thank you, welcome! Pricing is based on which AI model you use. In this video we use OpenAI, so check the pricing here: openai.com/pricing
      1 token ~= 1 word. So to embed a document with 10,000 words (tokens) using "text-embedding-3-large" ($0.13 per 1M tokens), it's about $0.0013. Then apply the same calculation to the prompt/response for "gpt-4" or whichever model you use for the chat.
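That arithmetic, as a sketch (rates copied from the reply above; check openai.com/pricing for current numbers, and note the 1 token ~= 1 word rule is only an approximation):

```python
# Back-of-the-envelope embedding cost at a given per-million-token rate.
def embedding_cost_usd(num_tokens: int, usd_per_million_tokens: float = 0.13) -> float:
    return num_tokens / 1_000_000 * usd_per_million_tokens

# A 10,000-word document at $0.13 per 1M tokens: roughly $0.0013.
cost = embedding_cost_usd(10_000)
```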

  • @seankim6080
    @seankim6080 2 months ago

    Thanks so much! This is super helpful for better understanding RAG. The only thing is, I'm still not sure how to run the program I cloned from your GitHub repository via the Windows terminal. I will try on my own, but if you could provide any guidance or sources (YouTube links, anything like that), it would be much appreciated.

  • @user-md4pp8nv7u
    @user-md4pp8nv7u a month ago +1

    Very great!! Thank you!

  • @SantiYounger
    @SantiYounger 2 months ago +2

    Thanks for the video, this looks great! But I tried to implement it, and it seems the Langchain packages needed are no longer available. Has anyone had any luck getting this to work?
    Thanks

  • @FrancisRodrigues
    @FrancisRodrigues 2 months ago +1

    That's the best and most reliable content about LangChain I've ever seen, and it only took 16 minutes.

    • @pixegami
      @pixegami  2 months ago +1

      Glad you enjoyed it! I try to keep my content short and useful because I know everyone is busy these days :)

    • @Shwapx
      @Shwapx a month ago

      @pixegami Hey, great work. Can we have an updated version with the new Langchain imports? It's throwing all kinds of import errors because they've changed.

  • @cindywu3265
    @cindywu3265 3 months ago +1

    Thanks for sharing the examples with the OpenAI embedding model. I'm trying to practice using HuggingFaceEmbeddings because it's free, but I wanted to check the evaluation metrics, like the apple and orange example you showed. Do you know if that exists, by any chance?

    • @pixegami
      @pixegami  2 months ago

      Yup, you should be able to override the evaluator (or extend your own) to use whichever embedding system you want: python.langchain.com/docs/guides/evaluation/comparison/custom
      But at the end of the day, if you can already get the embedding, then evaluation is usually just a cosine similarity distance between the two, so it's not too complex if you need to calculate it yourself.
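If you do compute it yourself, cosine similarity is only a few lines of standard-library code. A minimal sketch:

```python
import math

# Cosine similarity between two embedding vectors: 1.0 means identical
# direction (very similar), 0.0 means orthogonal (unrelated).
def cosine_similarity(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

Feed it any two vectors produced by the same embedding model; comparing vectors from different models is not meaningful.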

  • @user-iz7wi7rp6l
    @user-iz7wi7rp6l 5 months ago +1

    First, thank you very much! Could you also show how to apply memory of various kinds?

    • @pixegami
      @pixegami  5 months ago +1

      Thanks! I haven't looked at how to use the Langchain memory feature yet so I'll have to work on that first :)

    • @user-iz7wi7rp6l
      @user-iz7wi7rp6l 5 months ago +1

      @pixegami OK, I have implemented memory and other features as well, and got it working on Windows too after some monster errors. Thanks once again for the clear working code (used in production).
      Hope to see more in the future.

  • @JJaitley
    @JJaitley 3 months ago +1

    @pixegami What are your suggestions on cleaning company docs before chunking? Some of the challenges are how to handle the index pages across multiple PDFs, as well as the headers and footers. You should definitely make a video on cleaning a PDF before chunking; it's much needed.

    • @pixegami
      @pixegami  3 months ago

      That's a tactical question that will vary from doc to doc. It's a great question and a great use case for creative problem solving, though. Thanks for the suggestion and video idea.

  • @naveeng2003
    @naveeng2003 4 months ago +2

    How did you rip the AWS documentation?

  • @user-wm2pb3hi7p
    @user-wm2pb3hi7p 2 months ago

    How can we make a RAG system that answers over both structured and unstructured data?
    For example, a user uploads a CSV and a text file and starts asking questions; the chatbot then has to answer from both sources.
    (Structured data should be stored in a separate database and passed to a tool for processing; unstructured data should go in the vector database.)
    How can we do this effectively?

  • @AdandKidda
    @AdandKidda 2 months ago

    Hi, thanks for such ultimate knowledge sharing.
    I have a use case:
    1. Can we perform some action (call an API) as a response?
    2. How can we use Mistral and open-source embeddings for this purpose?

  • @lukashk.1770
    @lukashk.1770 a month ago +1

    Do these tools also work with code? For example, with a big codebase, querying that codebase to ask how XYZ is implemented would be really useful. Or generating docs, etc.

    • @pixegami
      @pixegami  a month ago

      I think the idea of a RAG app should definitely work with code.
      But you'll probably need an intermediate step to translate the code into something close to what you'd want to query for (e.g. translate a function into a text description). Embed the descriptive element, but have it refer back to the original code snippet.
      It sounds like a really interesting idea to explore for sure!

  • @stanTrX
    @stanTrX 5 days ago

    I want to ask you two things: 1) Do I have to re-chunk everything when I add a new file to my data folder? 2) How can I see the source, page number, and filename of the query result?

  • @yangsong8812
    @yangsong8812 3 months ago +1

    Would love to hear your thoughts on how to use evaluation to keep LLM output in check. Can we set up an evaluation framework for this?

    • @pixegami
      @pixegami  2 months ago

      There's currently a lot of different research and tools on how to evaluate the output - I don't think anyone's figured out the standard yet. But stuff like this is what you'd probably want to look at: cloud.google.com/vertex-ai/generative-ai/docs/models/evaluate-models

  • @moriztrautmann8231
    @moriztrautmann8231 a month ago +1

    Thank you very much for the video. It seems that adding the chunks to the Chroma database takes a really long time. If I just save the embeddings to a JSON file it takes a few seconds, but adding them to Chroma takes something like 20 minutes... Is there something I am missing? I am doing this on a document only about one page long.

    • @pixegami
      @pixegami  18 days ago

      Hey, thanks for commenting!
      It does seem like something is wrong: generating embeddings to a JSON file and adding them via Chroma should normally take about the same amount of time for the same amount of text, so the two paths must be doing different things.
      Have you tried using different embedding functions? Or is your ChromaDB saved on a slower disk drive?

  • @uchiha_mishal
    @uchiha_mishal 17 days ago +1

    Nicely explained, but I had to go through a ton of documentation to use this project with AzureOpenAI instead of OpenAI.

    • @pixegami
      @pixegami  17 days ago

      Thanks! I took a look at the Azure OpenAI documentation for Langchain, and you're right: it doesn't exactly look straightforward: python.langchain.com/v0.1/docs/integrations/llms/azure_openai/

  • @vlad910
    @vlad910 5 months ago +1

    Thank you for this very instructive video. I am looking at embedding some research documents from sources such as PubMed or Google Scholar. Is there a way for the embedding to use website data instead of locally stored text files?

    • @pixegami
      @pixegami  5 months ago +1

      Yes, you can basically load any type of text data if you use the appropriate document loader: python.langchain.com/docs/modules/data_connection/document_loaders/
      Text files are an easy example, but there's examples of Wikipedia loaders in there too (python.langchain.com/docs/integrations/document_loaders/). If you don't find what you are looking for, you can implement your own Document loader, and have it get data from anywhere you want.

    • @jessicabull3918
      @jessicabull3918 2 months ago

      @pixegami Exactly the question and answer I was looking for, thanks!

  • @NahuelD101
    @NahuelD101 5 months ago +2

    Very nice video! What theme do you use to make VS Code look like this? Thanks.

    • @pixegami
      @pixegami  5 months ago +1

      I use Monokai Pro :)

  • @user-wi8ne4qb6u
    @user-wi8ne4qb6u 5 months ago +1

    Excellent coding! Working wonderfully! I appreciate it. One question, please: what's the difference if I change from .md to .pdf?

    • @pixegami
      @pixegami  5 months ago

      Thanks, glad you enjoyed it. It should still work fine :) You might just need to use a different "Document Loader" from Langchain: python.langchain.com/docs/modules/data_connection/document_loaders/pdf

  • @hoangng16
    @hoangng16 a month ago

    I want to hear your thoughts on which approach is likely the better one:
    1. Chop the document into multiple chunks and convert the chunks to vectors.
    2. Convert the whole document to a single vector.
    Thank you

    • @pixegami
      @pixegami  a month ago

      I think it really depends on your use case and the content. The best way to know is to have a way to evaluate (test) the quality of the results.
      In my own use cases, I find that a chunk length of around 3,000 characters works quite well (you need enough context for the content to make sense). I also like to concatenate some context info into the chunk (like "this is page 5 about XYZ, part of ABC").
      But I haven't done enough research into this to really give a qualified answer. Good luck!
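That chunk-contextualization trick can be sketched like this (the header fields, page, topic, and source, are illustrative, not the project's actual metadata):

```python
# Prepend a short context header to a chunk before embedding it, so the
# chunk still makes sense when retrieved in isolation.
def contextualize_chunk(text: str, page: int, topic: str, source: str) -> str:
    header = f"This is page {page} about {topic}, part of {source}."
    return f"{header}\n{text}"

chunk = contextualize_chunk(
    "The embeddings are stored...",
    page=5, topic="vector storage", source="the RAG guide",
)
```

The header is embedded together with the body, so queries mentioning the topic or source also match the chunk.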

  • @ailenrgrimaldi6050
    @ailenrgrimaldi6050 2 months ago +1

    Thank you for this video. Is NLTK required to do this?

    • @pixegami
      @pixegami  2 months ago +1

      The NLTK library? I don't think I had to use it in this project; a lot of the other libraries probably give you all that functionality at a higher level of abstraction already.

  • @annialevko5771
    @annialevko5771 3 months ago +1

    Hey, nice video! I was just wondering: what's the difference between doing it like this and using chains? I noticed you didn't use any chain and directly called predict with the prompt 🤔

    • @pixegami
      @pixegami  3 months ago

      With chains, I think you have a little more control (especially if you want to do things in a sequence). But since that wasn't the focus of this video, I just did it using `predict()`.

  • @stanTrX
    @stanTrX 4 days ago

    Another one: does it also work for PDF or EPUB files? I had never heard of *.md Markdown files before.

  • @jimg8296
    @jimg8296 2 months ago +1

    Thank you SO MUCH! Exactly what I was looking for. Your presentation was easy to understand and very complete. 5 STARS! Not to be greedy, but I'd love to see this running 100% locally.

    • @pixegami
      @pixegami  2 months ago +2

      Glad it was helpful! Running local LLM apps is something I get asked quite a lot about and so I do actually plan to do a video about it quite soon.

    • @jessicabull3918
      @jessicabull3918 2 months ago

      @pixegami Yes please!

  • @julianm3706
    @julianm3706 a month ago +1

    There is something that I did not understand: in the video, you mentioned that the closer the score is to zero, the more accurate the result will be. But in your GitHub repo, you have this code:
    if len(results) == 0 or results[0][1] < 0.7:
        print(f"Unable to find matching results.")
    So it seems that this code does the opposite: if the score is close to zero, the result is wrong. Perhaps I understood incorrectly. Could you please help me out?

    • @pixegami
      @pixegami  a month ago +1

      I see, that's a good catch! I should have been clearer about that. Those are actually two different "scores" being used.
      1) Distance: the first one I talked about is the "distance" score, either a Euclidean distance or a cosine distance. For "distance" scores, the closer to 0, the more similar the items.
      2) Relevance: the second score is a "relevance" score. I don't know exactly how it's calculated, because it's wrapped inside a Langchain/ChromaDB helper function. But for that score, the higher the better; they do something to invert it. The scale also varies depending on which embedding is used, I think, so it could range from 0 to 1, or even be in ranges like 0 to 10k.
      They are both "scores" but measure different things, and we used different helper functions to calculate them. Hope that clarifies it!
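To illustrate the two conventions side by side: for a distance score, closer to 0 means more similar, while a relevance score inverts that so higher is better. The simple `1 - distance` conversion below is one common choice for cosine distance; the exact formula inside the library helper may differ.

```python
# Convert a distance score (0 = identical) into a relevance score
# (1 = most relevant). Assumes a distance roughly in the [0, 1] range,
# as with cosine distance; other metrics need a different rescaling.
def relevance_from_distance(distance: float) -> float:
    return 1.0 - distance

# A small distance (similar) maps to a high relevance, and vice versa.
assert relevance_from_distance(0.05) > relevance_from_distance(0.9)
```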

  • @user-iz7wi7rp6l
    @user-iz7wi7rp6l 5 months ago

    Has anyone faced a Tesseract error on Windows? It works well on Linux.

  • @MichaelChenAdventures
    @MichaelChenAdventures 3 months ago +1

    Does the data have to be in .md format? Also, how do you prep the data beforehand?

    • @pixegami
      @pixegami  2 months ago

      The data can be anything you want. Here's a list of all the Document loaders supported in Langchain (or you can even write your own): python.langchain.com/docs/modules/data_connection/document_loaders/
      The level of preparation is up to you, and it depends on your use case. For example, if you want to split your embeddings by chapters or headers (rather than some length of text), your data format will need a way to surface that.

  • @canasdruid
    @canasdruid 29 days ago +1

    Which is more advisable when I work with PDF documents: transforming them into text using a loader like PyPDFLoader, or transforming them into another format that is easier to read?

    • @pixegami
      @pixegami  18 days ago

      I haven't done a deep dive on what's the most optimal way to use PDF data yet. I think it really depends on the data in the PDF, and what the chunk outputs look like. You probably need to do a bit of experimentation.
      If you have specific patterns with your PDFs (like lots of tables or columns) I'd probably try to pre-process them somehow first before feeding them into the document loader.

  • @sherifsheryo623
    @sherifsheryo623 13 hours ago

    Can I use PDFs, or should I convert them first to markdown format?

  • @fengshi9462
    @fengshi9462 5 months ago +1

    Hi, your video is so good. I just want to know: if I want to automatically update my document in the production environment, keep the query service running without interruption, and always use the latest document as the source, how can I do this by changing the code? ❤

    • @pixegami
      @pixegami  5 months ago +1

      Ah, if you change the source document, you actually have to generate a new embedding and add it to the RAG database (the Chroma DB here). So you would have to figure out which piece of the document changed, then create a new entry for that in the database. I don't have a code example right now, but it's definitely possible.
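      A minimal sketch of one way to find what changed (a hypothetical helper, assuming you keep the previous version's chunks around): hash each chunk and re-embed only the ones whose hash differs.

```python
import hashlib


def changed_chunks(old_chunks: list[str], new_chunks: list[str]) -> dict[int, str]:
    """Compare two lists of text chunks and return the index and text
    of chunks that are new or changed, so only those need to be
    re-embedded and written to the vector DB."""
    old_hashes = [hashlib.sha256(c.encode()).hexdigest() for c in old_chunks]
    changed = {}
    for i, text in enumerate(new_chunks):
        h = hashlib.sha256(text.encode()).hexdigest()
        if i >= len(old_hashes) or old_hashes[i] != h:
            changed[i] = text
    return changed

print(changed_chunks(["a", "b"], ["a", "B", "c"]))  # {1: 'B', 2: 'c'}
```

      Since embedding is the expensive step, skipping unchanged chunks keeps the update fast enough to run without stopping the query service.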

  • @spicytuna08
    @spicytuna08 2 months ago

    Would this work for a general question such as: "Please summarize the book in 5 sentences"?

  • @hoangng16
    @hoangng16 a month ago +1

    Thank you for a great video. What if I have already done the word embeddings, and in the future I have some updates to the data?

    • @pixegami
      @pixegami  a month ago

      Thanks! I'm working on a video to explain techniques like that. But in a nutshell, you'll need to attach an ID to each document you add to the DB (derived deterministically from your page meta-data) and use that to update entries that change (or get added): docs.trychroma.com/usage-guide#updating-data-in-a-collection
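      A small sketch of the deterministic-ID idea (the function name is hypothetical): derive the ID from the chunk's metadata, so re-ingesting the same file produces the same IDs and updated chunks overwrite their old entries instead of creating duplicates.

```python
import hashlib


def chunk_id(source: str, page: int, chunk_index: int) -> str:
    """Derive a stable ID from chunk metadata. The same
    (source, page, chunk_index) always yields the same ID, which can
    then be passed to the vector DB's update/upsert call."""
    key = f"{source}:{page}:{chunk_index}"
    return hashlib.sha256(key.encode()).hexdigest()[:16]

# Deterministic: re-running the ingest pipeline maps each chunk
# to the same database entry.
print(chunk_id("alice.md", 3, 0) == chunk_id("alice.md", 3, 0))  # True
```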

  • @sunnysk43
    @sunnysk43 6 months ago +3

    Amazing video - directly subscribed to your channel ;-) Can you also provide an example of using your own LLM instead of OpenAI?

    • @pixegami
      @pixegami  6 months ago +1

      Yup! Great question. I'll have to work on that, but in the meantime here's a page with all the LLM supported integrations: python.langchain.com/docs/integrations/llms/

  • @user-cc3ev7de9v
    @user-cc3ev7de9v 2 months ago

    Which model are you using in this?

  • @mohsenghafari7652
    @mohsenghafari7652 a month ago +1

    Hi dear friend.
    Thank you for your efforts.
    How can I use this tutorial with PDFs in another language (for example, Persian)? What would be the best approach?
    I have made many attempts and tested different models, but the results when asking questions about the PDFs are not good or accurate!
    Thank you for the explanation.

    • @pixegami
      @pixegami  a month ago

      Thank you for your comment. For good performance in other languages, you'll probably need to find an LLM model that is optimized for that language.
      For Persian, I see this result: huggingface.co/MaralGPT/Maral-7B-alpha-1

  • @FrancisRodrigues
    @FrancisRodrigues 2 months ago +1

    Please, I'd like to see a recommendation model (products, images, etc.) based on different sources; it could be scraped from webpages. Something to use in e-commerce.

    • @pixegami
      @pixegami  2 months ago

      Product recommendations are a good idea :) Thanks for the suggestion, I'll add it to my list.

  • @kashishvarshney2225
    @kashishvarshney2225 5 months ago +1

    Hi bro, I am creating a chatbot which takes data from a third-party API, which means there is less data, but it's dynamic data for every call. So should I use a RAG approach? If not, then suggest a better approach.

    • @pixegami
      @pixegami  5 months ago

      Hmm, I think RAG probably isn't the right approach. But it depends... For example if you get +3000 characters of dynamic data each call, then it might still be helpful to generate a vector DB on the spot so you can use RAG to narrow down the answer. But it's going to make each call a lot slower.
      But if you have way less data than that (say

  • @mlavinb
    @mlavinb 4 months ago +1

    Great content! Thanks for sharing.
    Can you suggest a Chat GUI to connect?

    • @pixegami
      @pixegami  4 months ago

      If you want a simple, Python based one, try Streamlit (streamlit.io/). I also have a video about it here: czcams.com/video/D0D4Pa22iG0/video.html

  • @AjibadeYakub
    @AjibadeYakub 22 days ago +1

    This is great work, thank you!
    How can I use the result of an SQL query or a dataframe, rather than text files?

    • @pixegami
      @pixegami  18 days ago

      Yup, looks like there is a Pandas Dataframe Document loader you can use with Langchain: python.langchain.com/v0.1/docs/integrations/document_loaders/pandas_dataframe/
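      For SQL results specifically, here's a sketch of the same idea without the library (the helper is hypothetical): turn each row into a text-plus-metadata pair, which mirrors the shape a document loader produces before embedding.

```python
def rows_to_documents(rows: list[dict], content_key: str) -> list[tuple[str, dict]]:
    """Convert query-result rows (dicts) into (page_content, metadata)
    pairs: one column becomes the text to embed, and the remaining
    columns are kept as metadata for filtering and citations."""
    docs = []
    for row in rows:
        content = str(row[content_key])
        metadata = {k: v for k, v in row.items() if k != content_key}
        docs.append((content, metadata))
    return docs

rows = [{"id": 1, "review": "Great product"}, {"id": 2, "review": "Too slow"}]
print(rows_to_documents(rows, "review"))
```

      The Pandas loader works the same way: you tell it which column holds the page content, and the rest of the row travels along as metadata.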