RAG + Langchain Python Project: Easy AI/Chat For Your Docs
- Added 29 May 2024
- Learn how to build a "retrieval augmented generation" (RAG) app with Langchain and OpenAI in Python.
You can use this to create chat-bots for your documents, books or files. You can also use it to build rich, interactive AI applications that use your data as a source.
👉 Links
🔗 Code: github.com/pixegami/langchain...
📄 (Sample Data) AWS Docs: github.com/awsdocs/aws-lambda...
📄 (Sample Data) Alice in Wonderland: www.gutenberg.org/ebooks/11
📚 Chapters
00:00 What is RAG?
01:36 Preparing the Data
05:05 Creating Chroma Database
06:36 What are Vector Embeddings?
09:38 Querying for Relevant Data
12:47 Crafting a Great Response
16:18 Wrapping Up
#pixegami #python
I am a Brazilian software engineering student, and I have so much to thank you for - all the time you have invested in this amazing content has helped me so much!!
Thank you! I’m very glad to hear it was helpful for you ☺️
Excellent video, very well explained in a very simple way. please do post more in Gen AI space.
Easily one of the best explained walk-throughs of LangChain RAG I’ve watched. Keep up the great content!
Thanks! Glad you enjoyed it :)
Absolutely epic video. I was able to follow along with no problems by watching the video and following the code. Really tremendous job, thank you so much! Definitely subscribing!
Thank you for your comment! I'm really glad to hear it was easy to follow - well done! Hope you build some cool stuff with it :)
I never comment on videos, but this was such an in-depth and easy to understand walkthrough! Keep it up!
Thank you :) I appreciate you commenting, and I'm glad you enjoyed it. Please go build something cool!
Thanks so much for this. Your teaching style is incredible and the subject is well explained.
Your channel is one of the best on YouTube. Thank you. Now I'll go watch the video.
This is what I look for! Thanks for the simplest explanation. There are some adjustments on the codebase during the updates but it doesn't matter. Keep it up!
You're welcome, glad it helped! I try to keep the code accurate, but sometimes I think these libraries update/change really fast. I think I'll need to lock/freeze package versions in future videos so it doesn't drift.
Thank you, that was a great walk through very easy to understand with a great pace. Please make a video on LangGraph as well.
Thank you! Glad you enjoyed it. Thanks for the LangGraph suggestion. I hadn't noticed that feature before. Tech seems to move fast in 2024 :)
Fantastic, clear, concise and to the point. Thanks so much for your efforts to share your knowledge with others.
Thank you, I'm glad you enjoyed it!
Clean, structured, easy-to-follow tutorial. Thank you for that!
Thank you! Glad you enjoyed it!
Great video! This was my first exposure to ChromaDB (worked flawlessly on a fairly large corpus of material). Looking forward to experimenting with other language models as well. This is a great stepping stone towards knowledge based expansions for LLMs. Nice work!
Really glad to hear you got it to work :) Thanks for sharing your experience with it as well - that's the whole reason I make these videos!
this is the best tutorial i have ever seen on this topic, thank you so much, Keep up the good work. Immediately subscribed.
Glad you enjoyed it. Thanks for subscribing!
Great walkthrough; now all that's needed is a revision to cope with the changes to the Langchain namespaces.
What changes have been made? I can't get this to work :-(
Thanks for this good beginner video covering the basics; easy to follow (finally, someone) :))
Straight to the point. Awesome!
Thanks, I appreciate it!
Finally, a good Langchain video for understanding this better. Do you have a video in mind about porting the code to a local LLM using Ollama and local embeddings?
This was so informative and well presented. Exactly what I was looking for. Thank you!
You're welcome, glad you liked it!
This was excellent: easy to follow, has code, and is very useful! Thank you.
Thank you, I really appreciate it!
Thank you for this. Looking forward to tutorials on using Assistants API.
You're welcome! And great idea for a new video :)
Awesome walkthrough, thanks for making this 🎉
Thank you! Glad you liked it.
Simply explained, with an engaging tone.
I would also look for a use case where the source of vector data is a combination of files (PDF, DOCX, EXCEL etc.) along with some database (RDBMS or File based database)
Thanks! That's a good idea too. You can probably achieve that by detecting what type of file you are working with, and then using a different parser (document loader) for that type. Langchain should have custom document loaders for all the most common file types.
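A minimal sketch of that dispatch idea in plain Python (the loader class names mirror LangChain's community document loaders, but here they are illustrative strings; the mapping itself is an assumption, not the video's code):

```python
from pathlib import Path

# Map file extensions to the name of the loader you would use.
# The names mirror LangChain's community document loaders, but they
# are plain strings here so the dispatch logic stands on its own.
LOADER_BY_EXTENSION = {
    ".md": "TextLoader",
    ".txt": "TextLoader",
    ".pdf": "PyPDFLoader",
    ".docx": "Docx2txtLoader",
    ".csv": "CSVLoader",
}

def pick_loader(file_path: str) -> str:
    """Return the loader name for a file, based on its extension."""
    suffix = Path(file_path).suffix.lower()
    try:
        return LOADER_BY_EXTENSION[suffix]
    except KeyError:
        raise ValueError(f"No document loader registered for '{suffix}' files")
```

In a real pipeline you would look up the actual class instead of a string and call its `load()` method, but the extension-based dispatch is the core idea.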
Perfectly explained👌🏼
Really good. Thank you very much, sir. Articulated perfectly!
Thank you! Glad you enjoyed it :)
I too am quite impressed with your videos (this is my 2nd one). I have now subscribed and I bet you'll be growing fast.
Thank you! 🤩
Thank you, this is an amazing video. I learned a lot of things from it...
Great explanation. Perhaps one criticism would be using OpenAI's embedding library: I would rather not be locked into their ecosystem, and I believe that free alternatives exist that are perfectly good! But would have loved a quick overview there.
Thanks for the feedback. I generally use OpenAI because I thought it was the easiest API for people to get started with. But actually I've received similar feedback where people just want to use open source (or their own) LLM engines.
Feedback received, thank you :) Luckily with something like Langchain, swapping out the LLM engine (e.g. the embedding functionality) is usually just a few lines of code.
@@pixegami It's a pleasure :).
Yes, everyone seems to use OpenAI by default, because everyone is using ChatGPT. But there are lots of good reasons why one might not wish to get tied to OpenAI, Anthropic, or any other cloud-based provider besides the mounting costs of developing applications with LLMs: data privacy/integrity, simplicity, and reproducibility (ChatGPT is always changing, and that is out of your control), in addition to a general suspicion of non-open-source frameworks whose primary focus is often (usually?) wealth extraction, not solution provision. There is not enough good material out there on how to create a basic RAG with vector storage using a local LLM, something that is very practical with smaller models (e.g. Mistral, dolphincoder, Mixtral 8x7B), at least for putting together an MVP.
Re: avoiding openAI:
I've managed to use embed_model = OllamaEmbeddings(model="nomic-embed-text").
I still get occasional OpenAI-related errors, but gather that Ollama has support for mimicking the OpenAI API now, including a 'fake' OpenAI key, so I am looking into that as a fix.
ollama.com/blog/windows-preview
I also gather that with llama-cpp, one can specify model temperature and other configuration options, whereas with Ollama, one is stuck with the configuration used in the modelfile when the Ollama-compatible model is made (if that is the correct terminology). So I may have to investigate that.
I'm currently using llama-index because I am focused on RAG and don't need the flexibility of langchain.
Good tutorial in the llama-index docs: docs.llamaindex.ai/en/stable/examples/usecases/10k_sub_question/
I'm also a bit sceptical that langchain isn't another attempt to 'lock you in' to an ecosystem that can then be monetised e.g. minimaxir.com/2023/07/langchain-problem/. I am still learning, so don't have a real opinion yet. Very exciting stuff! Kind regards.
Thanks a lot for this tutorial! Very well explained.
Glad it was helpful!
Thank you for sharing, this was the info I was looking for
Glad it was helpful!
Very helpful video! Keep going, you are the best!
Thank you very much. I am looking forward to seeing a video about a virtual assistant performing actions by communicating with other applications through APIs.
Glad you enjoyed it! Thanks for the suggestion :)
You're welcome
Love your videos. I was able to follow along and build my own RAG. Can you expand more on this series and explain RAPTOR retrieval and how to implement it?
Well illustrated! Thanks
Thank you!
Great Tutorial! thanks
This video was pure gold. Really grateful for the concise and excellent walkthrough. I have two additional questions in regards to the metadata and resulting chunk reference displayed. Can you return a screenshot of the chunk/document referenced now that models are multimodal? Also a document title or ability to download such document would also be a cool feature. Thanks so much in advance!
Glad you enjoyed it! I think if you want to display images, or link/share resources via the chunk, you can just embed it at chunk creation time into the document meta-data.
Upload your resource (e.g. image) to something like Amazon S3, then put a download link into the meta-data for example.
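A small sketch of what that could look like at chunk-creation time; the helper name, field names, and the S3 URL are all hypothetical:

```python
def make_chunk(text, source, page, image_url=None):
    """Bundle chunk text with metadata at creation time, so the link
    travels with the chunk through the vector store and comes back
    out in query results. The image_url would be something like a
    pre-uploaded S3 link (hypothetical here)."""
    metadata = {"source": source, "page": page}
    if image_url:
        metadata["image_url"] = image_url
    return {"page_content": text, "metadata": metadata}

chunk = make_chunk(
    "Alice was beginning to get very tired...",
    source="data/books/alice_in_wonderland.md",
    page=1,
    image_url="https://my-bucket.s3.amazonaws.com/alice/page-1.png",
)
```

When a query returns this chunk, the app can read `metadata["image_url"]` and render or link the resource alongside the answer.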
Hi, by far the best video on Langchain - Chroma! :D
Quick question: How would you update the chroma database if you want to feed it with documents (while avoiding duplication of documents) ?
Glad you liked it! Thank you. If you want to add (modify) the ChromaDB data, you should be able to do that after you've loaded up the DB:
docs.trychroma.com/usage-guide#adding-data-to-a-collection
Super helpful, thank you!
Glad it was helpful!
Epic.
Thank you for sharing this.
Thank you!
Huge class!!
you got a new subscriber. nice work
Thank you! Welcome :)
Great tutorial, very clear
Glad it was helpful!
Very good, and looking forward to the next one 🎉
Thank you!
Very well done! Thanks
Glad you liked it!
Perfect thank you!
Glad it helped!
Very clear video and tutorial ! Good job ! Just have a question : Is it possible to use Open Source model rather than OpenAI ?
Yes! Check out this video on how to use different models other than OpenAI: czcams.com/video/HxOheqb6QmQ/video.html
And here is the official documentation on how to use/implement different LLMs (including your own open source one) python.langchain.com/docs/modules/model_io/llms/
Amazing video, thank you.
Thank you!
helpful if I wanna do analysis on properly-organized documents
Yup! I think it could be useful for searching through unorganised documents too.
Useful, Nice, Thank You 🤩🤩🤩
Glad to hear it was useful!
Nice, how pretty that is!
This is Amazing 🙌
Thank you! Glad you liked it :)
Best explanation of RAG!
Thank you!
This is a really good video, thank you so much! Out of curiosity, why do you use iterm2 as a terminal and how did you set it up to look that cool? 😍
I use iTerm2 for videos because it looks and feels familiar for my viewers. When I work on my own, I use warp (my terminal set up and theme explained here: czcams.com/video/ugwmH_xzkCA/video.html)
And if you're using Ubuntu, I have a terminal setup video for that too: czcams.com/video/UvY5aFHNoEw/video.html
thanks dude!
VERY well explained. Thank you so much for releasing this level of education on YouTube!!
Glad you enjoyed it!
This is an awesome video. Thank you!!! I am curious how to leverage these technologies with structured data, like business data that's stored in tables. I'd appreciate any videos about that.
Thanks!
Great tutorial, thanks! My only feedback is that any LLM already knows everything about Alice in Wonderland.
You can create custom apps for Businesses using their own documents = huge Business opportunity If it really works.
Yeah that's a really good point. What I really needed was a data-source that was easy to understand, but would not appear in the base knowledge of any LLM (I've learnt that now for my future videos).
Question: once a vector store is loaded, how can we output a dataset from the store to be used as a fine-tuning object?
This was a very simple and useful video for me 🤟🤟🤟
Thank you! I'm glad to hear that.
As others have asked: "Could you show how to do it with an open source LLM?" Also, instead of Markdown (.md) can you show how to use PDFs ? Thanks.
Thanks :) It seems to be a popular topic so I've added to my list for my upcoming content.
If you've made a video on the above request, kindly put the link in the description; it would be good for all users.
Instead of the .md extension, you can simply use a .txt or .pdf extension; just replace the file extension.
Yes, please share how to work with PDFs directly instead of .md files. Thanks!
Hello,
thank you so much for this video.
I have a question about summarizing across documents with an LLM. For example, the vector database has thousands of documents with a date property, and I want to ask the model: how many documents did I receive in the last week?
Great level of knowledge and detail.
Please, where is your OpenAI key stored?
Thank you! I normally just store the OpenAI key in the environment variable `OPENAI_API_KEY`. See here for storage and safety tips: help.openai.com/en/articles/5112595-best-practices-for-api-key-safety
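A minimal sketch of reading the key from the environment; the helper name is made up, and only the `OPENAI_API_KEY` variable itself comes from the reply above:

```python
import os

def get_openai_api_key() -> str:
    """Read the API key from the environment rather than hard-coding
    it in source (which risks leaking it via version control)."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set. Export it in your shell, e.g.\n"
            "  export OPENAI_API_KEY='sk-...'"
        )
    return key
```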
You gained a new subscriber. Thank you, amazing content! Only one question: what about the cost associated with this software? How much does it consume per request?
Thank you, welcome! To calculate pricing, it's based on which AI model you use. In this video, we use OpenAI, so check the pricing here: openai.com/pricing
1 Token ~= 1 Word. So to embed a document with 10,000 words (tokens) with "text-embedding-3-large" ($0.13 per 1M token), it's about $0.0013. Then apply the same calculation to the prompt/response for "gpt-4" or whichever model you use for the chat.
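That arithmetic can be sketched as a tiny helper; the $0.13 per 1M tokens is the "text-embedding-3-large" figure quoted above, so check the pricing page for current numbers:

```python
def embedding_cost_usd(token_count: int, price_per_million: float = 0.13) -> float:
    """Estimate embedding cost in USD. The default price is the
    text-embedding-3-large rate quoted above ($0.13 / 1M tokens);
    pass a different rate for other models."""
    return token_count / 1_000_000 * price_per_million

# 10,000 tokens at $0.13 / 1M tokens ≈ $0.0013
cost = embedding_cost_usd(10_000)
```

The same per-token calculation applies to the chat model's prompt and response tokens, just with that model's input/output rates.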
Thanks so much! This is super helpful for better understanding RAG. The only thing is, I'm still not sure how to run the program I cloned from your GitHub repository via the Windows terminal. I'll try on my own, but if you could provide any guidance or sources (YouTube links, anything like that), it would be much appreciated.
Really great!! Thank you!
Glad you liked it!
Thanks for the video, this looks great. But I tried to implement it, and it seems like the Langchain packages needed are no longer available. Has anyone had any luck getting this to work?
Thanks
That's the best and most reliable content about LangChain I've ever seen, and it only took 16 minutes.
Glad you enjoyed it! I try to keep my content short and useful because I know everyone is busy these days :)
@@pixegami Hey, great work. Can we have an updated version with the Langchain imports? It's throwing all kinds of import errors because they've changed.
Thanks for sharing the examples with OpenAI Embedding model. I'm trying to practice using the HuggingFaceEmbeddings because it's free but wanted to check the evaluation metrics - like the apple and orange example you showed. Do you know if it exists by any chance?
Yup, you should be able to override the evaluator (or extend your own) to use whichever embedding system you want: python.langchain.com/docs/guides/evaluation/comparison/custom
But at the end of the day, if you can already get the embedding, then evaluation is usually just a cosine similarity distance between the two, so it's not too complex if you need to calculate it yourself.
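A minimal sketch of that cosine-similarity calculation, with toy low-dimensional vectors standing in for real embeddings (which have hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors: 1.0 means
    identical direction, values near 0.0 mean unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" (made up for illustration):
apple = [0.9, 0.1, 0.0]
orange = [0.8, 0.2, 0.1]
car = [0.0, 0.1, 0.9]
```

With any embedding model (OpenAI, HuggingFace, etc.), "apple" vs "orange" should score noticeably higher than "apple" vs "car", regardless of which provider produced the vectors.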
First of all, thank you very much. Could you also show how to apply memory of various kinds?
Thanks! I haven't looked at how to use the Langchain memory feature yet so I'll have to work on that first :)
@@pixegami OK, I have implemented memory and other features as well, and got it working on Windows too after some monster errors. Thanks once again for the clear, working code (used in production).
hope to see more in future
@pixegami What are your suggestions on cleaning company docs before chunking? Some of the challenges are handling index pages across multiple PDFs, as well as headers and footers. You should definitely make a video on cleaning a PDF before chunking; it's much needed.
That's a tactical question that will vary from doc to doc. It's a great question, and a great use-case for creative problem solving. Thanks for the suggestion and video idea.
How did you rip the AWS documentation?
How can we make a RAG system that answers over both structured and unstructured data?
For example, a user uploads a CSV and a text file and starts asking questions; the chatbot then has to answer from both.
(The structured data should be stored in a different database and passed to a tool for processing; the unstructured data should go in the vector database.)
How can we do this effectively?
Hi, thanks for sharing such valuable knowledge.
I have a use case:
1. Can we perform some action (call an API) as a response?
2. How can we use Mistral and open-source embeddings for this purpose?
Do these tools work with code as well? For example, with a big codebase, querying it to ask how xyz is implemented would be really useful. Or generating docs, etc.
I think the idea of a RAG app should definitely work with code.
But you'll probably need to have an intermediate step to translate that code close to something you'd want to query for first (e.g. translate a function into a text description). Embed the descriptive element, but have it refer to the original code snippet.
It sounds like a really interesting idea to explore for sure!
I want to ask you two things: 1. Do I have to re-chunk everything when I add a new file to my data folder? 2. How can I see the source, page number, and filename of the query result?
Would love to hear your thoughts on how to use evaluation to keep LLM output in check. Can we set up an evaluation framework for this?
There's currently a lot of different research and tools on how to evaluate the output - I don't think anyone's figured out the standard yet. But stuff like this is what you'd probably want to look at: cloud.google.com/vertex-ai/generative-ai/docs/models/evaluate-models
Thank you very much for the video. It seems that adding the chunks to the Chroma database takes a really long time. If I just save the embeddings to a JSON it takes a few seconds, but with Chroma it takes like 20 minutes... Is there something I am missing? I am only doing this on a document about one page long.
Hey, thanks for commenting!
It does seem like something is wrong - the way you're generating embeddings (as a JSON) and via Chroma must be doing different things, because they should normally take about the same amount of time for the same amount of text.
Have you tried using different embedding functions? Or is your ChromaDB saved onto a slower disk drive?
Nicely explained but I had to go through a ton of documentation for using this project with AzureOpenAI instead of OpenAI.
Thanks! I took a look at the Azure OpenAI documentation on Langchain and you're right, it doesn't exactly look straightforward: python.langchain.com/v0.1/docs/integrations/llms/azure_openai/
Thank you for this very instructive video. I am looking at embedding some research documents from sources such as PubMed or Google scholar. Is there a way for the embedding to use website data instead of locally stored text files?
Yes, you can basically load any type of text data if you use the appropriate document loader: python.langchain.com/docs/modules/data_connection/document_loaders/
Text files are an easy example, but there's examples of Wikipedia loaders in there too (python.langchain.com/docs/integrations/document_loaders/). If you don't find what you are looking for, you can implement your own Document loader, and have it get data from anywhere you want.
@@pixegami Exactly the question and answer I was looking for, thanks
Very nice video, what kind of theme do you use to make the vscode look like this? Thanks.
I use Monokai Pro :)
The VSCode theme is called Monokai Pro :)
Excellent coding! Working wonderfully! Much appreciated. One question, please: what's the difference if I change from md to pdf?
Thanks, glad you enjoyed it. It should still work fine :) You might just need to use a different "Document Loader" from Langchain: python.langchain.com/docs/modules/data_connection/document_loaders/pdf
I want to hear your thoughts on what approach is likely the better one:
1. Chop the document into multiple chunks and convert chunks to vectors
2. Convert the whole document to a vector
Thank you
I think it really depends on your use-case and the content. The best way to know is to have a way to evaluate (test) the results/quality.
In my own use-cases, I find that a chunk length of around 3000 characters works quite well (you need enough context for the content to make sense). I also like to concatenate some context info into the chunk (like "this is page 5 about XYZ, part of ABC").
But I haven't done enough research into this to really give a qualified answer. Good luck!
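A rough sketch of the chunking and context-prefix ideas from that reply; the 3000-character size comes from the reply above, while the overlap value and the context format are illustrative assumptions:

```python
def chunk_text(text: str, chunk_size: int = 3000, overlap: int = 300):
    """Split text into overlapping character chunks. 3000 chars is
    the ballpark mentioned above; overlap helps sentences that
    straddle a chunk boundary survive in at least one chunk."""
    assert overlap < chunk_size
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

def with_context(chunk: str, page: int, topic: str, doc: str) -> str:
    """Prepend a short context line so the chunk makes sense on its
    own once retrieved (format is a made-up example)."""
    return f"[Page {page} about {topic}, part of {doc}]\n{chunk}"
```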
Thank you for this video, is NLTK something required to do this?
The NLTK library? I don't think I had to use it here in the project, a lot of the other libraries might give you all the functionality at a higher abstraction already.
Hey, nice video. I was just wondering, what's the difference between doing it like this and using chains? I noticed you didn't use any chain and directly used predict with the prompt 🤔
With chains, I think you have a little bit more control (especially if you want to do things in a sequence). But since that wasn't the focus of this video, I just did it using `predict()`.
Another thing: does it also work for pdf or epub files? I have never heard of *.md markdown files before.
Thank you SO MUCH! Exactly what I was looking for. Your presentation was easy to understand and very complete. 5 STARS! Not to be greedy, but I'd love to see this running 100% locally.
Glad it was helpful! Running local LLM apps is something I get asked quite a lot about and so I do actually plan to do a video about it quite soon.
@@pixegami Yes please!
There is something that I did not understand: In the video, you mentioned that the more the score approximates to zero, the more accurate the result will be. But in your GitHub, you have this code:
if len(results) == 0 or results[0][1] < 0.7:
print(f"Unable to find matching results.")
So, it seems that this code does the opposite: if the score is close to zero, the result is rejected. Perhaps I understood incorrectly. Could you please help me out?
I see, that's a good catch! I should have been more clear about that. So those are actually two different "scores" being used.
1) Distance: The first one I talked about is the "distance" score. It's like either a Euclidean distance or a cosine similarity. For "distance" scores, the closer to 0, the closer they are.
2) Relevancy: The second score is a "relevancy" score. I don't know how it's calculated, because its wrapped inside a Langchain/ChromaDB helper function. But for that score, the higher the better - they do something to invert it. The scale also varies depending on what embedding is used I think, so it could range from 0 - 1, or even be in ranges like 0 - 10k.
They are both "scores", but they measure different things, and we used different helper functions to calculate them. Hope that clarifies it!
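The repo's relevancy guard can be wrapped as a small helper. This assumes results are (document, score) pairs where higher relevancy is better, matching the snippet quoted above; the function name and return convention are made up:

```python
def best_match(results, min_relevance: float = 0.7):
    """Return the top (document, score) pair, or None when nothing
    clears the relevancy cutoff. Mirrors the guard in the repo:
    for relevancy scores, higher is better, unlike raw distance
    scores where closer to zero wins."""
    if len(results) == 0 or results[0][1] < min_relevance:
        return None
    return results[0]
```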
Has anyone faced a tesseract error on Windows? It works well on Linux.
does the data have to be in a .md format? Also, how do you prep the data beforehand?
The data can be anything you want. Here's a list of all the Document loaders supported in Langchain (or you can even write your own): python.langchain.com/docs/modules/data_connection/document_loaders/
The level of preparation is up to you, and it depends on your use case. For example, if you want to split your embeddings by chapters or headers (rather than some length of text), your data format will need a way to surface that.
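A rough sketch of header-aware splitting for markdown. LangChain ships a `MarkdownHeaderTextSplitter` for the real thing; the regex version below is just to show the idea:

```python
import re

def split_by_headers(markdown: str):
    """Split a markdown document at '#' headers, keeping each header
    together with the text that follows it. A toy stand-in for
    LangChain's MarkdownHeaderTextSplitter."""
    # Zero-width split just before any line starting with 1-6 '#'s.
    parts = re.split(r"(?m)^(?=#{1,6} )", markdown)
    return [p.strip() for p in parts if p.strip()]
```

Splitting on structure like this keeps each embedded chunk about one topic, which tends to make retrieval more precise than fixed-length cuts.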
What is more advisable if I work with PDF documents, transforming them into text using a library like PyPDFLoader, or transforming them into another format that is easier to read?
I haven't done a deep dive on what's the most optimal way to use PDF data yet. I think it really depends on the data in the PDF, and what the chunk outputs look like. You probably need to do a bit of experimentation.
If you have specific patterns with your PDFs (like lots of tables or columns) I'd probably try to pre-process them somehow first before feeding them into the document loader.
Can I use PDFs, or should I convert them to markdown format first?
Hi, your video is so good. I just want to know: if I want to automatically update my documents in a production environment, while keeping the query service running and always using the latest documents as the source, how can I do this by changing the code? ❤
Ah, if you change the source document, you actually have to generate a new embedding and add it to the RAG database (the Chroma DB here). So you would have to figure out which piece of the document changed, then create a new entry for it in the database. I don't have a code example right now, but it's definitely possible.
would this work for a general question such as this: please summarize the book in 5 sentences?
Thank you for a great video. What if I already did word embedding and in the future I have some updates for the data?
Thanks! I'm working on a video to explain techniques like that. But in a nutshell, you'll need to attach an ID to each document you add to the DB (derived deterministically from your page meta-data) and use that to update entries that change (or get added): docs.trychroma.com/usage-guide#updating-data-in-a-collection
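The deterministic-ID idea can be sketched like this; the specific fields hashed are illustrative, and any stable combination of page metadata would do:

```python
import hashlib

def chunk_id(source: str, page: int, chunk_index: int) -> str:
    """Derive a stable ID from chunk metadata, so re-running the
    ingest updates existing entries instead of duplicating them.
    Same inputs always produce the same ID."""
    raw = f"{source}:{page}:{chunk_index}"
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()[:16]
```

Passing these IDs when adding documents lets the vector store's upsert/update path (e.g. Chroma's, per the linked docs) replace changed chunks in place.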
Amazing video - directly subscribed to your channel ;-) Can you also provide an example with using your own LLM instead of OpenAI?
Yup! Great question. I'll have to work on that, but in the meantime here's a page with all the LLM supported integrations: python.langchain.com/docs/integrations/llms/
Which model are you using in this?
Hi, dear friend.
Thank you for your efforts.
How can I use this tutorial with PDFs in another language (for example, Persian)?
What would the approach be?
I made many efforts and tested different models, but the results when asking questions about the PDFs are not good or accurate!
Thank you for the explanation
Thank you for your comment. For good performance in other languages, you'll probably need to find an LLM model that is optimized for that language.
For Persian, I see this result: huggingface.co/MaralGPT/Maral-7B-alpha-1
pls, I'd like to see a Recommendation model (products, images, etc) based on our different sources, it could be scraping from webpages. Something to use in e-commerce.
Product recommendations are a good idea :) Thanks for the suggestion, I'll add it to my list.
Hi bro, I am creating a chatbot that takes data from a third-party API, which means there is little data, but it's dynamic data on every call. Should I use the RAG approach? If not, please suggest a better approach.
Hmm, I think RAG probably isn't the right approach. But it depends... For example, if you get 3000+ characters of dynamic data on each call, then it might still be helpful to generate a vector DB on the spot, so you can use RAG to narrow down the answer. But it's going to make each call a lot slower.
But if you have way less data than that, you can probably just include it all directly in the prompt.
Great content! Thanks for sharing.
Can you suggest a Chat GUI to connect?
If you want a simple, Python based one, try Streamlit (streamlit.io/). I also have a video about it here: czcams.com/video/D0D4Pa22iG0/video.html
This is great work, Thank you
How can I use the result of a SQL query, or a dataframe, rather than text files?
Yup, looks like there is a Pandas Dataframe Document loader you can use with Langchain: python.langchain.com/v0.1/docs/integrations/document_loaders/pandas_dataframe/