The Power of Graph RAG Unleashed | GraphRAG End-to-End Implementation With

  • Published 29. 08. 2024
  • #RAG #ai #generativeai #openai #azureopenai #datascience
    Retrieval-augmented generation (RAG) is a technique that searches for information relevant to a user query and supplies the results as reference material for generating an AI answer. This technique is an important part of most LLM-based tools, and the majority of RAG approaches use vector similarity as the search technique. GraphRAG instead uses LLM-generated knowledge graphs, which provide substantial improvements in question-and-answer performance when analyzing complex documents.
    By combining LLM-generated knowledge graphs and graph machine learning, GraphRAG enables us to answer important classes of questions that we cannot attempt with baseline RAG alone. We have seen promising results after applying this technology to a variety of scenarios, including social media, news articles, workplace productivity, and chemistry. Looking forward, we plan to work closely with customers on a variety of new domains as we continue to apply this technology while working on metrics and robust evaluation.
    Business email - sktech.programming@gmail.com
    For Guidance - topmate.io/sai_kumar_reddy_n?SocialProfile
    video 1- • Python Tutorials From ...
    previous video link - • OOP's Concept In Pytho...
    Do support the channel, friends.
    telegram link- t.me/saikumarr...
    article link- / graphrag-graphs-retrei...
    Github project implementation link - github.com/Tak...
    Microsoft blog link - www.microsoft....
    Also, follow me on social media; the links are available below.
    Instagram- / sai_kumar_datascientist
    LinkedIn- / sai-kumar-reddy-n-data...
    twitter- / 123saikumar9036
    Time stamps:
    00:01:30 Introduction
    00:02:00 What is Graph RAG and how is it different from traditional RAG
    00:20:03 Coding Implementation Of GraphRAG
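Before the coding section, it may help to see what the "vector similarity as the search technique" baseline from the description actually does. Below is a minimal, library-free sketch using toy bag-of-words embeddings; real systems use learned embeddings and an approximate-nearest-neighbor index, so treat this only as an illustration of the ranking idea:

```python
import math
from collections import Counter

def embed(text):
    # toy "embedding": a bag-of-words term-count vector
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # rank documents by similarity to the query and return the top k
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "knowledge graphs connect entities through typed relationships",
    "vector similarity compares embeddings of text chunks",
    "community summaries support global questions over a corpus",
]
print(retrieve("how do embeddings and similarity work", docs))
```

GraphRAG augments this ranking step with retrieval over an LLM-extracted knowledge graph and its community summaries, which is what lets it answer corpus-level questions that chunk similarity alone cannot.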

Comments • 28

  • @jaishaliniramakrishnan2814 • a month ago +1

    Thank you very much for the live demo.

  • @themax2go • 22 days ago +1

    wow best intro ever 🤣🤣🤣🤣💖💖

  • @rockypunk91 • a month ago +2

    It shows the capability but not production-level usability. For instance, I do not want to re-index every document each time one is added or updated.

  • @NikhilKrishna-x1e • a month ago +2

    How do you extract the context the LLM is generating the answer from, as cited in each answer like [Data: Entities(XY)]?
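GraphRAG cites its sources inline with bracketed markers like the one in this question. Assuming that format (the exact citation syntax can vary across graphrag versions, so the pattern below is a hypothetical sketch), a small regex can pull the record IDs out of an answer:

```python
import re

ANSWER = (
    "Graph communities let GraphRAG answer corpus-level questions "
    "[Data: Entities (12, 34); Relationships (7)]."
)

# hypothetical patterns; adjust if your graphrag version formats citations differently
CITATION = re.compile(r"\[Data:\s*([^\]]+)\]")   # the whole [Data: ...] block
PART = re.compile(r"(\w+)\s*\(([\d,\s]+)\)")     # e.g. "Entities (12, 34)"

def extract_citations(text):
    # collect record IDs per record type, e.g. {"Entities": [12, 34]}
    refs = {}
    for body in CITATION.findall(text):
        for kind, ids in PART.findall(body):
            refs.setdefault(kind, []).extend(
                int(i) for i in ids.replace(",", " ").split()
            )
    return refs

print(extract_citations(ANSWER))  # {'Entities': [12, 34], 'Relationships': [7]}
```

The IDs here are made up for illustration; in a real run you would match them against the entity and relationship tables the indexing pipeline writes to its output folder.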

  • @fatemadalal4370 • 29 days ago +2

    That was really helpful. Can you tell us why we are using LanceDB here? What information is stored in it, and how exactly do we use it during local search?

    • @SAIKUMARREDDYN • 29 days ago

      LanceDB is not mentioned or explained in my code; I mostly explained GraphRAG itself. But to answer: LanceDB is a serverless vector database designed for developers who need a scalable, efficient, and easy-to-manage solution. Unlike most existing vector databases, which store only embeddings and metadata, it supports storing the actual data alongside the embeddings and metadata, which allows a more comprehensive data-management experience.
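To make the "data alongside embeddings" point concrete, here is a library-free toy (this is not the actual LanceDB API): each row keeps text, vector, and metadata together, and a search returns whole rows ranked by distance to the query embedding, which is roughly what a local search does with a vector store:

```python
import math

# toy in-memory "table": each row stores the raw text, its embedding,
# and metadata side by side (the point made about LanceDB above)
rows = [
    {"text": "entity: GraphRAG", "vector": [1.0, 0.0, 0.0], "meta": {"type": "entity"}},
    {"text": "entity: LanceDB",  "vector": [0.0, 1.0, 0.0], "meta": {"type": "entity"}},
    {"text": "entity: Neo4j",    "vector": [0.0, 0.0, 1.0], "meta": {"type": "entity"}},
]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def search(query_vector, k=1):
    # rank rows by distance to the query embedding and return the
    # whole row: text, vector, and metadata, not just the embedding
    return sorted(rows, key=lambda r: euclidean(r["vector"], query_vector))[:k]

hit = search([0.1, 0.9, 0.0])[0]
print(hit["text"], hit["meta"])
```

The row shapes and vectors are invented for illustration; real entity-description embeddings come from the embedding model configured for the pipeline.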

  • @mohammadghasemifard9491 • a month ago +1

    Thank you so much. Can you make a video on deploying via the GraphRAG accelerator? It covers deploying GraphRAG to an Azure resource group and creating an API key for it.

  • @chrispioline8469 • a month ago +1

    Great video. You did this setup using the Python library. Can this be done through .NET?

    • @SAIKUMARREDDYN • a month ago

      I guess so, but I'm not sure. It's mostly comparable with Python, I guess.

  • @themax2go • 22 days ago +1

    I haven't watched the vid yet, but I experimented with GraphRAG when it came out ~5 months ago... and back then it didn't work ~90% of the time (well, my implementation at least), and it was slooow, i.e. computationally expensive. Then triplets (SciPhi Triplex) came out; have you experimented with that? My finding is that it's super fast (well, compared to GraphRAG) and maybe about on par in quality (I haven't done a deep-dive / exhaustive test yet). What's your thought on that?

    • @themax2go • 22 days ago

      ps "global" used to fail at ~90%, not local, w/ graphrag.

  • @chrispioline8469 • a month ago

    What are the four files (claim_extraction, community_report, entity_extraction, summarize_description) within the Prompts folder for?

    • @SAIKUMARREDDYN • a month ago

      They are the prompt files that guide the LLM in creating the graph representation of the text you upload.
      In short, you're using prompt engineering on the LLM to turn your text file into a graph representation.
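For reference, here is a hedged sketch of how settings.yml typically points at those four prompt files in the graphrag versions from around the time of this video; the exact field names and defaults may differ in your version, so check the settings.yml that `--init` generates for you:

```yaml
entity_extraction:
  prompt: "prompts/entity_extraction.txt"

claim_extraction:
  prompt: "prompts/claim_extraction.txt"

community_reports:
  prompt: "prompts/community_report.txt"

summarize_descriptions:
  prompt: "prompts/summarize_descriptions.txt"
```

Editing these files is optional; the generated defaults work for any input, and tuning them per corpus only changes extraction quality, not whether the pipeline runs.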

    • @chrispioline8469 • a month ago

      @SAIKUMARREDDYN Those are optional to edit, right? Do we need to edit all those files for every input file we test?
      Also, during my initial test run I didn't comment out the references to these four files in settings.yml (settings.yml contains references to them). So for a different input we would have different prompt files; won't that cause an issue?

  • @user-gx5nq7bi9m • a month ago +1

    What if you want to plug in an existing Neo4j graph database instead of a sample text file? I'd love to know how to give the LLM access to already-existing nodes and edges.

    • @SAIKUMARREDDYN • a month ago

      For now, Microsoft has only open-sourced this code. Let me try; I am looking into it.

  • @SamiSabirIdrissi • a month ago +1

    10 minutes for a response?

  • @chrispioline8469 • a month ago +1

    What is the cost incurred for indexing? Any details or reference on that?

    • @SAIKUMARREDDYN • a month ago

      The cost was approximately $1; that's what I saw on my billing cycle. It's recommended to terminate the Azure OpenAI resource once you're done using it.

    • @themax2go • 22 days ago

      $0 (only the cost of electricity) if you run inference locally

  • @phadashish • a month ago +1

    Is Azure OpenAI safe where I want to give my complete application data to it for building a RAG application?

    • @SAIKUMARREDDYN • a month ago

      No closed-source model is safe, since they are API-based and your data may be used for retraining. If you just want an LLM's help with coding, only open-source models can do that safely. I recommend asking them about syntax and for explanations, then writing the code yourself, but never upload your data to those LLM models if it is highly sensitive.

  • @user-jb2xg5wx1z • a month ago +1

    Hi, I am working on a healthcare-related project and need an OpenAI API key for it. I am a final-year student and can't afford it. Could you help me by letting me use your key?

    • @SAIKUMARREDDYN • a month ago

      That's really not possible. What you can do is use Azure OpenAI instead of an OpenAI key; alternatively, use Ollama to run open-source models, get a Groq API key, or use the NVIDIA Build program, which gives you an API key for other (non-OpenAI) models.
      NVIDIA link - czcams.com/video/LeFmIpkLNCY/video.htmlsi=BsYbvbfnAnlGlQkF