ML301: Using LLMs to Build a ChatBot on Your Content

  • Uploaded 6 Dec 2023
  • ML301: ChatGPT: The Future of Content Interaction
    In this video, we look at one of the most promising use cases of Large Language Models today: building a Q&A system with your own content in the background. For example, a customer asked us whether we could build this use case: patients have questions for their health system, like "How should I prepare for my knee surgery?" The health system wants to build a chatbot to answer those questions. However, they don't want the answers to come from the general internet, as ChatGPT's would; they want the answers to come from their own website content. Can we do that?
    There are many more use cases like this including:
    Q&A over a library of PDF documents, such as product documentation ("How do I connect a Jupyter notebook to IRIS? Give me some example code.")
    Q&A on a service issue tracking system (“What’s the best way to clean up a FHIR database?”)
    Automatically answering RFP/Tender questions based on a library of previous tender responses ("Do you have high availability?")
    The examples go on and on and on. Is something like this possible? And how would it work? Watch this video and find out.
    Link to the materials: www.donwoodlock.com/ml301-Dec...
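    The pipeline behind this kind of chatbot (crawl or collect your content, split it into chunks, embed each chunk, then retrieve the chunks most similar to the user's question) can be sketched roughly as below. This is a toy stand-in, not the code from the video: it uses a bag-of-words count vector where a real system would call an embedding model, and all function names and the sample text are illustrative.

    ```python
    import re
    from collections import Counter
    from math import sqrt

    def chunk_text(text: str, max_words: int = 12) -> list[str]:
        """Split text into word-bounded chunks of at most max_words words.
        Real pipelines chunk because embedding models have an input size
        limit; this toy version just splits on whitespace."""
        words = text.split()
        return [" ".join(words[i:i + max_words])
                for i in range(0, len(words), max_words)]

    def embed(text: str) -> Counter:
        """Toy 'embedding': a bag-of-words count vector over lowercase
        word tokens. A real system would call an embedding model here."""
        return Counter(re.findall(r"[a-z0-9]+", text.lower()))

    def cosine(a: Counter, b: Counter) -> float:
        """Cosine similarity between two sparse count vectors."""
        dot = sum(a[w] * b[w] for w in a)
        na = sqrt(sum(v * v for v in a.values()))
        nb = sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def retrieve(question: str, chunks: list[str], k: int = 1) -> list[str]:
        """Return the k chunks most similar to the question; these would
        then be passed to the LLM as context for answering."""
        q_vec = embed(question)
        ranked = sorted(chunks, key=lambda c: cosine(q_vec, embed(c)),
                        reverse=True)
        return ranked[:k]

    # Example: index two pieces of hypothetical site content, then pick
    # the chunk most relevant to a patient's question.
    chunks = chunk_text(
        "Before knee surgery, stop eating after midnight and arrive two "
        "hours early. Our billing department accepts payment plans for "
        "all procedures."
    )
    top = retrieve("How should I prepare for my knee surgery?", chunks)
    ```

    In a production version, the retrieved chunks are inserted into the LLM prompt so the model answers only from your content rather than from its general training data.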

Comments • 8

  • @NavidBarati · 29 days ago

    This was by far the most useful video I have seen in a while on an AI topic. Well done, everybody!

  • @fbytgeek · 4 months ago

    Subscribed! Thank you!

  • @herculesgixxer · 3 months ago

    Great job, guys; very easy to follow along.

  • @bhariharan12345 · 4 months ago +1

    Very easy to understand.

  • @mdabutalha3165 · 3 months ago

    Great job

  • @adamcole918 · 5 months ago +1

    Excellent explainer Don, Marta and Georgia!

  • @bigplumppenguin · 3 months ago

    Great end-to-end tutorial and demo. One question: which website crawler do you recommend? And is splitting the crawled content into smaller chunks mandatory? If so, is that due to the input size limitation at the embedding creation stage?