Building Reliable LLM Apps with OpenAI (Instructor Tutorial)

  • Published 8 Sep 2024

Comments • 50

  • @jxnlco
    @jxnlco 1 month ago +11

    Creator of instructor here! Thanks so much for this video

  • @mzafarr
    @mzafarr 4 months ago +4

    Please keep making unique content like this that solves the pain points of gen AI developers for which the solutions aren't straightforward.

  • @treflatface
    @treflatface 2 months ago +1

    I like how comprehensive you are in covering the different branches of scenarios and possibilities, going over the trade-offs. All of this is delivered so systematically and articulately as well. Well done. I RARELY subscribe to tutorial channels, but yours was an instant subscribe after 2 videos.

  • @isaihernandez4136
    @isaihernandez4136 4 months ago +8

    This is gold. Thanks for sharing Dave!

  • @baksoy
    @baksoy 7 hours ago

    Great walkthrough and thank you for pointing out Instructor - a great library!

  • @faustoalbers6314
    @faustoalbers6314 1 month ago

    Well-deserved compliments in the comments; you have explained the greatest ability of LLMs very clearly. Let's chat at the next Amsterdam AI Builders Meetup!

  • @kapilkhond6339
    @kapilkhond6339 1 month ago

    This is a really wonderful tutorial! I hope this video becomes popular soon. Could you share more tutorials like this on how to make an LLM application output consistent responses with a production-grade solution that will scale?

  • @daffertube
    @daffertube 1 month ago

    I had been wondering about these problems for a while. This video is 100% gold!

  • @farhanafridi8694
    @farhanafridi8694 4 months ago +2

    Wow! Knowledge bomb.
    Please make more videos like this.

  • @sapiensmagno
    @sapiensmagno 1 month ago

    Great content. The progressive approach of explaining the problem and narrowing down the solutions is perfect 👌

  • @paulmiller591
    @paulmiller591 3 days ago

    Very helpful, thank you David

  • @AliAbassi1
    @AliAbassi1 3 months ago

    The exact video I needed with Pydantic and Instructor - thank you Dave!

  • @user-vm7xx3wi8cbz
    @user-vm7xx3wi8cbz 3 months ago

    Dave.. your content is so specific for us GenAI devs. I LOVE it. Please keep it up!

    • @daveebbelaar
      @daveebbelaar 3 months ago +2

      More to come!

    • @user-vm7xx3wi8cbz
      @user-vm7xx3wi8cbz 3 months ago

      @@daveebbelaar I have a follow-up question. If you want to "prompt" the LLM to output AI-generated emails in a specific format (e.g. an intro paragraph/hook of 30 words max, a main body of 50 words max, and a CTA of 15 words max), what would be your suggested approach? The traditional way of just giving an example when prompting is very unreliable in this regard, but I'm wondering which of the approaches you discussed would be best.
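
One way to approach this with the Pydantic + Instructor pattern from the video is to model each email section as its own field and validate the word counts, so an over-long reply fails validation and gets re-asked. A minimal sketch, assuming a recent Instructor version (instructor.from_openai; older releases use instructor.patch); the field names, limits, and model name are illustrative, not from the repo.

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel, Field, field_validator

# Wrap the OpenAI client so chat.completions.create accepts a response_model.
client = instructor.from_openai(OpenAI())

class OutreachEmail(BaseModel):
    intro: str = Field(description="Hook / intro paragraph, at most 30 words")
    body: str = Field(description="Main body, at most 50 words")
    cta: str = Field(description="Call to action, at most 15 words")

    @field_validator("intro", "body", "cta")
    @classmethod
    def enforce_word_limit(cls, value, info):
        # Reject over-long sections; the validation error is sent back to the model on retry.
        limits = {"intro": 30, "body": 50, "cta": 15}
        words = len(value.split())
        if words > limits[info.field_name]:
            raise ValueError(f"{info.field_name} must be at most {limits[info.field_name]} words, got {words}")
        return value

email = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    response_model=OutreachEmail,
    max_retries=3,  # failed validation triggers a re-ask with the error attached
    messages=[{"role": "user", "content": "Write a short outreach email about our data platform."}],
)
print(email.model_dump())
```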

  • @pedroaquino3042
    @pedroaquino3042 2 months ago

    Really helpful video Dave, thank you for sharing this information!

  • @johannesseikowsky8197
    @johannesseikowsky8197 4 months ago

    man, you're a really good teacher!

  • @Sam-oi3hw
    @Sam-oi3hw 21 days ago

    thanks for this awesome video

  • @AndrewChildsza
    @AndrewChildsza 4 months ago

    This was great, thanks. I've had questions about this previously

    • @daveebbelaar
      @daveebbelaar 4 months ago

      Thanks! The different methods can definitely be confusing at first.

    • @AndrewChildsza
      @AndrewChildsza 4 months ago

      @@daveebbelaar They certainly can!
      I was wondering, do you know of a way to make a RAG built with something like Flowise AI work with tools? E.g., have a RAG chatbot that is able to call functions (POST to a webhook) when it sees fit? I have attempted to configure this in Flowise, but always get stuck at merging the RAG and the tool together...
      I suspect something like the solutions you cover in this video could work for that sort of requirement... 🙏

  • @mzafarr
    @mzafarr 4 months ago +1

    THANK YOU!

  • @ashish-blessings
    @ashish-blessings 1 month ago

    Love this!

  • @maqboolurrahimkhan
    @maqboolurrahimkhan 3 months ago

    Thanks Dave, love your content and channel

  • @SriniVasan-hv8cq
    @SriniVasan-hv8cq 2 months ago

    Absolutely fantastic! Thanks for sharing, @daveebbelaar! Can we make this work with a local LLM, e.g. Ollama?
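
Instructor can generally be pointed at any OpenAI-compatible endpoint, and Ollama exposes one locally, so the pattern from the video carries over largely unchanged. A sketch under those assumptions: Ollama running on its default port, a model already pulled, and JSON mode instead of tool calling since many local models don't support function calling. The model and field names are only examples.

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel

# Ollama serves an OpenAI-compatible API at /v1; the api_key is required by the client but ignored.
client = instructor.from_openai(
    OpenAI(base_url="http://localhost:11434/v1", api_key="ollama"),
    mode=instructor.Mode.JSON,  # request JSON output rather than tool calls
)

class Reply(BaseModel):
    category: str
    confidence: float

reply = client.chat.completions.create(
    model="llama3.1",  # example: any model you have pulled locally
    response_model=Reply,
    messages=[{"role": "user", "content": "Classify this ticket: 'My invoice is wrong.'"}],
)
print(reply)
```

Smaller local models tend to need more retries and stricter prompts than GPT-4-class models, so it is worth testing the validation/retry loop with the specific model you run.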

  • @micbab-vg2mu
    @micbab-vg2mu 4 months ago

    Great content - thank you for sharing:)

  • @dvx24
    @dvx24 3 months ago

    Insane content. Thank you.

  • @varung4223
    @varung4223 4 months ago

    This content is awesome !!!!!

  • @slcpunk_
    @slcpunk_ 3 months ago

    Very helpful!

  • @comptedodoilya
    @comptedodoilya 12 days ago

    Please, how do you use the interactive Python execution in VS Code?
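
The notebook-style execution shown in the video is most likely the VS Code Python extension's Interactive Window: a `# %%` comment marks the start of a cell, and "Run Cell" (or Shift+Enter) sends that cell to the Interactive Window. A trivial example:

```python
# %% cell 1 - the Python extension shows a "Run Cell" link above this marker
data = [1, 2, 3]

# %% cell 2 - run with Shift+Enter; output appears in the Interactive Window
print(sum(data))
```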

  • @bulutosman
    @bulutosman 3 months ago

    You are great Dave, helping us a lot. Thank you for your effort here.
    Does the Instructor library also work with OpenAI's Assistants API instead of the Chat Completions API? I mean, instead of client.chat.completions.create, using the client.beta.threads.runs.create format. Does that work with Instructor as well? Another question: are you really using the Chat Completions API for the project with the real-world client that you mention in the video? If so, why don't you use the Assistants API? Isn't that easier? Are there any drawbacks of the Assistants API compared to the Chat Completions API?

    • @AlwynCornforth
      @AlwynCornforth 3 months ago

      Yeah, I would like to know as well. Since we're using threads and runs, this solution does not work unless you build around chat completions.

  • @alichch1241
    @alichch1241 3 months ago

    Hey there, new to the channel and pretty new to AI, still in the learning process :) Tbh I think this video is too advanced for me to grasp the idea :) but I have some insight on it, can you correct me if I am wrong :)
    My insight: "You are building software that produces responses using pretrained LLM models"?

    • @danielogunlolu
      @danielogunlolu 2 months ago

      Yeah. He is basically building a wrapper around ChatGPT that does a really specific task with more accuracy and efficiency.

  • @GarthVanSchalkwyk
    @GarthVanSchalkwyk 2 months ago

    Hi Dave, does your company also make apps for math education? Where can we find details about your company?

  • @mzafarr
    @mzafarr 4 months ago

    Won't it be the same if I simply pass the schema inside the system message rather than using the Instructor / function calling approach?
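
The two are not equivalent in practice. With the schema only in the system message you still receive a raw string that you must parse and validate yourself, and nothing re-asks the model when it drifts from the format; with Instructor the schema goes through function/tool calling, the reply is validated against the Pydantic model, and validation failures are retried. A rough comparison, with illustrative model and field names:

```python
import json
import instructor
from openai import OpenAI
from pydantic import BaseModel, ValidationError

class Ticket(BaseModel):
    category: str
    urgent: bool

raw_client = OpenAI()

# Option A: schema in the system message - you parse the string and handle bad output yourself.
resp = raw_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Reply ONLY with JSON matching this schema: " + json.dumps(Ticket.model_json_schema())},
        {"role": "user", "content": "The app is down and customers are complaining!"},
    ],
)
try:
    ticket = Ticket.model_validate_json(resp.choices[0].message.content)
except ValidationError:
    ticket = None  # malformed output is yours to detect and recover from

# Option B: Instructor - schema is sent via tool calling, output is validated, failures are re-asked.
client = instructor.from_openai(OpenAI())
ticket = client.chat.completions.create(
    model="gpt-4o-mini",
    response_model=Ticket,
    max_retries=2,
    messages=[{"role": "user", "content": "The app is down and customers are complaining!"}],
)
```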

  • @ce7882
    @ce7882 4 months ago

    Is there an example of the content filtering for JavaScript? I can see instructor has a JavaScript version but can’t see any information or examples on content filtering. Would appreciate any help!

  • @bs_general
    @bs_general 4 months ago

    @daveebbelaar, if I'm not mistaken, I think "max_retries=1" means retries are allowed once. If you don't want to allow any retries, it needs to be "max_retries=0", correct?

    • @daveebbelaar
      @daveebbelaar 4 months ago

      Hmm, while that would make sense, I am not sure. I tried many examples with max_retries=1, and they all failed. I can't see anything in the docs about this. It would require further testing and looking at the API calls.
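
One way to pin down what max_retries actually does is to force a validation failure and count the requests: Instructor only retries when Pydantic validation fails, re-sending the request with the validation error attached. A sketch with a deliberately strict validator; whether max_retries=1 means "one attempt total" or "one retry after the first attempt" is best confirmed against the Instructor version you run, as this thread notes.

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel, field_validator

client = instructor.from_openai(OpenAI())

class Answer(BaseModel):
    summary: str

    @field_validator("summary")
    @classmethod
    def must_be_short(cls, value):
        # Strict rule to provoke retries; the error text is fed back to the model on each re-ask.
        if len(value.split()) > 10:
            raise ValueError("summary must be at most 10 words")
        return value

answer = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    response_model=Answer,
    max_retries=2,  # verify the exact off-by-one semantics for your version
    messages=[{"role": "user", "content": "Summarize why structured outputs help LLM apps."}],
)
print(answer.summary)
```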

  • @Vimalnarain
    @Vimalnarain 3 months ago

    Can we use it with runs as well?

  • @bs_general
    @bs_general 4 months ago

    Hi, thanks for this tutorial. But the Git repo is not available; it shows a 404 error. Thanks

    • @daveebbelaar
      @daveebbelaar 4 months ago

      Ah, it was still set to private. It's fixed now - thanks!

    • @bs_general
      @bs_general 4 months ago

      @@daveebbelaar Yeah, it's working now. Thanks 👍

    • @mulderbm
      @mulderbm 4 months ago +1

      Great stuff there. Really like the use case: message classification is not new, but doing it with an LLM instead of a local ML model, and doing it reliably, is the interesting part!
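
The core of that pattern, roughly as the video builds it (not the exact code from the repo): constrain the label to an Enum so the model can only return a valid class, and let validation plus retries handle the rest.

```python
import instructor
from enum import Enum
from openai import OpenAI
from pydantic import BaseModel, Field

client = instructor.from_openai(OpenAI())

class Category(str, Enum):
    billing = "billing"
    technical = "technical"
    general = "general"

class Classification(BaseModel):
    category: Category
    reasoning: str = Field(description="One sentence explaining the choice")

result = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    response_model=Classification,
    messages=[
        {"role": "system", "content": "Classify the customer message."},
        {"role": "user", "content": "I was charged twice this month."},
    ],
)
print(result.category)  # expected: Category.billing
```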

  • @redxcted_
    @redxcted_ 3 days ago

    Is it just me, or is it always returning a confidence score of 0.9?

    • @daveebbelaar
      @daveebbelaar 3 days ago +1

      Prompt it to be more specific. E.g., give conditions on what makes it a 0.5 or a 0.9.
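
One way to apply that advice is to put the rubric directly in the field description (or the system prompt) so the model has concrete anchors instead of defaulting to a "safe" 0.9. The thresholds below are only an example, not a recommendation from the video.

```python
from pydantic import BaseModel, Field

class Classification(BaseModel):
    category: str
    confidence: float = Field(
        ge=0.0,
        le=1.0,
        description=(
            "How certain the classification is. "
            "Use 0.9 or above only when the message states the topic explicitly, "
            "0.5-0.7 when the topic is implied but ambiguous, "
            "and below 0.5 when you are mostly guessing."
        ),
    )
```

Asking for a short reasoning field before the score can also nudge the model away from a flat 0.9.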