Anyone can build GENERATIVE UI with AI SDK 3.0

  • Added 31 May 2024
  • A quick look into new AI SDK 3.0 updates. #reactjs #vercel #frontenddevelopment #ai #typescript
    Docs: sdk.vercel.ai/docs/concepts/a...
    Demo: sdk.vercel.ai/demo
    All the action: github.com/vercel/ai/blob/mai...
    ✨ Chat-with-pdf app built using AI SDK:
    • Let's Build a "Chat Wi...
    ✨ My Links -
    Discord: / discord
    X.com: / rajeshdavidbabu
    Want a shoutout?? : www.passionfroot.me/rajesh-babu
    Github: github.com/rajeshdavidbabu
    Timecodes:
    0:00 - Intro
    1:57 - AI SDK 3.0 Updates
  • Science & Technology

Comments • 17

  • @GAllium14
    @GAllium14 2 months ago +6

    That's crazy 🔥🔥🔥

    • @raj_talks_tech
      @raj_talks_tech 2 months ago +2

      Ikr. The best use-case for RSCs just got unlocked!

  • @sucoder
    @sucoder 2 months ago +1

    Generative UI 🎉 hearing it for the first time here

  • @anupkubade2486
    @anupkubade2486 2 months ago +1

    Does this mean the SDK returns a React component like Weather or StockPrice directly to the client? Or does it just return the supporting data so you can build your own Weather or StockPrice components?

  • @user-ih5gm7mp9w
    @user-ih5gm7mp9w 2 months ago

    It's technically cool, but can someone explain the use case? If I have a service or site, what is the benefit of having LLM-generated components? Is it separation of concerns with dynamic display? (I.e. you could have thousands of possible components depending on the context, a level of dynamism not practical natively?)
    I'm new to all this, so I hope to hear from the wise old-timers.

    • @raj_talks_tech
      @raj_talks_tech 2 months ago +1

      It's definitely not ground-breaking; it's more about the experience. Next.js has been pushing the idea of "fetching data on the server" for quite some time. This lets us skip streaming raw data to the client and building the UI there; instead, the user experience improves by streaming components as the data becomes available.
      It also means you can render a view in parts. We used to think of data and UI as separate entities; now the lines are blurry.
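
      For anyone wondering what that looks like in code, here is a minimal sketch (an editor's illustration, not code from the video) using createStreamableUI from the SDK's ai/rsc entry point; Spinner, Weather, and getWeather are hypothetical placeholders.

```tsx
'use server';
// Editor's sketch: stream a loading state immediately, then swap in the
// finished component once the server-side data fetch resolves.
import { createStreamableUI } from 'ai/rsc';
import { Spinner, Weather } from './components'; // hypothetical components
import { getWeather } from './weather-api';      // hypothetical data fetcher

export async function getWeatherUI(city: string) {
  // The client receives the spinner node right away, before any data exists.
  const ui = createStreamableUI(<Spinner />);

  // Fetch on the server; when the data lands, replace the streamed node
  // with the real component. No raw weather JSON ever reaches the client.
  getWeather(city).then((weather) => {
    ui.done(<Weather data={weather} />);
  });

  return ui.value;
}
```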

  • @jamespratama9730
    @jamespratama9730 1 month ago

    This video is great!
    How can the AI responses be saved to the backend? Do you just save the content as a string, and presumably it will be formatted as React code so that when the user returns it can be fetched and rendered as before?

    • @raj_talks_tech
      @raj_talks_tech 1 month ago

      You don't have to save the rendered code. Just save the responses as JSONB, I guess; that should work for most parts.
      The rendering is handled by the framework, so you don't have to worry about it.
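
      A rough sketch of what that could look like (an editor's illustration, not the video's code): persist only the serializable message state as JSON and let the framework rebuild the UI from it when the chat is reloaded. The saveChat helper, submitUserMessage action, and the state shape are hypothetical; createAI and its onSetAIState callback are assumed from the ai/rsc API.

```ts
// Editor's sketch: save the plain AI state (message history) whenever a
// generation finishes, e.g. as a JSON/JSONB column in your database.
import { createAI } from 'ai/rsc';
import type { ReactNode } from 'react';
import { submitUserMessage } from './actions'; // hypothetical server action
import { saveChat } from './db';               // hypothetical persistence helper

type AIState = {
  chatId: string;
  messages: { role: 'user' | 'assistant'; content: string }[];
};

export const AI = createAI<AIState, ReactNode[]>({
  actions: { submitUserMessage },
  initialAIState: { chatId: 'demo', messages: [] },
  initialUIState: [],
  // Store only serializable message data; the UI is re-rendered from it later.
  onSetAIState: async ({ state, done }) => {
    if (done) await saveChat(state);
  },
});
```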

    • @jamespratama9730
      @jamespratama9730 16 days ago

      @@raj_talks_tech Thank you Raj! I watched your other videos on creating virtualized lists. Wondering if you've found the best approach to creating a virtualized list in combination with the AI UI state? With Virtuoso, it seems it's not possible to invert endReached and put it on top unless you pay for their premium chat license. Plus, in the UI state the messages have complex UIs that aren't as simple as a regular list, with different generated UIs depending on the message.

    • @raj_talks_tech
      @raj_talks_tech 15 days ago

      @@jamespratama9730 Nope, I believe Virtuoso is completely open-source. Can you check this codebase for the startReached code: codesandbox.io/p/sandbox/adoring-bhabha-hl168?file=%2Fsrc%2FApp.tsx
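
      For reference, here is a small sketch (an editor's illustration, not the linked sandbox) of how startReached can drive a "load older messages" chat list with the open-source Virtuoso component. startReached, firstItemIndex, and the other props are real react-virtuoso props; fetchOlderMessages and the Message shape are hypothetical.

```tsx
// Editor's sketch: prepend older messages when the user scrolls to the top.
import { useState, type ReactNode } from 'react';
import { Virtuoso } from 'react-virtuoso';

type Message = { id: string; ui: ReactNode };

// Hypothetical pagination helper: returns messages older than `beforeId`.
declare function fetchOlderMessages(
  beforeId: string | undefined,
  limit: number,
): Promise<Message[]>;

export function ChatHistory({ initialMessages }: { initialMessages: Message[] }) {
  const [messages, setMessages] = useState(initialMessages);
  // A large starting index lets Virtuoso keep the scroll position stable
  // while items are prepended to the top of the list.
  const [firstItemIndex, setFirstItemIndex] = useState(100_000);

  const loadOlder = async () => {
    const older = await fetchOlderMessages(messages[0]?.id, 20);
    setFirstItemIndex((index) => index - older.length);
    setMessages((current) => [...older, ...current]);
  };

  return (
    <Virtuoso
      data={messages}
      firstItemIndex={firstItemIndex}
      initialTopMostItemIndex={messages.length - 1}
      startReached={loadOlder}
      // Each row renders its own generated UI, so complex per-message
      // components work the same as plain text rows.
      itemContent={(_, message) => message.ui}
    />
  );
}
```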

    • @jamespratama9730
      @jamespratama9730 15 days ago

      @@raj_talks_tech Awesome, thanks so much!

  • @justbemeditation1860
    @justbemeditation1860 2 months ago +4

    Please wake me up 😢

  • @maskedvillainai
    @maskedvillainai 2 months ago +2

    Please start by learning machine learning fundamentals before popping out a bunch of regurgitated AI models, for which the goal of solving any problem at all in products has become a fairy tale. This solves nothing more than playing with tools that did the legwork already and want you to think you're doing something complicated and brag-worthy.

  • @evanethens
    @evanethens 2 months ago +1

    What about code privacy? How do you scale here?

    • @raj_talks_tech
      @raj_talks_tech 2 months ago +1

      As long as you don't send any data to the ChatGPT APIs, privacy shouldn't be a concern here. If you are sending some data to ChatGPT or any LLM API abstraction layer, then you most likely have to take care of it yourself.
      With regards to scale, it depends on your system architecture.