Build a Copilot with Azure AI Studio

  • Added 16 Jul 2024
  • In this episode of the #AzureEnablementShow, host Aaron Stark is joined by Natalie Mickey and Ivor Berry (‪@ivorjberry‬) from Microsoft Azure to talk about how you can use Azure AI Studio to build your own copilot.
    Natalie shares the documentation Microsoft provides to help developers get access to Azure AI Studio and the training they need to start coding.
    Next, Ivor walks through one of the Microsoft Learn modules and then demonstrates how to use Azure AI Studio and prompt flow to build a custom copilot. He shows how to connect a prompt flow to a searchable index of brochures from a fictional travel agency, how to chat with the generative AI model using context from the search resource, and how to deploy the prompt flow as an endpoint (a minimal sketch of calling such an endpoint follows the chapter list below).
    Resources
    • Collection: Build cutting-edge, market-ready, responsible applications for your organization with AI aka.ms/azenable/160/06
    • Learning path: Create custom copilots with Azure AI Studio aka.ms/azenable/160/03
    • Module: Introduction to Azure AI Studio aka.ms/azenable/160/01
    • Module: Get started with prompt flow in Azure AI Studio aka.ms/azenable/160/02
    • Technical documentation: What is AI Studio? aka.ms/azenable/160/04
    • Tutorial: Deploy a web app for chat on your data in the Azure AI Studio playground aka.ms/azenable/160/05
    Related episodes
    • Make Your Private Copilots Production-ready aka.ms/azenable/155
    • Develop Your Copilot Skills aka.ms/azenable/151
    • Develop your AI coding skills aka.ms/azenable/137
    • Watch the Azure Enablement Show aka.ms/AzureEnablementShow
    Chapters
    0:00 Introduction
    0:27 Copilot overview
    1:17 Access requirements
    1:33 Skilling resources
    2:05 Learning module
    2:50 Azure AI Studio
    3:40 Prompt flow creation
    5:58 Configuration
    6:51 Testing the flow
    8:25 Deployment
    9:07 Resources
  • Science & Technology
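
For readers who want to try the last step Ivor demonstrates, here is a minimal, hedged sketch of calling a prompt flow after it has been deployed as an online endpoint. The scoring URL, key, and input/output field names below are placeholders and assumptions, not values from the video; the real ones come from your deployment's Consume page and from how your flow defines its inputs.

```python
# Hedged sketch: calling a prompt flow deployed as an online endpoint.
# SCORING_URL, API_KEY, and the field names "chat_input" / "chat_history" are
# placeholders; your flow's actual input names are defined in Azure AI Studio.
import requests

SCORING_URL = "https://<your-endpoint>.<region>.inference.ml.azure.com/score"  # from the Consume page
API_KEY = "<your-endpoint-key>"                                                # from the Consume page

payload = {
    "chat_input": "Which destinations in the brochures offer beach holidays?",
    "chat_history": [],  # prior turns, if your flow expects them
}

response = requests.post(
    SCORING_URL,
    json=payload,
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    timeout=60,
)
response.raise_for_status()
print(response.json())  # the flow's output field(s), e.g. "chat_output"
```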

Comments • 9

  • @dennism3906 (1 month ago, +2)

    Thanks folks, that was nice and quick.

  • @nuclear_AI (2 months ago, +2)

    This is AMAZING!
    Initial thoughts 💭
    1 - How much data can be embedded?
    2 - Does response quality decline linearly with embedded database size?
    3 - Can this handle multidimensional data (xls/CSV)?
    🤔
    All thoughts/answers/responses highly appreciated.
    🤜🤛

    • @ivorjberry (2 months ago, +1)

      Hi @nuclear_AI! Somewhat long response, but I hope this helps provide some clarity.
      1 - The amount of data in your vector store varies by the data source you use and the pricing tier of your search resource. For example, if you use AI Search in the Basic tier, you get 15 GB of storage across 3 partitions (5 GB each), with approximately 1.1 billion floats per partition. A lot of that is determined by how your data is chunked: larger chunks mean fewer embedded vectors, while smaller chunks give more granular (and accurate) responses but take up more space.
      2 - Response quality doesn't decline linearly with embedded database size. All of your grounding data is stored as individual vectors in your database; the user input is then vectorized and compared to nearby vectors, which is the magic that provides semantic results.
      There is a possibility that with too much data you'll have too many vectors in a small region of the multi-dimensional space, and since only the top responses are returned by the search resource, the data you *actually* want could get missed if the user input isn't specific enough. Again, this can vary greatly depending on how your data is cracked and chunked when writing to your vectorized index.
      3 - As with standard indexes, how you delimit files like CSV/xls plays a role in how well that data is represented in the vector index. You can specify the delimiter in a custom index to split it however makes the most sense for your data. When working with these multidimensional files, you'll likely need to test and iterate on your chunking methods to determine which way of splitting the files works best. There is a great tutorial in the docs that generates embeddings from a CSV that you might find helpful: learn.microsoft.com/en-us/azure/ai-services/openai/tutorials/embeddings?tabs=python-new%2Ccommand-line&pivots=programming-language-python
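
To make the chunking and similarity points in the reply above a bit more concrete, below is a minimal sketch in Python using the Azure OpenAI SDK. The endpoint, key, embedding deployment name, file name, and column layout are all assumptions for illustration; it shows the chunk, embed, and compare flow in memory rather than a real Azure AI Search index.

```python
# Hedged sketch of the chunk -> embed -> compare idea discussed above.
# The endpoint, key, deployment name, and brochures.csv are placeholders.
import csv

import numpy as np
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-openai-resource>.openai.azure.com/",
    api_key="<your-api-key>",
    api_version="2024-02-01",
)

EMBEDDING_DEPLOYMENT = "text-embedding-ada-002"  # name of your embedding deployment

def embed(texts):
    """Return one embedding vector per input string."""
    response = client.embeddings.create(input=texts, model=EMBEDDING_DEPLOYMENT)
    return [np.array(item.embedding) for item in response.data]

# (1)/(3): "chunk" a CSV by turning each row into a small text snippet.
# Larger chunks mean fewer vectors; smaller chunks give more granular matches.
with open("brochures.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))
chunks = [" | ".join(f"{k}: {v}" for k, v in row.items()) for row in rows]
chunk_vectors = embed(chunks)

# (2): vectorize the user input and compare it to the stored vectors.
def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

question_vector = embed(["Which destinations offer beach holidays?"])[0]
ranked = sorted(
    zip(chunks, chunk_vectors),
    key=lambda pair: cosine_similarity(question_vector, pair[1]),
    reverse=True,
)

# The top-ranked chunks are what a search resource would hand the model as grounding context.
for chunk, vector in ranked[:3]:
    print(round(cosine_similarity(question_vector, vector), 3), chunk[:80])
```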

  • @JonathanJournal (11 hours ago)

    You have an endpoint... then what? Can it connect to Copilot Studio?

  • @Kinnoshachi (1 month ago, +2)

    ❤ Microsoft Learn

  • @dmb-uk (2 days ago, +1)

    Misleading title, as you have NOT built a Copilot. You just played within the Azure AI Studio playground.
    Where is the Copilot that end users can actually use?

  • @farhannahbadruzaman5357 (1 month ago, +2)

    Hi there, I'm quite new to Microsoft and trying to understand more about the AI capabilities. Can we create copilots that work alongside Azure DevOps? For example, I'd like to create a copilot that can generate user stories and then create those tickets on my board. I'm also really interested in creating a copilot that specialises in test generation. Would it be possible to set a copilot up so that it reads my repo as a data source and then generates e2e tests for it? Maybe it even outputs the tests in a new branch back on the repo?

    • @codersexpo1580 (1 month ago)

      Great questions! I have the same ones and am looking for answers. Have you found any information on this yet?

  • @JaredArms (2 months ago, +2)

    🧠🧠