Prompt Engineering | How can I talk to AI Efficiently?
- Added: 26 Jul 2024
- This video covers how to leverage prompts to talk to AI more efficiently
📌 Related Links
================
🔗 AI Primer Playlist - • AI Primer
⏱ Chapter Timestamps
====================
00:00 - Intro
00:30 - Agenda
01:01 - What is Prompt Engineering?
02:31 - Zero Shot Learning
04:58 - One Shot Learning
07:01 - Few Shot Learning
08:03 - Tokens, BPE and Vectors
11:00 - AI Hallucinations
12:05 - Food for thought
12:45 - Summary
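The zero-, one-, and few-shot prompting styles covered in the chapters above differ only in how many worked examples you place before the actual question. A minimal sketch (illustrative only, not code from the video — the task and reviews are made up):

```python
def build_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a prompt with zero, one, or more in-context examples."""
    lines = [task]
    for inp, out in examples:
        lines.append(f"Input: {inp}\nOutput: {out}")
    # The final, unanswered query the model is asked to complete.
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

task = "Classify the sentiment of each review as positive or negative."

zero_shot = build_prompt(task, [], "The battery died in an hour.")
one_shot = build_prompt(task, [("Loved it!", "positive")],
                        "The battery died in an hour.")
few_shot = build_prompt(task, [("Loved it!", "positive"),
                               ("Total waste of money.", "negative"),
                               ("Works exactly as described.", "positive")],
                        "The battery died in an hour.")
```

The same query gets easier for the model as more demonstrations are prepended, at the cost of consuming more tokens.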
Join this channel by contributing to the community:
/ @techprimers
📌 Related Playlist
================
🔗 AI Primer Playlist - • AI Primer
🔗Spring Boot Primer - • Spring Boot Primer
🔗Spring Cloud Primer - • Spring Cloud Primer
🔗Spring Microservices Primer - • Spring Microservices P...
🔗Spring JPA Primer - • Spring JPA Primer
🔗Java 8 Streams - • Java 8 Streams
🔗Spring Security Primer - • Spring Security Primer
💪 Join TechPrimers Slack Community: bit.ly/JoinTechPrimers
📟 Telegram: t.me/TechPrimers
🧮 TechPrimer HindSight (Blog): / techprimers
☁️ Website: techprimers.com
💪 Slack Community: techprimers.slack.com
🐦 Twitter: / techprimers
📱 Facebook: TechPrimers
💻 GitHub: github.com/TechPrimers or techprimers.github.io/
🎬 Video Editing: FCP
---------------------------------------------------------------
🔥 Disclaimer/Policy:
The content/views/opinions posted here are solely mine and the code samples created by me are open sourced.
You are free to use the code samples in Github after forking and you can modify it for your own use.
All the videos posted here are copyrighted. You cannot re-distribute videos on this channel in other channels or platforms.
#PromptEngineering #AIPrimer #GenerativeAI
I think we need to train the model to integrate with a real-time weather API for these kinds of prompts so it can fetch real-time information. Also, the prompts should be fine-tuned to ask users for basic info like country/location.
Hi Surya, training the model is a costly operation and does not fit a real-time use case where you need to plug into external services/dependencies. That is why Retrieval-Augmented Generation (RAG) was introduced. We will look at the RAG architecture in detail in the next video.
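The RAG idea mentioned in this reply can be sketched in a few lines: instead of retraining the model, retrieve fresh facts at query time and place them into the prompt as context. Everything below is hypothetical (the knowledge base, the keyword-overlap retriever); real systems use vector similarity search over embeddings:

```python
# Toy knowledge base standing in for an external data source.
KNOWLEDGE_BASE = {
    "weather_chennai": "Chennai, 26 Jul 2024: 31C, humid, light rain expected.",
    "weather_london": "London, 26 Jul 2024: 19C, overcast.",
}

def retrieve(query: str) -> list[str]:
    """Naive keyword-overlap retrieval; a placeholder for vector search."""
    words = set(query.lower().split())
    return [doc for doc in KNOWLEDGE_BASE.values()
            if words & set(doc.lower().replace(",", "").split())]

def build_rag_prompt(query: str) -> str:
    """Stuff the retrieved documents into the prompt as grounding context."""
    context = "\n".join(retrieve(query)) or "No context found."
    return (f"Context:\n{context}\n\n"
            f"Answer using only the context above.\nQuestion: {query}")

prompt = build_rag_prompt("What is the weather in chennai right now?")
```

The model never needs retraining; updating the knowledge base is enough to keep answers current, which is exactly why RAG fits real-time use cases better than fine-tuning.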
Looking forward to more videos on AI/LLMs!
Regarding the food for thought question:
I think if there is enough past data available, like the temperature on this day at this time for the past few years, along with other weather-related data points like an incoming storm, then LLMs can make a reasonable prediction about the current temperature. Of course the accuracy might be off, but I would expect to at least get a close-enough result in most cases.
And now that I think of it, this is similar to technical analysis for stocks.
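The commenter's idea can be sketched with a simple statistical estimate over hypothetical data (the temperatures and the storm adjustment below are made up for illustration):

```python
from statistics import mean

# Temperatures (C) recorded on the same date and time over past years.
history = [30.5, 31.2, 29.8, 30.9, 31.4]

# Adjust for a known weather signal, e.g. an incoming storm cooling things down.
storm_incoming = True
storm_offset = -2.0 if storm_incoming else 0.0

# Baseline estimate: historical average plus the current-signal adjustment.
estimate = mean(history) + storm_offset
print(f"Estimated temperature: {estimate:.1f}C")
```

Like technical analysis for stocks, this only extrapolates from history, so the estimate can be close most of the time yet badly wrong on unusual days.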
Good one buddy. This is how we were all using Machine Learning.
However, after the introduction of RAG, we can plug an internal knowledge base or source of truth into the LLM flow. My next video will cover that. Thanks for commenting.
It looks like: getting info from a data source --> analysing for decisions --> processing --> updating the knowledge-base sink --> generating observations.
Spot on Kunal!
🙂🙏👍