What are the LLM's Top-P + Top-K?
- Added August 5, 2024
- VIDEO TITLE
What are the LLM's Top-P + Top-K?
VIDEO DESCRIPTION
In this video, we delve into the concepts of Top-P and Top-K in Large Language Models (LLMs) as applied in the field of Data Science. Understanding these concepts is crucial for optimizing model behavior and achieving the desired results. Stay tuned to gain insights into how Top-P and Top-K influence the outputs of LLMs.
GITHUB URL
No code samples for this video
OTHER NEW MACHINA VIDEOS REFERENCED IN THIS VIDEO
What are the LLM's Top-P and Top-K? - • What are the LLM's Top...
What is the LLM's Temperature? - • What is the LLM's Temp...
What is LLM Prompt Engineering? - • What is LLM Prompt Eng...
What is LLM Tokenization? - • What is LLM Tokenizati...
What is the LangChain Framework? - • What is the LangChain ...
CoPilots vs AI Agents - • AI CoPilots versus AI ...
What is an AI PC? - • What is an AI PC ?
What are AI HyperScalers? - • What are AI HyperScalers?
What is LLM Fine-Tuning? - • What is LLM Fine-Tuning ?
What is LLM Pre-Training? - • What is LLM Pre-Training?
AI ML Training versus Inference - • AI ML Training versus ...
What is meant by AI ML Model Training Corpus? - • What is meant by AI ML...
What is AI LLM Multi-Modality? - • What is AI LLM Multi-M...
What is an LLM? - • What is an LLM ?
Predictive versus Generative AI? - • Predictive versus Gene...
What is a Foundation Model? - • What is a Foundation M...
What is AI, ML, Neural Networks and Deep Learning? - • What is AI, ML, Neural...
AWS Lambda + Amazon Polly #001100 - • AWS Lambda + AWS Polly...
AWS Lambda + Amazon Rekognition #001102 - • AWS Lambda + AWS Rekog...
AWS Lambda + Amazon Comprehend #001103 - • AWS Lambda + AWS Compr...
Why can't you have AI driven Text Extraction? #001106 - • Why can't you have AI ...
Which Amazon ML / AI Service should you Use? #001110 - • Which Amazon ML / AI S...
Why can't I do Generative AI in AWS? #001112 - • Why can't I do Generat...
Why care about Foundation Models? #001113 - • Why care about Foundat...
Why play in Amazon Bedrock playgrounds? #001114 - • Why play in Amazon Bed...
Get a ChatGPT API Key Now! #001000 - • Get a ChatGPT API Key ...
AWS Lambda + ChatGPT API #001001 - • AWS Lambda + ChatGPT A...
Lambda + ChatGPT + DynamoDb #001002 - • Lambda + ChatGPT + Dyn...
Your own Custom AWS Website + ChatGPT API (part 1 of 5) #001003 - • Your own Custom AWS We...
Your own Custom AWS Website + ChatGPT API (part 2 of 5) #001004 - • Your own Custom AWS We...
Your own Custom AWS Website + ChatGPT API (part 3 of 5) #001005 - • Your own Custom AWS We...
Your own Custom AWS Website + ChatGPT API (part 4 of 5) #001006 - • Your own Custom AWS We...
Your own Custom AWS Website + ChatGPT API (part 5 of 5) #001007 - • Your own Custom AWS We...
KEYWORDS
#LLM
#LargeLanguageModel
#LLMTemperature
#NLP
#NaturalLanguageProcessing
#DataScience
#MachineLearning
#DataAnalysis
#DeepLearning
#LanguageModels
#AI
#ArtificialIntelligence
#RankingAlgorithms
#NeuralNetworks
#DeepNeuralNetworks
#TransformerModels
#Top-K
#Top-P
A really helpful video, thanks.
Glad it was helpful! Trying to get better with each video... If there are areas you would like to see me cover, please feel free to share... thank you...
Thanks man! Superb teaching
Glad it was helpful! Trying to make each video better and better... my goal is to help busy tech professionals get up to speed on all of this exciting technology...
clear and concise...thanks very much.
Thanks for watching! Working hard to create direct, clear, and concise videos... appreciate your feedback!
Good stuff here. Keep on!
Appreciate it! Trying to get better and better with each video... thanks for the feedback...
Awesome explanation !!
Glad you liked it! Trying to bring that much clarity, in a concise way, to every video... Thanks for the feedback.
Very helpful and clear. Thank you.
Glad it was helpful! Hyper focused on clear, direct videos that break topics down in a way that's easy to understand... appreciate your feedback!
Great video!
Glad you enjoyed it. Thank you for taking the time to provide feedback...
Excellent explanation! Thank you very much.
Thank you, sir... I appreciate your comments...
Excellent and educational content. I appreciate the effort put into making the concepts simple for others to understand. A small suggestion, if you don't mind: the background music is dominating your voice; it would be better to lower the BGM significantly or remove it altogether.
I really appreciate your feedback... yes, my son has indicated the BGM is a little too loud, and your feedback confirms his thoughts as well. I appreciate positive feedback but also helpful critical feedback in the spirit of getting better... thank you, sir... let me know if there are AI/LLM topics you would like me to cover in the near future... thank you.
Great content.
Thank you... Let me know if there are AI / LLM topics you are interested in...
Thanks for sharing! Can you make a video on GANs?
Thanks for asking... yes, give me a few weeks to research and I will get something out for you...
Thanks for making this video. It gave very good clarity.
I have one doubt: how does the model pick the final token to return? Even if these different techniques change the size of the candidate pool, the token with the maximum probability should always stay at the top. If so, how do these parameters impact the final output?
Good question.... So, the tokens in the "Candidate Pool" are each assigned a statistical probability. For example, the first token might be 0.50, or 50%; the second might be 0.25, or 25%; and so on. These probabilities come from the pre-training phase, when the LLM was trained on raw data. When the next word is selected, the model samples from the pool: the first token in this example will be selected 50% of the time, the second token 25% of the time, etc., so the top token is favored but not guaranteed. When Temperature is 0 you get a special case where all token probabilities collapse to zero except the highest-probability token, which is always selected; this gives deterministic output. As Temperature goes up, the probabilities are adjusted so that the lower-probability tokens in the candidate pool get a relatively larger share, which gives those tokens more chances of being selected. Is this explanation helpful?
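To make the reply above concrete, here is a minimal Python sketch of the idea (an illustration only, not the decoding code of any real LLM; the 50%/25% candidate-pool numbers are the hypothetical example from the comment):

```python
import math
import random

def apply_temperature(logits, temperature):
    """Turn raw scores into probabilities; lower temperature sharpens them."""
    if temperature == 0:
        # Special case from the reply: greedy, deterministic decoding.
        best = max(range(len(logits)), key=lambda i: logits[i])
        return [1.0 if i == best else 0.0 for i in range(len(logits))]
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_k_filter(probs, k):
    """Top-K: keep only the k most probable tokens, then renormalize."""
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep = set(ranked[:k])
    kept = [p if i in keep else 0.0 for i, p in enumerate(probs)]
    total = sum(kept)
    return [p / total for p in kept]

def top_p_filter(probs, p):
    """Top-P: keep the smallest set of tokens whose cumulative probability >= p."""
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep, cum = set(), 0.0
    for i in ranked:
        keep.add(i)
        cum += probs[i]
        if cum >= p:
            break
    kept = [q if i in keep else 0.0 for i, q in enumerate(probs)]
    total = sum(kept)
    return [q / total for q in kept]

# Hypothetical candidate pool from the comment: 50%, 25%, and the rest.
probs = [0.50, 0.25, 0.15, 0.07, 0.03]

# Top-K with k=2 keeps only the two best tokens (roughly 2/3 and 1/3 after renormalizing).
print(top_k_filter(probs, 2))

# Top-P with p=0.9 keeps tokens until their cumulative probability reaches 0.9.
print(top_p_filter(probs, 0.9))

# The final token is still sampled, so the top token wins most, not all, of the time.
random.seed(0)
token = random.choices(range(len(probs)), weights=probs)[0]
print("sampled token index:", token)
```

This is the answer to the question: the parameters matter because the model samples from the (filtered, renormalized) pool rather than always returning the top token; Top-K and Top-P decide which tokens survive, and temperature reshapes their weights.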