Tuning Your AI Model to Reduce Hallucinations
- date added 28. 05. 2024
- Discover watsonx → ibm.biz/discover-ibm-watsonx
You've probably heard a lot about AI hallucination, but is it a permanent flaw of generative AI, or can it be reduced or even eliminated through prompting techniques? In this video, IBM Distinguished Engineer Suj Perepa explains what AI hallucination is and why it occurs, and provides five prompting techniques that can be used to contain it.
00:00 - Description & examples of AI hallucination
01:22 - Types of hallucinations & why they occur
02:47 - Prompting techniques for containing AI hallucinations
03:08 - Temperature
04:59 - Role assignment
05:53 - Specificity
06:53 - Content grounding
07:42 - Instructional dos and don’ts
Get started for free on IBM Cloud → ibm.biz/sign-up-now
Subscribe to see more videos like this in the future → ibm.biz/subscribe-now
#ai #llm #aiprompt
Thanks for the info
Need to test.
3:28 How could a high temperature lead to greedy results? A high temperature flattens the token selection probabilities, which should produce more random, creative output than greedy decoding (always picking the highest-probability token).
As far as I know, temperature 0 makes the model more deterministic and repetitive, and in many documents I have read that a temperature of 1 leads to more creative answers. Can you provide links that support your explanation?
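The commenters' point about temperature can be checked directly: samplers divide the model's logits by the temperature before applying softmax, so low temperature sharpens the distribution toward the top token (near-greedy, more deterministic) while high temperature flattens it (more random). A minimal sketch, using made-up logit values for illustration:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then apply softmax.
    Lower temperature sharpens the distribution (near-greedy);
    higher temperature flattens it toward uniform (more random)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical next-token logits

low = softmax_with_temperature(logits, 0.1)    # sharp: top token dominates
high = softmax_with_temperature(logits, 10.0)  # flat: choices nearly uniform

print([round(p, 3) for p in low])   # ~[1.0, 0.0, 0.0]
print([round(p, 3) for p in high])  # ~[0.362, 0.327, 0.311]
```

So both comments are consistent with how samplers typically work: temperature near 0 is effectively greedy decoding, while higher temperatures increase randomness, which is often described as "creative" output.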
You are the best
After tuning much enough, you realize the only thing that had been hallucinating all the time was yourself
🤣🤣🤣🤣
😂
Please add an English voiceover