EVERYTHING You Need To Get Started With Local AI, LM Studio, Anything LLM, & RAG
- Published 9 July 2024
- Artificial Intelligence is becoming more and more prevalent in our everyday lives. In this video, Jordan breaks down how to get started with local AI. He explains AI terms such as LLMs (Large Language Models), generative AI, tokens, and RAG, covers the difference between local and cloud-based AI, and goes over the importance of prompts, showing you how to craft the best prompt for the most accurate response. A short sketch of querying a locally hosted model through LM Studio's API is included after the hashtags below.
We will be going deeper into AI and AI products on this channel (of course we will still cover all other tech and 3D printing), so make sure to subscribe so you don't miss any updates!
What are YOUR thoughts on LLMs? Have you used LLMs? If so, what for? Let us know in the comments below!
#microcenter #ai #artificialintelligence #llm #rag #largelanguagemodels #prompting #token #prompt #localai #llama3 #lmstudio #anthropic #contextwindow #generativeai #tech #futuretech
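If you want to go beyond the LM Studio chat window shown in the video, here is a minimal sketch of talking to a locally served model through LM Studio's OpenAI-compatible API. The port (1234 is LM Studio's default), the placeholder API key, and the model identifier are assumptions; adjust them to whatever your local server reports.

# Minimal sketch: chat with a model served locally by LM Studio.
# Assumes the Python "openai" package is installed and LM Studio's
# local server is running; the model ID below is hypothetical.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's default local endpoint
    api_key="lm-studio",                  # any placeholder works for a local server
)

response = client.chat.completions.create(
    model="meta-llama-3-8b-instruct",  # swap in whatever model is loaded in LM Studio
    messages=[
        # The system prompt steers the model's behavior, as covered in the
        # System Prompt Engineering chapter.
        {"role": "system", "content": "You are a concise, helpful tech assistant."},
        {"role": "user", "content": "Explain what a context window is in two sentences."},
    ],
    temperature=0.7,
)

print(response.choices[0].message.content)

Everything here stays on your machine; no request leaves your network, which is the core appeal of local AI over cloud-based services.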
____________________________________________________________
CHAPTERS:
00:00 - Intro
00:31 - Meet AL
01:45 - A.I. Jargon
05:28 - LM Studio
07:39 - The Chat Window
08:15 - Large Language Models & Tokens
09:16 - Context Windows
11:54 - System Prompt Engineering
13:34 - Setting Up System Prompt
18:31 - Prompt Results
19:53 - AI Made Website Coding
21:43 - Talking to Your A.I. Model for More Results
23:52 - Story Telling Prompt
27:46 - Keys To An Effective Prompt
29:21 - RAG or Retrieval Augmented Generation
34:35 - Local A.I. System on Your Network
35:20 - Outro
BEST video on the subject!! With time, this will hit a high number of views.
This should be the top video everyone watches before beginning their journey into local AI. I wish this video had been around 5 months ago. Great video! You got a sub.
I agree, outstanding video. It was exactly what I was looking for.
Subscribed. Great video, well explained.
Loved the video. Please upload the Anything LLM vid soon!!
I have 6GB of VRAM (RTX 4050) and can run a 9B Q6 LLM (Dolphin-Yi) perfectly fine. It's pushing near the max but works great. It's 100% uncensored as far as I know and nearly as good as any other LLM I've ever used, locally or online. Would be nice if I had more than 256GB of VRAM, though.
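For anyone wondering how a 9B-parameter model fits alongside 6GB of VRAM, here is a rough back-of-the-envelope sketch. The ~6.56 bits-per-weight figure for a Q6_K quant and the decision to ignore KV-cache and runtime overhead are assumptions; when the weights don't fully fit, tools like LM Studio offload the remaining layers to system RAM.

# Rough estimate of the weight footprint of a quantized GGUF model.
# Assumption: a Q6_K quant averages roughly 6.56 bits per weight;
# KV cache and runtime overhead are ignored here.
def model_size_gb(params_billion: float, bits_per_weight: float = 6.56) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

if __name__ == "__main__":
    print(f"9B at Q6 ~= {model_size_gb(9):.1f} GB of weights")
    # ~7.4 GB, so a 6GB card runs it by splitting layers between VRAM and system RAM.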
I can attest that, for all the CUDA cores and memory bandwidth of the 4080 Super, 16GB is nothing when it comes to generative AI.
What is the total price of the hardware you're using?
Is that a Cooler Master MasterFrame 700 case?
Yeah, that's the one. We use it as a test bench since it's easy to swap components. We'll be moving AL into a new case, so stay tuned for that video!
@microcentertech What other advantages do you see in that case besides the easy component swapping?