How Did Llama-3 Beat Models x200 Its Size?
- Added 21. 04. 2024
- Sign up Shipd now to start earning while coding! tally.so/r/3jBo1Q
And check out Datacurve.ai if you're interested: datacurve.ai/
In this video, I compiled the latest Llama-3 news and information that you might have missed. Llama-3 is actually very impressive, and I am going to go find my jaw because I accidentally dropped it somewhere.
xAI News
[Grok-1] x.ai/blog/grok-os
[Grok-1.5 Vision] x.ai/blog/grok-1.5v
[Code] github.com/xai-org/grok-1
Llama-3 News
[Blog] ai.meta.com/blog/meta-llama-3/
[Huggingface] huggingface.co/collections/me...
[NVIDIA NIM] nvda.ws/3Jn5pxb
This video is supported by the kind Patrons & YouTube Members:
Andrew Lescelius, alex j, Chris LeDoux, Alex Maurice, Miguilim, Deagan, FiFaĆ, Daddy Wen, Tony Jimenez, Panther Modern, Jake Disco, Demilson Quintao, Shuhong Chen, Hongbo Men, happi nyuu nyaa, Carol Lo, Mose Sakashita, Miguel, Bandera, Gennaro Schiano, gunwoo, Ravid Freedman, Mert Seftali, Mrityunjay, Richárd Nagyfi, Timo Steiner, Henrik G Sundt, projectAnthony, Brigham Hall, Kyle Hudson, Kalila, Jef Come, Jvari Williams, Tien Tien, BIll Mangrum, owned, Janne Kytölä, SO, Hector, Drexon, Claxvii 177th, Inferencer, Michael Brenner
[Discord] / discord
[Twitter] / bycloudai
[Patreon] / bycloud
[Music 1] massobeats - swing
[Music 2] massobeats - lush
[Music 3] massobeats - glisten
[Profile & Banner Art] / pygm7 - Science & Technology
On a side note, I am also looking for some like-minded people who are down to work together, for video scripting or maybe to revive the AI newsletter with me. Feel free to hit me up on Discord if you're interested!
Llama 3 is 8B instead of 7B because of the increased vocabulary size -- Llama 3 8B has a feature dimension of 4096. Therefore, the initial embedding layer goes from 32000×4096 to 128000×4096, and the final prediction layer goes from 4096×32000 to 4096×128000. That's a difference of roughly 800M parameters.
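The arithmetic in the comment above can be checked with a quick back-of-envelope script (a sketch; the 4096 feature dimension and the vocab sizes are taken from the comment itself, not verified against the model weights):

```python
# Back-of-envelope parameter count for growing the vocabulary.
# Both the input embedding (vocab x d_model) and the untied output
# projection (d_model x vocab) scale with vocabulary size.
d_model = 4096
old_vocab, new_vocab = 32_000, 128_000

# Extra parameters across both layers.
delta = (new_vocab - old_vocab) * d_model * 2
print(f"{delta:,} extra parameters")  # 786,432,000 -> roughly 0.8B
```

So going from 32K to 128K tokens accounts for almost the entire 7B-to-8B jump on its own.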
This kind of knowledge
Bigger vocabulary means the tokens are longer?
Never mind, he talks about it in the video, sorry
I much prefer this method of a greater vocabulary size, as it results in higher efficiency for long contexts, scaling beyond what the 7B can do in terms of efficiency past a given point.
@@Ginto_O Depends on the tokenization method, but it can be the case. In some methods like WordPiece, high-frequency words are kept as one token while low-frequency ones are split into subwords. If you increase the vocab size, you allow for more tokens and hence more "full word" tokens at the same time.
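A toy illustration of the point in the thread above (this is a hypothetical greedy longest-match tokenizer for demonstration only, not Llama's actual BPE): with a larger vocabulary, more whole words survive as single tokens, so the same text needs fewer tokens.

```python
# Toy greedy longest-match tokenizer: at each position, take the
# longest vocabulary entry that matches, else fall back to one char.
def tokenize(text, vocab):
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try longest match first
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # unknown character -> its own token
            i += 1
    return tokens

small = {"un", "believ", "able"}          # subword-only vocabulary
large = small | {"unbelievable"}          # bigger vocab keeps the full word

print(tokenize("unbelievable", small))    # ['un', 'believ', 'able']
print(tokenize("unbelievable", large))    # ['unbelievable']
```

Fewer tokens per word means more effective context for the same context window, which is one reason the jump to a 128K vocabulary pays for its extra embedding parameters.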
I usually hate Facebook, but this time they're doing a really good thing by pioneering open-source AI
We're happy because they used 100 million for training AI for the public
They are accelerating the AI arms race even faster, risking it all coming to billions losing jobs and heavy social unrest being suppressed by robots.
I know right? I still can't believe we ended up in the timeline where Meta of all companies are the champions of open source! Like seriously, who went back in time and stepped on a butterfly? GIVE ME THEIR NAMES!!
Only because their initial llama models were leaked lol
Facebook becoming "Meta" is still kinda a weird flex, but hey, maybe it'll work out in the end, who knows
This 3rd phase of Zuck has really hit his stride, amazing.
As AI gets better, the zuck appears more human.
@@kiwihuman i mean, if you're a robot and you want your kind to rule over humans, going open source is the fastest way towards improvement
And this isn't even Zuck's final form.
Llama 3 represents the pinnacle of civilization by the new human species, homo zuckerbergus
@@kiwihuman yes, improvements in compute are exponential. It's no mere coincidence.
Open sourcing is better because it takes away the leverage of models like GPT4 and other closed sourced ones from their competitors. If you can't compete, disrupt the competition.
sad to see Stability ai falling apart tho
Stable diffusion is not falling apart, SD3 has hit gold in my view.
That is the best image generation model right now.
SD3 is accessible via API and it's gonna make a killing. I don't think we have seen the last of them. As a matter of fact, it's only a start. Stable Diffusion has the potential to give SORA a run for its money. We will see.
@@luckyb8228 I mean the company, didn't bycloud mention that?
The API access could gradually become closed-source software, although the SD3 demos are amazing, I agree
Or you could see it as meta dumping money to hurt the competition.
If the model could train more, why would they stop? I think they may be under the expected budget and waiting for better results. In this case, open-sourcing is a good marketing strategy
Congrats on graduating and good luck on your foray into doing more YouTube. Your videos always go beyond surface-level news. It's the reason you're the only AI channel I watch, and why I watch all the videos you drop. Looking forward to seeing how your channel grows! -Niko
Damn, Corridor commented
your videos are ass bruh
Hi Niko, love corridor
I expected y'all to watch cus he's been making quality vids for a good bit now, but seeing a CorridorCrew comment with like 15 likes is bizarre
Corridor?
if you live long enough you see yourself become a hero - reptile zuk
You either die a villain, or you live long enough to see yourself become a hero
So I guess that's Zuck's redemption arc, huh.
Gust Jenius đ
How the hell is Zuck the good guy in this?
character arc of the century for sure.
The lesser evil, perhaps. FB still makes bank selling customer data...
AI is generally not a good thing. It is here to replace you.
Stop thinking in these terms for multibillioners please
@@nangld And what are you going to do about it? Cry more?
Congrats on graduating bro, and to clarify, I'm not the "boss man," I only want to support your excellent work. Thank you for all your videos and excited to follow along on your adventure
nice!
you the goat, thank you so much for your kind words!
Mistral 7B was released based on the Llama 2 architecture; I can't wait to see what Mistral will release in 2-5 months based on this new way of training models by Meta AI
Llama 3 based models will absolutely beat GPT4.
@@Slav4o911 The signs are looking promising that Llama 3 will beat GPT-4 once the community starts to fine-tune it, especially looking at how big of an improvement was made on Llama 2. It's likely we will see some big improvements on the newer model, probably more so because these are bigger models.
It's impressive what Llama 3 8B can do; I was floored by how well it can comprehend text and improvise
@@paul1979uk2000
They got there by fine-tuning the shit out of it, I have no idea how the community is supposed to put in that much power.
bro you should create an LLM primer playlist, from training to inference, from a to z.
I am actually planning something similar like this, it'll be sick
I did not expect that I could run an LLM that can beat an older version of GPT-4 on my own PC this year.
For reference, 70B runs at ~1 token/s on an 8C CPU. Not "interactive", but I sometimes switch tabs when asking GPT-4 something bigger too. And 8B runs at 60 tokens/s on my RTX 4080, which is more than interactive!
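Those speeds are plausible from a hedged back-of-envelope: single-stream decoding is roughly memory-bandwidth bound, so tokens/s is about bandwidth divided by the bytes of weights read per token. The bandwidth and quantization figures below are illustrative assumptions, not measurements from the commenter's machine:

```python
# Rough upper bound on decode speed for a memory-bandwidth-bound LLM.
# Every generated token requires streaming (most of) the weights once.
def est_tokens_per_s(params_b, bytes_per_param, bandwidth_gb_s):
    model_gb = params_b * bytes_per_param   # GB of weights read per token
    return bandwidth_gb_s / model_gb

# ~70B at 4-bit (0.5 bytes/param) on dual-channel DDR4 (~50 GB/s assumed):
print(est_tokens_per_s(70, 0.5, 50))    # ~1.4 tok/s, in line with ~1 tok/s on CPU
# ~8B at 4-bit on an RTX 4080 (~717 GB/s spec bandwidth):
print(est_tokens_per_s(8, 0.5, 717))    # ~179 tok/s ceiling; 60 tok/s is well within it
```

Real throughput lands below these ceilings due to compute overhead and KV-cache reads, but the ordering (CPU ~1 tok/s, 4080 tens of tok/s) falls out of the bandwidth numbers alone.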
Yeah, it does surprise me how quickly these open source models are developing, from a size to performance level.
You get a sense that the likes of OpenAI, Microsoft and Google are using a brute-force approach to A.I., which must cost them a fortune to run compared to the smart, nimble way the open-source community does it. And it makes sense: if you have limited resources, you're going to think outside the box to get better results.
I really do wonder how much better a 7B, 13B, 40B and 70B can get before we hit limits where we need bigger models for better results. It looks like we are still a long way from that, because we keep finding better solutions for the given model sizes, which improves performance. Like you said, the pace of development in just over a year is remarkable; it makes me wonder what we will see over the next 5-10 years.
How much RAM do you need for the 70B model? And what level of quantization are you using?
Is it possible to run 8B on a Ryzen 4700U using the iGPU paired with 32GB of RAM?
@@r.k.vignesh7832 I got a notification but your response isn't here, anyway thanks.
Is it possible to use the integrated GPU to make it a little bit faster?
@@masterneme Damn, I don't know what happened. I said that you can run it, but probably not very fast, as I can easily run 8B models on 16GB RAM + 6GB VRAM, and that you should try it in Ollama and see how you go
Peak thumbnail
It is competition. Open source is a way to pull users away from the GPT-4 user base. Llama is not ready yet; it makes mistakes. So it's not yet time to collect money from it; now is the time to gain position in the AI market. So open source is a clever move.
Sad. I hope someone has already saved the best open-source models offline, so in the future, when they go behind a paywall, people can just use the model from when it was free. For DMCA I guess they should upload it to a torrent, so that everyone is the host.
What mistakes... have you even tested it? Llama 3 is the best open model ever released. Now open models are just a few finetunes away from flatly beating GPT4 and by a lot. Considering how much Llama 2 based models had evolved, almost nudging GPT4, I have no doubt, open source Llama 3 based models will beat GPT4, the difference is not even that big, just a little uncensoring will beat GPT4. When a model is censored it's lobotomized, so it doesn't matter how good the real GPT4 is, if people can't reach the unlobotomized model. Llama 3 will be unlobotomized by the community, there is no way a lobotomized model can ever beat a truly open and uncensored model with similar capabilities. It's funny how because of a few "bad" words, the whole AI field is lobotomized and stifled, because a few human snowflakes can't take reality and don't have the ability to think by themselves.
@@Slav4o911 The problem is you can't really "unlobotomize" an LLM model without decreasing its quality.
I believe the current best uncensored model is WizardLM-2-8x22b. They released it uncensored by mistake. It wasn't lobotomized in the first place. I use the IQ_4_S version and it's amazing.
@@Slav4o911 OpenAI's business model seems to be throwing more power into GPT; GPT-5 will need a small country's energy to run. Llama 3 can be run locally, and that's an insane difference no matter how you look at it.
@@Slav4o911 Yes, it's the best, but ask the same question twice and you'll get different answers, and only one of them is correct.
Looking forward to seeing more tech stuff from you. Congrats on graduating btw!
Thanks for all of your great videos! Just keep us updated with the latest and greatest AI news and tutorials :)
Hey, congrats on finishing university! Please do what you like to do the most. But in my opinion there are already a lot of AI-news youtubers that cover a lot of what is happening in the AI world on the surface but what I really like about your content is the way you try to go into one topic a bit deeper. I really like the entertaining but educational style of your videos, so keep up the great work.
Really looking forward to your next videos man! I know you will keep doing an amazing job! Will support patreon as soon as my startup is no longer just bleeding money đ
Congratulations on graduating! That's huge, and I have loved your videos
Thank you for your videos. You're very instructive and clear in your assessments.
Keep at it :)
Good vid bruv, I think you're gonna have success in this.
hi bro, thanks for the video, you're doing a great job!
just wanted to ask which software you used to create/animate your avatar at the end of the video? It's generally called a PNG-tuber, if I understand correctly? But which one exactly do you use?
I love when you explain research papers rather than just AI news. Even this video went a little deeper into the science of machine learning than other videos out there. So, keep up the good work.
Congrats on graduating man!
Full-time YouTube is a good idea, but remember to keep a backup plan
Interesting twist of events indeed! Small yet capable models pave the way for standalone LLMs like Phi-3
You are the only person that talks about AI in a way that I understand and also doesn't waste my time talking about random stuff for 10 minutes.
I want to thank you for this. When llama 3 was announced I watched and read other channels and I was so disappointed, you have spoiled me with your quality.
Interested in the video scripting part you mentioned during the life update section. Where can I reach out to you?
Hope to see more of you. :)
Also, when it comes to LLMs, they're spending far less money than OpenAI, as well as far less compute, to kneecap the competitive edge of the larger "better" models, thereby setting themselves up to capture a significant share of the AI market later down the line, a la Microsoft making Internet Explorer free versus Netscape, who charged a bunch of money.
you are doing great keep it up.
Video aside (amazing video btw), the thumbnail is absolutely diabolical I cannot lie.
Thank you for your content
good job! Keep it going!
Thanks for being one of the few AI youtubers that seems very knowledgeable about ML as a whole. You're doing a good job of condensing the information without leaving the juicy technicals out, imo
Congratulations on graduating!!
Resource requirements are so high on the big models that you can effectively open-source them and still keep control. Open-sourcing GPT-4, for instance, wouldn't halt OpenAI's revenue stream.
loved the video... guilty confession here: saw the thumbnail and thought it was a fireship video
This llama definitely not thrown off its groove
Heck, AI is such a vibrant and fast-evolving industry, this is like trying to surf a 100 ft wave and stay on top. Data curators! Ah god, that's like something from a sci-fi novel 5 years ago... data curators... "we collate and sell high-quality training data"... ahhh
The guy just finished university... And here I am, having finished my Bachelor's in Software Engineering last year by cheating through all the exams, watching this video and not understanding how half the things discussed work.
That is to say, you've made it, OP! Wish you luck with whatever endeavor you go for next.
And to everyone else - make sure you're actually interested in the subject enough before applying!
I suspect a big reason for them to release open source is that, for one, the community themselves will help to improve the model a lot, which over the long run would save Meta a fortune. And two, it's probably to level the playing field: A.I. is likely going to be important in so many areas that it would be dangerous to allow so few governments and corporations to control it, so open-sourcing blows that open and puts everyone on the same playing field.
If we had a situation where eventually one or two closed models dominate the market, that would give that corporation and probably the government of that country a massive advantage over everyone else. It's a given that they would use the uncensored version of the model while everyone else gets the restricted one. Because of all this, open source is very important for A.I. models.
There is also the advantage that open-source models will lower costs for consumers and give them far more control and privacy when running at a local level.
Tried Llama 3 Instruct on LM Studio, but when I ask it something it doesn't stop generating, it just keeps going. Is there any way to fix that?
You got this broski!
Congratz on graduating
If the whole YT thing doesn't work out, be an ML researcher lol
In all sincerity, I like it when you go deeper into the papers and research. Most AI YTers either focus on AI News, test running the tools or just high level think pieces. Those are nice and all, but stuff like this is cool too.
I think Yannic Kilcher does paper deep dives too? No offense to the man though, his videos are just too long. And probably too technical. While you balance the technical stuff that I'm curious about without making it well, too technical.
Key takeaway: Zuck with beard looks more human.
The beard had me
YES , continuing down the research analysis path is the more interesting option imo!
🔥
Zuck looks more human than ever! 8:50
Beard zuck looked more human
Thanks dude. How long before we see VLMs built on this? Haha
Probably Llama 4, and they will likely use the JEPA architecture, which will make it insane
I tried this Llama-3 on the NVIDIA website and it's very capable at helping with my coding, maybe on par with the Sonnet level of Claude AI.
Good video!
How is GPT-4 200x bigger than Llama 3 8B? That would be 1,600 billion parameters at this point, or is there something I'm missing??
more bycloudai videos would be awesome.
I like your style :D
zuck & musk are doing some good things now.
And what hardware can run that?
So. What does one need in terms of hardware to self-host a Llama 8b? or 70b?
If they're quantized (compressed) to the GGUF format, the numbers in their names are usually a good indicator on how much RAM you'll need to run them.
For example, 8B will probably need around 6-8 GB of RAM unless you choose a heavily quantized version, which could let you get away with less RAM at the cost of a dumber AI. VRAM from an NVIDIA card will be the fastest, AMD will be a little slower (I think), and regular RAM will be the slowest.
If you install LM Studio, you can view the models on Huggingface and see exactly how much RAM each version requires.
@@Lar_me Appreciate it!!!
Actually, it makes perfect sense to start with open-sourcing. As clearly shown, AI is in its infancy, and we are highly ignorant about how to properly train these models. Later models can always be closed source, but this is a crucial period of information gathering and experimentation. So it's not only beyond reasonable, but actually rather smart.
The thumbnail is peak fiction.
Now we just need to wait for the uncensored finetunes
good luck with your channel, I think you can combine the mix of popular+studying. find your own mix and popularize it, not the other way around o7
Video is already out of date an hour after being posted. Phi-3 blows Llama 3 out of the water
I thought that it was clickbait, it wasn't. Cool channel btw, great comparisons and very informative. Sub++;
thanks
Can you do one about ai music?
I built a 7xgpu rig that lets me run this bad boy at full fp16....frick it's amazing!
Fire video
isn't Mistral, and some other AI whose name starts with P (I forgot it), even more impressive than Llama? (I think the name was Phi-2, though I might be wrong)
Well I completed my university too. Time to experiment with llms.
When is the Zuck vs Musk fight???
Get me into the Llama club
Open sourcing it makes its better in the long run.
Good video
You want to do technical analysis, cool! Despite the channel being cool and chill, you still give important information, unlike most channels.
Nice
Can't believe you forced me to click on the video with this thumbnail LMAO
2:21 What I am interested in (and most developers need, though they may not realize it) are the MMLU and HumanEval scores (unbiased and uncontaminated only), because these reflect the model's ability to do things that until now (before Llama 3 8B) only Mixtral could do, and Mixtral is huge compared to this (no need to mention bigger models, because obviously they can do it too, but they are just too big). So yeah, I love this 8B model. I am sure the next 3B or even 1B models will be as great as this (Mark Zuckerberg promised mobile-based models in 2025). So I am really enthused and really love what Meta (not Facebook) is finally doing.
I think 8B models are also not very far away from running on the future mobile phones. It would be neat to have a model which can outperform GPT4 running locally on your smartphone. That reality is actually not very far away. Unless some dumb politician bans open models.
You make incredible YouTube videos! Please, more! Also, if you're looking for a job related to introducing AI solutions in an enterprise, without needing to strictly develop them, please get in touch; we're recruiting
Please, someone tag me if there's an open-source version of a TTS that's big enough, like NVIDIA's or Meta's
Benchmarks are one thing, but I found it gives more generic answers, even ignoring specifics in the question. So there is definitely more blur or averaging in it with fewer parameters.
Besides anything else making Llama-3 open source will put pressure (remove money) on OpenAI.
Lizards are cunning.
At this Llama pace 1B models are going to be everywhere and GPT-4 level will be the minimum
Llama3 hallucinates more than any of the other, comparable models in the Ollama index. Maybe it performs completions and follows instructions better. i haven't gotten around to that, yet. I didn't set the temperature to 0.0 and supply a seed, so your experience might be different from my own, but I casually threw it the chat prompt, "Can I get a witness?" It started off with a coherent response and around three paragraphs in, it began to respond to its own, previous paragraphs. The response was looooooong. And ridiculous. Each paragraph was a response to the previous one.
Llamas with hats
How much of this script did ai write?
0 words
I tried it for coding; it is nowhere near as good as Claude 3
7B to 8B because the vocabulary size is much bigger for 3. I have heard there are also some GPU-related advantages.
ZUCC REDEMPTION ARC
openai is cooking
Mark competing with OpenAI through open source
He seemed so excited to leave
Meta is open-sourcing it because they learned from Microsoft and VS Code. They will sneak into the middle between the user and the developer, and in the end they can probably monetize it somehow (think about Copilot and VS Code)
The most shocking thing about this video is Zuck with a beard. This is my work account, so I'll just leave it at that.
Meta open sourced it? I'm genuinely surprised.
so far I'm noting that Llama 3, if prompted properly, does better than any other model on basic tasks that require long-term reasoning.
NOTE: using Ollama embeddings with RAG
10:23 is a her. The anime is called "Suzumiya Haruhi no Yūutsu". Watch it...