More Cores vs. More Cache, What Boosts Gaming FPS?
- Published 1 Jun 2024
- Support us on Patreon: / hardwareunboxed
Join us on Floatplane: www.floatplane.com/channel/Ha...
Buy relevant products from Amazon, Newegg and others below:
GeForce RTX 3060 Ti - geni.us/yqtTGn3
GeForce RTX 3070 - geni.us/Kfso1
GeForce RTX 3080 - geni.us/7xgj
GeForce RTX 3090 - geni.us/R8gg
Radeon RX 6800 - geni.us/Ps1fpex
Radeon RX 6800 XT - geni.us/yxrJUJm
Radeon RX 6900 XT - geni.us/5baeGU
Radeon RX 5700 XT - geni.us/L6Mb2JK
Radeon RX 5700 - geni.us/LZ2jn7
Video Index:
00:00 - Welcome back to Hardware Unboxed
05:43 - Test System
06:55 - Rainbow Six Siege
09:26 - Horizon Zero Dawn
10:02 - Watch Dogs Legion
10:24 - Cyberpunk 2077
10:45 - F1 2020
11:15 - Shadow of the Tomb Raider
12:15 - Final Thoughts
Read the written article on TechSpot: www.techspot.com/article/2308...
Disclaimer: Any pricing information shown or mentioned in this video was accurate at the time of video production, and may have since changed
Disclosure: As an Amazon Associate we earn from qualifying purchases. We may also earn a commission on some sales made through other store links
FOLLOW US IN THESE PLACES FOR UPDATES
Twitter - / hardwareunboxed
Facebook - / hardwareunboxed
Instagram - / hardwareunboxed
Music By: / lakeyinspired
Quick note, I incorrectly said RTX 3090 at 9:45 and 12:12. I meant to say RX 6900 XT.
Thanks Steve
Honestly Steve this is the stuff I sub for, excellent work mate.
These serious issues made the video unwatchable, please reload! :)
Was this video sponsored by Gigabyte Aorus? ;)
@@FiveFiveZeroTwo no
"Cash generally provides the largest performance gains"
Nvidia Marketing Department: "Write that down, write that down!"
hahahaha
AMD: "Okay, let's try more cache... and stack it on the CPU."
@@c.h.4126 It worked. The 12900KS is the only CPU that beats it, but only just.
@@ronanwaring3408 that too, and only with DDR5
@@baldevrajkhera8592 no, the 12900K doesn't need DDR5 to get max performance. Sub-$100 Samsung B-die DDR4-4400 memory is enough to get max performance out of a 12900K.
5600G reviews brought the impact of cache into focus, showing that a 6C/12T Zen 3 with half the cache performs more like a 3600 than a 5600X in games.
Which is not bad, since the 3600 has 32MB of cache
@@riba2233 There are architectural gains for sure, but what matters is how much cache a single core can address. In this case, a single core can address 16MB of cache in both the 5600G and the 3600, since the 5600G is monolithic and the 3600 has 2 CCXs in a single CCD, so its 32MB of cache is split up into two chunks of 16MB.
So total cache size doesn't matter. What matters is how much cache can a single core address. Like 5900X and 5950X have twice the amount of TOTAL L3 cache as 5800X and 5600X but they barely perform any better. Because again that 64MB of cache is split between two CCDs and a single CCD has 32MB so even in Ryzen 9 a single core can only address 32MB, not the whole 64MB.
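The arithmetic in the comment above can be sketched in a few lines; this is a minimal illustrative model (the function name is made up, and the figures are the ones quoted in the comment, not measurements):

```python
# Minimal model: the L3 one core can reach is the spec-sheet total divided
# by the number of independent cache domains (CCXs/CCDs) it is split across.
def addressable_l3(total_l3_mb: float, cache_domains: int) -> float:
    return total_l3_mb / cache_domains

# Figures from the comment above:
print(addressable_l3(16, 1))  # 5600G: monolithic  -> 16.0 MB per core
print(addressable_l3(32, 2))  # 3600:  two CCXs    -> 16.0 MB per core
print(addressable_l3(64, 2))  # 5950X: two CCDs    -> 32.0 MB per core
```

Which is why the 3600 and 5600G end up with the same per-core figure despite different totals.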
The 3500 showed the same. The 3500 was basically a 3500X with half the cache, and it performed way worse.
Just imagine how Ryzen APUs would perform with 3D stacked cache onto it. DDR4 wouldn't even be much of an issue anymore, it would be the perfect cost effective way to do 1080p high gaming at over 60 fps. DDR5 is great, but everything related to it will be so expensive, the motherboards and the memory itself, that I struggle to see that as a budget option in the coming years.
@@riba2233 in the 3600 each core can access only 16MB of cache
in the 5600G each core can also access 16MB of cache
it's like comparing the 5800X with the 5950X: the 5950X has double the cache but only half of it can be accessed by each core (ofc the first comparison was between CCXs and the second between chiplets, but the amount each core accesses is still comparably the same)
"For best performance go for MAX cores and MAX cache for MAXIMUM power" - Jeremy Clarkson, 1998, perhaps
More POWAHHHH!
"Speeeeed and Power solves many things"
@@olifyne6761 and hammer
Sometimes, your genius is almost, frightening
Top gear? Nah, TOP CORE
"Cash provides the largest performance gains" - Ain't that right
I always snicker when i hear it pronounced, 'cash-ayy'! :)
It's nice when someone does the experimentation and comes up with real world answers, rather than just rehashing erroneous information.
I love this channel and Gamers Nexus for this.
Lol you mean Tech Deals?
@@Angel7black 🤫 shhh I’m not willing to turn off all my other programs like Corsair iCue when I game 😂 That way I get to justify my 5900X and agree with all 3 channels.
Definitely applies towards TechDeals. I dropped them a year ago. They went downhill when he added his useless wife to the channel.
It’s nice research however is missing real-world user experience which is where more cores shine better
"More Cores vs More Cache, What Boosts Gaming FPS?"
R. G. B.
Gotcha! Now you have to revise your editorial direction yet again.
There's cash in my CPU? No wonder it's so expensive 🤑
I do fear for future CPU performance in the "cashless society" everyone keeps talking about
Hint, if you open it up you will find gold
Now i get it why AMD has become greedy, even after providing half the cash in their 5600g and 5700g APUs they are still asking for more cash than what we expected 🤔
ayy lmao
I just smashed my 3600 with a hammer, not a single dime in sight. I feel betrayed...
These “for science” videos really butter my rolls
I like how you back everything with experiments instead of speculation.
Well, that's what they do on this channel. It's harder to debate/debunk actual findings than "well, I think this is how it's s'pose to work."
No matter what the subject is, if there isn't data to back it up, the opinions are not worth taking seriously.
@@Toastmaster_5000 That's why I unsubscribed from the Tech Deals YT channel, all talk with no proof
This is an incredibly complex discussion to have without angering multiple groups of people with varying perspectives; very impressed
No matter the findings, you're going to piss off someone.
No i'll disagree. I built my third build back when "4 cores is enough for gaming" was the mantra. that was maybe 7 years ago and i'm still on the same system. I hate my choice, wish I got the 4c8t instead of the 4c4t and I won't be making that mistake again. I'm doing 8 cores.
I'm seriously looking forward to the testing of AMD APUs with 3D V-cache next generation. If it's possible to have an APU with a similar amount of cache as the standard desktop part, combined with RDNA cores, using DDR5 memory, that's pretty exciting for mini builds.
I wouldn't hold my breath for stacked cache on APUs unless AMD also goes the chiplet route for those products.
DDR5 Zen4 rdna2 APU with let's say 12cu could be a serious weapon at 1080p. If you could somehow squeeze 18cu then utilizing FSR 1440p could be achievable
Yep, I’m looking forward to a 5900xt with 3D V-cache of 192mb… it should give incredible performance boost in games like Overwatch, where you’re trying to maximize frames at the lowest settings and are therefore entirely CPU & Memory bound.
Stacked cache seems to increase the price of the CPU significantly, so we're unlikely to see it in low-end APUs… but yeah, a V-Cache version of a future AMD APU would make a good budget gaming option for 720p gamers!
I want to switch from Intel to amd but not sure if it's worth it
So you're saying L3 cache is important for games... You might call it "game cache" then? 😁
L3 cache is important for a lot of things too; gaming is only one potential area where it makes a difference
@@DuyLeNguyen I realise that, it was just funny to think back to the amd marketing 😉
@@geraar88 Probably best just to ignore marketing entirely, I don't think you'll miss out on anything
@@DuyLeNguyen Marketing is indeed marketing, but the cache definitely helped a lot in performance in games. So while the marketing name was stupid, what it was implying wasn't wrong.
Don't give them ideas, they will start releasing less cores with more cache for the same/moar price as more cores and slap the "gamer" sticker on it... mark my words.
As a game programmer, I'd say that to take advantage of all of a CPU's cores/threads you need to create quite a few more threads than the number of logical threads the CPU has, as a single thread doesn't usually keep any core maxed out and changes processing core according to the OS scheduler.
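A minimal sketch of that oversubscription idea (illustrative Python, not actual engine code; the 2x factor is just a common heuristic, not something from the comment):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def small_task(n: int) -> int:
    # Stand-in for one unit of game work (an AI tick, a physics island, ...)
    return sum(i * i for i in range(n))

logical = os.cpu_count() or 4
# Create more workers than logical threads so the OS scheduler always has
# something runnable when a task stalls or gets migrated between cores.
with ThreadPoolExecutor(max_workers=logical * 2) as pool:
    results = list(pool.map(small_task, [10_000] * 32))

print(len(results))  # 32
```

The same pattern applies with any task system: submit many small tasks and let the scheduler pack them onto cores, rather than pinning one big thread per core.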
Didn't know cache has such a big impact on the equation. Very informative video.
It's like having to go to your pantry to get bread to make your sandwich vs having to drive to the store and buy a loaf first (which you then keep in your pantry). The larger your pantry, the fewer long, slow trips you have to make to the store.
The information pipeline between CPU and RAM is always: RAM => L3 => L2 => L1 => registers (where the calculations happen)
If something is in the L3 cache then there's no need to hit RAM; if it's in L2, no need to hit either RAM or the L3 cache. It saves time because caches are much faster than RAM for two reasons: they're physically closer to the cores and they're small memory devices (smaller memory is always faster). That's also why increasing cache size and/or changing cache structure is quite the trade-off: it's left outside the video, but caches do have an internal structure, and that changes how fast they perform.
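That hierarchy is what the classic average-memory-access-time (AMAT) formula captures. A sketch with invented round-number latencies and miss rates (not measurements from any real CPU):

```python
def amat_ns(levels):
    """Average memory access time for a cache hierarchy.
    levels: list of (hit_time_ns, miss_rate), ordered L1 -> RAM;
    the last level (RAM) must have miss_rate 0.0."""
    total, reach = 0.0, 1.0  # reach = fraction of accesses that get this far
    for hit_ns, miss_rate in levels:
        total += reach * hit_ns
        reach *= miss_rate
    return total

# Invented numbers for L1, L2, L3, RAM. A bigger L3 shows up purely as a
# lower L3 miss rate, and the average access time drops accordingly.
small_l3 = amat_ns([(1, 0.10), (4, 0.20), (12, 0.50), (80, 0.0)])
big_l3   = amat_ns([(1, 0.10), (4, 0.20), (12, 0.25), (80, 0.0)])
print(small_l3, big_l3)  # small_l3 ≈ 2.44 ns, big_l3 ≈ 2.04 ns
```

Halving the L3 miss rate here shaves roughly 16% off the average access time, which is the kind of effect the video's benchmarks measure indirectly.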
Tech Deals shakes his fist angrily at this video 😂😂😂
To be fair, Tech Deals said exactly that: benchmarks do not account for nearly all of the user experience. Compare a six-gear 400hp car that might be just as fast as a ten-gear 400hp car. Their 0-60 time might be the same but it will probably feel very different.
They even admitted in this video that core "task-switching" comes at some latency cost.
Tech's argument for more cores being better is that these benchmarks are done on a clean computer. Your average person has various programs running in the background.
I would love to see comparisons of the same CPU on a clean test bench vs a computer with a Twitch stream running on a second monitor or something like that.
@@bw-mx1dy He already covers that point at 16:30. Twitch does not count as a "core-heavy application" otherwise Steve wouldn't consider it odd.
Tech Deals just talks out of his fucking ass a lot of the time because he has long podcasts and filler talk to fill time
This is also why the rare i7 5775C punches way above its weight in gaming benchmarks: it has 128MB of eDRAM acting as an L4 cache. Would be nice to see someone revisit it at some point.
What? I didn't even know about this CPU
@@erikbritz8095 Most people don't - it barely existed between Devil's Canyon and Skylake. Only released a few months before Skylake's launch, with only the i7 5775C and i5 5675C for desktops.
The Iris Pro 6200 in them as well is far better than the HD 630...
@@dabombinablemi6188 ill look it up cuz i just saw that its still a 4 core cpu BUT the i7 4790k still does well and i7 7700k as well as i3 10th gen so this i7 5775C should do decently good for the cache.
@@erikbritz8095 In some games the i7 5775c keeps up with the 4790K, despite being clocked far lower...it really is an interesting CPU.
last time i checked it got wrecked by skylake due to the relatively low ipc and immature node.
First video I've held a ton of interest in knowing the answer to in a LONG time. Definitely gives credence to the 3D stacked cache AMD have debuted before. If they release a higher cache 5800X, I will buy it. Games I play absolutely love higher cache amounts so... it's nothing but an absolute win for me. :D
Wait for benchmarks to see if the extra cache will actually help. It might not scale like you think or it might.
@@Pinardi1 Granted, they were first party benches, and we don't even know if these CPUs are going to be launched or if they were just a publicity stunt, but AMD has been pretty spot on with their benches so far and what they've shown off. They reported a 5-15% uplift just from the cache increase so... we'll see, but yeah... I feel ya... I'll probably wait.
@@imglidinhere Oh the V-cache CPUs are definitely coming, it's the next product line-up from AMD as Zen 4 is not until late 2022. How full the line-up will be remains to be seen, but you can expect a 5800X, 5900X and 5950x respins with v-cache at least. I think they'll also bump boost clocks by up to 200Mhz too, as the boost clocks on the box were already about 100Mhz lower than what the 5900X and 5950X were boosting to real world. A 5900X with 5Ghz boost (up from 4.8ghz) with V-cache will challenge or beat Alder Lake in gaming, that is why I think they will do it.
@@emerestthisk990 oh it doesn't need to beat alder lake to keep my interest, I just need it to exist before I can make a final decision.
@@imglidinhere Going to be interesting to find out in any case
I never bought the whole idea of "you need X number of cores". It is mostly about the overall multicore performance. A 6-core CPU like the 5600X will always outperform an 8-core CPU like the 1700X even if the game "needs" 8 cores.
I thought it's mainly because 5600X has a much higher IPC and lower ccx latency.
@@Doyyumhwaji thats exactly why
A 6-core processor is generally better than an 8-core whose cores each have 6/8ths of the performance, given equal cache
There's always some part that is not parallelised.
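That's Amdahl's law in a sentence. A quick sketch with made-up numbers showing why a faster 6-core can beat a slower 8-core (the 60%-parallel figure and per-core performance ratios are pure illustration):

```python
def effective_speed(parallel_fraction, cores, per_core_perf=1.0):
    # Amdahl's law, scaled by single-core performance: the serial part
    # runs on one core, the parallel part is split across all of them.
    serial = 1.0 - parallel_fraction
    return per_core_perf / (serial + parallel_fraction / cores)

fast_six   = effective_speed(0.6, 6, per_core_perf=1.3)  # newer arch, higher IPC
slow_eight = effective_speed(0.6, 8, per_core_perf=1.0)  # older arch, more cores
print(fast_six > slow_eight)  # True
```

With a serial fraction in the frame, extra cores hit diminishing returns fast, while per-core gains (IPC, clocks, cache) speed up everything.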
Games don't need a certain number of cores. If there were a 1-core CPU that ran at 50GHz, it would be enough to run any game.
The problem is people making false assumptions.
I.e., the game is faster on a 5600X than a 1700X, therefore the game must not use more than 6 cores.
There's a giant chasm of errors there.
Never looked at the core/thread usage over time.
Never accounted for IPC.
Never accounted for clock speed.
Never accounted for caches.
Etc.
It's like saying a 4770K is faster than an i3 9100, therefore the game *requires* SMT.
Thanks for the info. Honestly, I was ignorant of the fact that L3 cache has this much significance on performance. There is one fairly well-known techtuber out there that is spreading misinformation, where they overemphasise the significance of core count.
The term "misinformation" is trendy, but it's also bogus in many cases. The proper term is "different perspective."
@@rangersmith4652 Yeah, I take "misinformation" as meaning there is malintent behind it. Regardless, I believe everyone should shoulder the burden of comparing information and coming to the proper conclusion themselves, and stop empowering all the "misinformation" that's put into the ether.
Tech deals?
@@sukhoifockewulf9371 I agree; we should be free to present our views, whatever they are, so that all points of view can be considered. There is no need for the term "misinformation." It's a term of convenience used by the narrow-minded to vilify statements, data, and opinions that run counter to their preferred narrative. There are factually true statements and factually false statements. But that's a very small part of modern discourse. The much larger part is opinion, and nobody's opinion is right or wrong. They're just opinions, different perspectives. The real issue is that most folks seem incapable of distinguishing fact from opinion, a truism that is continually exploited by media.
Tech Deals considers a longer time horizon and leans more toward productivity workloads… but he also falls into the same trap as many others: the consoles have 8 cores, so he predicts games will eventually need 8 cores. There may just be many generations of CPUs before 8 cores are needed.
I was waiting for someone to do exactly that testing you did. Thank you, Mr. Unboxed.
You guys are making an awesome job. Every time I come here and watch one of your content, I learn something. Thank you so much for the quality of your job Steve and Tim!
Very enlightening video! I would never have thought that a larger L3 would have made that much of a difference. Thank you, I will keep that in mind come next upgrade time!
This is great! Thank you for all the time put into this, and this was an enjoyable video.
Someone should link this to TechDeals. He's been claiming for some time that 6 cores are not enough (and that 32gb should be the standard). He also said that he "feels" the difference between 6 and 12 cores - whatever that means.
I upgraded from 16GB of ram to 32GB on his suggestion and I didn't notice any difference.
@@Icureditwithmybrain Sorry that happened to you, but yeah, you couldn't perceive any difference since 99.9% of the games take up between 6-12GB (probably the only exception here is Microsoft Flight Simulator when you max it). He showed as "proof" a screenshot of task manager with the RAM usage at 24GB - without even knowing that Windows will allocate that memory whether you use it or not.
Rule of thumb: if something isn't quantifiable in a benchmark, it's probably bullshit.
@@Sardath13 You don't need to feel sorry for me its not a big deal lol. At least ill be good on ram until the end of time.
Man, that guy...
I could not continue watching his videos after he was comparing the 2700X with the 9900K telling me 20 times that "if you want THE BEST, you get the Intel..."
@@WCIIIReiniger well for 1080p gaming he was right on that front
I wish you had also tested with just 4 cores enabled. It answers a different question than the one you asked, but it would be informative to see the performance drop for modern games.
Yeah a huge amount of people still have 4 core CPUs.
im still waiting on my i7 7700k :D
Awesome video! Loved this, please do more of this kind of testing!
Very interesting to watch rather than just GPU testing!
Nice to see numbers supporting the effect of additional L3 cache. I experienced this firsthand with my X58 system when I upgraded from an i7-930 to X5670. That single upgrade gave me another five years of life on that platform. Gaming performance was still more than acceptable when I finally upgraded, but I wanted a smaller, quieter system.
So, you upgraded from a *4c/8t* CPU to a *6c/12t* CPU, while the latter also has a faster I/O bus, slightly faster cores and support for faster memory. Its 32nm lithography (vs. 45nm) is probably the main reason for the lower TDP.
Also, L3 went from 8MB up to 12MB. So, that was an upgrade on multiple fronts. Not just L3 cache.
@@eukariootti1 The most impacting upgrade was the cache. Support for faster RAM didn't matter since I was using the same low-latency RAM with both CPUs. The only change I made was the CPU and my user experience drastically changed. The extra cores and threads certainly helped, but the extra cache helped more.
I called Tech Deals out on this the other day, and mentioned you guys, and apparently "charts weren't everything"...he was shitting on some guy for pairing a 3080 with a 11400f, and I said that all the guy had to do was run a higher res, and the CPU would almost be out of the equation then...it's amazing the misconceptions that still get perpetuated to this day...
The issue I see is a balanced PC. If you have money for a 3080, especially in this market, you have i9 or Ryzen 9 money. Don't put budget tires on a Ferrari.
@@Mopantsu Those things are such a non existent issue with current gen 6 cores unless you're running demanding tasks in the background. He's the one and only guy pushing this nonsense.
@@Mopantsu And frame times are almost never an issue these days, show me a game that'll peg a 11400f or 5600X to 100% at 1440p or above at any point in time? I'm not trying to be a dick, I'd genuinely like to see an example of a modern hexacore having microstutters from huge 0.1% low dips or 100% CPU usage spikes, especially at higher resolutions.
11400f is a very decent CPU though, even if lower end 😆
@@glynkatkin As most people don't just run a game, the extra cores are for more programs. Discord/Chat, Steam, Chrome browsers..and all the other junk the avg gamer has running while he plays. 6 cores is great if you ONLY played the game.
You nailed this one on the head!!! Fantastic video, thanks for your invested time into the channel!
Tech deals left the chat
These hardware investigation/experiment videos are why I'm subscribed to you guys, really interesting stuff.
Great video, great to see you back 👍 Tim is in hiding trying to regrow his tash 🤣
Stache
What an interesting conclusion to a fascinating experiment! Great work, Steve! Hope your new apprentice is doing well and will get a chance in the spotlight soon!
Interesting video again!
Many Thanks for these behind the silicon studies!
Love your work Steve & Tim. Such an interesting topic & you did a great job explaining it.
I'm so glad that I researched L3 cache before buying my CPU, and decided to get the 5600X because it seemed like it would be more important for gaming than other features
It's a great chip; I own one. It will struggle with Battlefield at 1440p unless graphics settings are maxed out. It can't do more than 160fps or it throttles like hell.
I'll upgrade from the 3600x once I have a better gpu than the 1070 ti
What are you running? I also own the 5600X paired with a 3080 Ti. I don’t have any issues with high frame rates at 1440p. Are you turning everything down to low or something like that?
@@BeeRad502 I was trying for the same 240fps frame rates I get in CoD, going to medium settings and stuff. Nope, it was like a Cinebench run.
@@sgtBONGwater lame
What a wonderful surprise to be learning something new. Love HBoxd
Steve this is the kind of informative and educational video that I love your channel for. It doesn’t matter how many games you test, it’s the fact that you’re isolating an often overlooked part of the hardware specifications. Well done!
This is an awesome video! Love the interesting test like this you guys do Keep it up!
So... you're saying gamers should all get an R9 5900X for MOAR CACHE!!!
You need a dual socket 64 core server processor actually. Anything less than 128 cores and you might as well not bother running games.
@@mrlescure lol that’s peasant grade we need upcoming 128 core zen4 in dual socket to go past 30fps on bf 2042 neta ;) .
@@mrlescure you sound like tech deals.
@@mryellow6918 To be fair, Tech Deals is right, but it won't become apparent for years to come. While it's true a faster 6-core can outperform an 8-core, consider the custom ASICs consoles use and the way games will most likely be coded in the future.
He is right to say 8 cores is the minimum for gaming today, in the sense that 8 Zen 2 cores will fall behind both consoles in the future. 6 Zen 3 cores will most likely suffer the same fate too.
@@dondraper4438 I more mean the fact he constantly goes on about things like a 3950x performs better than like a 10900k in games because 6 more cores. I personally believe 8 cores is what you want for the future because the consoles also have that.
That's why "Zen3D" with stacked L3 cache is so interesting, and the future of stacked cache as a whole. It will reduce the cost of large "on-die" cache.
@Yeltnerb 1 Quite the contrary: a stacked cache design decreases production costs. The question is whether the consumer will also benefit from that ;)
Thank you for this test!! I had this question for a long time.
nice test and interesting results could ya add this as a follow up video with the next cpus at the later date please
Easy to see why Intel hobbles the cache capacity on the i5 and i7 then, isn't it.
Perhaps, but I doubt it's actually intentional since you'd be losing your market position to the competition (which they're already losing due to other reasons). It's very likely that they did make some conscious decisions based on what they could do with the overall space available & the potential cost of the different parts on their die.
what do you mean by hobbles?
@@qweasdzxc hob·ble: 2. tie or strap together (the legs of a horse or other animal) to prevent it from straying.
My Phenom X6 1100T agrees with this statement.
same with my 1090T :)
@@drunkhusband6257 it's not
It is a pure 6core bro
@@drunkhusband6257 only the bulldozer till excavator architectures had the shared resources like that. The 1100T is still K10
@@riba2233 yep the Thuban (code name of the core) is a true 64bit true 6 core processor with 1thread/core :) and a shared L3 cache (besides the L2 and L1 for each core)
Request: 6-8-12 cores with stuff running in the background in 3 scenarios.
Scenario 1: Steam, Discord and some other light apps
Scenario 2: add chrome with 15 tabs and one video feed.
Scenario 3: add streaming and something heavier like a game server.
Thanks Steve ;-)
I agree with this. Add in some anti-virus/malware monitoring. Tired of seeing statements made but assumptions like everyone playing video games is a) building their own PC, and b) running a clean system with no background items. Every gamer I know is so PC illiterate that they barely understand major components but they have been hit by virus/malware and had to do a complete system wipe. Heck, most don't even update video drivers. They use the PC like a console but load tons more stuff onto it, then wonder why it bogs down.
This is the video Tech Deals should be watching, on a single monitor, not doing a hundred things at the same time on a single PC.
According to device manufacturers gamers want more red color and RGB.
That's pretty much correct even outside the desktop CPU industry. Team red here is the most desirable for gamers in terms of desktop CPUs. While Qualcomm Snapdragon, being the team red of smartphone SOCs, is the most desirable for mobile gamers.
That's because red makes it go faster. :D
@@Gnomleif I understood that reference.
Thank you for testing this. I always wondered how L3 cache affected Intel CPUs, now I know. 🔝👍
Thank you so much for doing this. I've been wanting a proper comparison between cache amounts for years.
I upgraded my 4930k (12MB cache) to a Xeon E5 1680 v2 (25MB cache) last year and saw a massive improvement in games, even old ones that only use 2 threads like Skyrim.
So much so that the E5-1680v2 at 4.2Ghz outperforms my 4790k at 4.7GHz in some cases.
I thought it was the extra cache but no one believed me on the forums.
This is VERY informative. Thank you for the hard work you put into this video.
Amazing work Steve - as usual. You have spoiled us with your unique aproach and objective/unbiased testing. Keep it up man, you guys are among the best!
This is some hardcore analysis, thank you.
I always like when you do these isolated comparisons.
Awesome video! Thanks Steve.
That Tech Deals guy is not gonna like this video. 😁
Tech's argument for more cores being better is that these benchmarks are done on a clean computer. Your average person has various programs running in the background.
I would love to see comparisons of the same CPU on a clean test bench vs a computer with a Twitch stream running on a second monitor or something like that.
@@bw-mx1dy that's why people buy an Elgato capture card, to offload the streaming burden. Also, many programs idle in RAM with no performance impact. Tech Deals doesn't understand this.
Cache is the reason why Ryzen is so competitive against Intel, especially in high FPS games like Valorant and CSGO.
Which is moot. People only focus on core counts, cache sizes and clocks because those are the only easily measurable things a CPU has at first sight. Everything else is microarchitecture minutiae that can't be easily explained or tested, so people don't talk about it much.
BTW out of those 3 things only 1 pertains to IPC (gaming or general), I wonder if people know which one...
Every game benefits from having more CPU cache. It's quite simple, really: the CPU always tries to load data into the memory level (physically) closest to itself. Whenever your data can't fit in cache, it has to be loaded from RAM, forcing the CPU to fetch data from much further away than it would from cache. This incurs additional latency, which adds up in your frametime.
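A back-of-the-envelope version of that "adds up in your frametime" point (all figures are invented for illustration, not profiled from any game):

```python
def frametime_ms(compute_ms, llc_misses_per_frame, miss_penalty_ns):
    # Extra stall time the frame pays for last-level-cache misses going to RAM.
    stall_ms = llc_misses_per_frame * miss_penalty_ns / 1e6
    return compute_ms + stall_ms

# 10 ms of raw compute per frame; ~80 ns per miss that goes all the way to RAM.
small_cache = frametime_ms(10.0, 50_000, 80)  # 14.0 ms -> ~71 fps
big_cache   = frametime_ms(10.0, 20_000, 80)  # 11.6 ms -> ~86 fps
print(small_cache, big_cache)
```

Even with identical compute, cutting the number of misses that reach RAM turns directly into frames per second, which is what the video's cache scaling shows.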
@@robertstan298 Educate. What help is that comment unless you provide the answer for those who maybe don't know or are misled? Instruction per clock, which thing pertains and why? Go...
That's not really true. Ryzen needs more cache to compensate for the higher worst-case latencies due to infinity fabric. It is true that it can provide speedups over intel for programs that rarely hit those worst-case latencies but it's not a general thing. Intel's CPUs (and others' monolithic designs) need less cache for the same average memory performance.
@@squelchedotter It's still true, though, because if they didn't have it compensating for the losses incurred from chiplet design, they wouldn't be nearly as competitive...
This kind of architecture testing is cool, thanks!
Really great video! Thanks!
Thanks for this brilliant piece of CPU science!
"Cash generally provides the largest performance gains"
I blame GPU makers for this...
This is information I was curious about a while back. Thanks
Interesting Video. Thanks, Steve!
Upgraded from an i7 10700F to an i9 10900 and there is NO DIFFERENCE, only wasted money... (GPU: RX 6900 XT)
@Blacksmith AutoWerks INC Using it for 4K gaming. You're absolutely right about GPU and CPU at higher resolutions! Between i5 (11th gen is fast), i7 and i9 there is no difference at 4K 👍
My favorite tech channel drops Dobby another sock!
I love your channel because you always go deeper than expected. Many thanks!
Great video. I learned quite a bit
It's hard to say how valuable this information is to consumers because I feel like as you go up in core count you also go up in cache amount, and since you can't just buy more cache without cores, idk. Still interesting though.
That's only on Intel. Were you distracted when watching this?
Wait till you see AMD copy this in the next platform. No more 32mb cache for an affordable 6 core baby. You want cache? You pay cash!
@@Owen-fn8ff Dude. You literally don't even follow tech.
@@K.R.X what makes you think AMD won’t do this? If they think they can and remain competitive, they will-that’s just a given
It is valuable if you care about value for money. Too many ppl waste money on 8 cores
Would love to see a comparison of the power consumption of the compared CPUs, with cores disabled to be equal. Cache is supposed to be pretty power-hungry.
It's the other way around, usually. Cache doesn't consume much power (and therefore isn't very hot despite the fact that it is typically more dense than logic), which is why AMD is able to layer slices of cache on top of each other in their upcoming Zen 3 refresh chips.
Really thorough. Thank you.
That was really interesting. Thanks for making this video.
TechDeals is really bad for these kinds of misconceptions. He is probably a big part of people's misunderstanding.
He kinda has the attitude of "you want more you pay more" as well as misleading.
@@77wolfblade Yeah, what boggled my mind the most about him is he would recommend a 10850K/10900K over a 5800X because they have 25% more cores but only about 10% more CPU performance on average, making the 10850K/10900K pointless sandwiched between the 5800X and 5900X, especially in today's market with better-priced Ryzen CPUs that aren't inflated like they were at launch.
....sponsored by NewEgg and also my clueless wife perma nodding - and yes, she really built 12 gaming chairs on her own....and all that whilst I m telling ppl that 5+5=11...and I must be right coz I m doing this for 30 years...oh and btw, don t listen to ppl providing empirical facts that allow conclusions, it s all witchcraft coz more cores are better anyway and 32GB of RAM are vital for smooth gaming
@@nonamenameless5495 yes my wife who looks like a blurrg from mandalorian who repeats everything I say and doesn’t know what she is talking about but gives advice as if she is an expert. I paid over 4k for 2 3090s and that was a deal. Also I’ll screenshot this download speed cause it will prove my point that more cores is better that will show everyone
if 4 cores is the minimum now, then as CPUs get more cores over time, more will be better
Depends on who you talk to. According to Steve, 6 would be enough, and the newer the process with more cache, the better. I go a step further and say you need more cores for OTHER programs you are using at the same time: 6-12 threads (6 cores) for games currently, and 4-8 threads (4 cores) for other items, plus any future demand as game engines and other programs become standard/online all the time. Going even further, MS could implement some kind of PC equivalent to the new consoles' more advanced I/O and compression/decompression features. The Xbox is supposedly equal to 6 Zen 2 cores and the PS5 to 11 cores (I read their respective "cores" as threads in this case, so I would cut the core requirement in half).
Excellent work, thanks for all the info!!!!!
Nice! Useful info cheers.
I wish you would show frame time graphs. Frame drops and such are what you really feel.
@@Mopantsu Somewhere Tech is raging
Indeed. Abrupt swings in a frame-time graph can be felt in game. But are they caused by a lack of core availability or a lack of cache availability? Tech Deals often discusses what he calls "user experience" and how, all else being equal, more cores make it better. It's a very subjective concept that can be felt but is not objectively measurable.
@@rangersmith4652 Depending on how the system handles more cores the experience could become "less desirable" the more cores you have. There's at least a bunch of older games that are more likely to crash due to instability if the CPU has more cores available. If you didn't know that, you'd definitely be able to say the user could "feel the better experience with less cores" 😆
@@markjacobs1086 I suppose so. But having more cores doesn't mean you have to use them. A quick trip into the BIOS fixes that problem.
@@rangersmith4652 It does, but that'd be based on the knowledge that more cores causes the problem, if you wouldn't know that you'd arguably have an inferior experience.
The only time I needed 8 cores instead of 6 was when I was playing a game on Steam and streaming something in the background through the web browser. Other than that, I never saw any kind of difference.
I watch my core usage in Afterburner on a 2nd screen when gaming and have yet to see a game max out all 6 real cores of my i7 8700K @ 4.9. Most of the games I play (FPS) either aren't able to take advantage of a higher core count, or the devs are still getting to grips with parallelising threads efficiently.
@@stevecade857 i
Very insightful. Thank you, sir.
Question: do you consider streaming to be a particularly demanding task, especially if you're not using Nvidia hardware encoding for the feed?
You put so much time and effort into this video and the community thanks you
I love this channel for exactly this kind of video. #1
You "NEED" 8 cores as much as you need a $1000 dGPU.
Well, with prices at the moment a $1000 GPU will get you a 3070 Ti, or if you're lucky a 3080
Before that you need RGB much more.
@@sopcannon you know what I mean.
What GPU you need is based on lots of factors, like the resolution and refresh rate you play at.
Thank you Steve @Hardware Unboxed for this very informative video. Very well said. Hope you could also make a video clarifying how much RAM is really needed - the 16GB vs 32GB vs 64GB argument. Thanks
Very, very interesting benchmark !!
Many thanks guys 🤠🤠
no! i will game on 12c/24t dual xeon machine until i die! i will hear no dissenting opinions!
thanks for the work y'all do, it's genuinely interesting stuff.
To be fair, the E5 2678 v3 is still a beast, especially if you bought it a while ago for 80€. Amazing workstation performance for multi-threaded applications, tons of cache, quad-channel memory. And that's at a cost below an R5 3600 while maintaining similar or better performance.
i just got a pair of 2699v3s for that phat 45MB L3 and 3.6GHz turbo.
Hmmm I remember noticing that the Zen 2 8 core Series X cpu had much less cache than a 3700x. *edit* I checked and it's got the same L2 cache, but only 1/4 the L3 cache of a 3700x
Zen 2 8 core IS 3700X, what are you talking about?
I'm going to assume you mean Zen+, not Zen 2
@@Harambe8 The consoles use custom 8 core Zen 2 chips. Those chips have less cache than the desktop 8 core 3700x. Not sure why that's hard to follow.
@@1983XR500 I'm just telling you what Microsoft etc. say. Or are you trying to start one of those goofy arguments like the "is PS5 really RDNA2"?
@@PookaBot in the context of PC, they got mixed up, I understood what you meant
This was really informative!
Nice testing!
So that's why AMD claims they can get 15% more gaming performance just by stacking more cache onto Zen 3 CPUs.
Indeed. That's why Alder Lake won't have much of a win over Zen 3, if any, because Zen 3+ with V-Cache will come right after Intel's launch.
@@ItsJustVV Wonder how much that will cost though; die stacking isn't cheap at all.
@@_--_--_ cache stacking is easier than full die stacking, fortunately. And people have examined the Zen 3 die and found the points AMD plans to attach the cache already present.
AMD has been planning this since they designed Zen 3. And with CPU shortages basically over, only greed is standing in the way of the customer getting a good deal.
@@_--_--_ It will cost a lot! That is why AMD is only making 5900X and 5950X versions with V-Cache!
God damn it's hilarious how bluntly Intel creates product segmentation.
It's just SO blatant.
Great test as always. I think in games it also depends which API they use: some games are single-threaded, but the majority of DX12/Vulkan titles are multi-threaded, where more cores always help. Games like Shadow of the Tomb Raider and Doom Eternal can use all 24 threads easily, and Doom Eternal can use as many cores as you can feed it - I think that's the scalability of the engine, which isn't possible with every engine out there. I'm shocked that a game as old as Rainbow Six Siege can use 10 cores / 20 threads with the Vulkan API; back in the day, even in DX11, Rainbow Six Siege used to max out my FX-6300, which was pretty rare for that era.
Great video, thanks guys!
To answer the question, probably the amount of money I spent on RGB.
These types of jokes are getting boring
This video shows why I've been using HUB to make my choices when upgrading my PC. I can only hope the channel keeps up the excellent work and helps more people make a conscious upgrade rather than just assuming the most expensive option is always the best.
this!
Great video, interesting conclusions.
Wonderful video, thanks!
I wonder if Tech Deals watches this.
5900x all the way. Way better than my 3800x.
Tech's argument for more cores being better is that these benchmarks are done on a clean computer. Your average person has various programs running in the background.
I would love to see comparisons of the same CPU on a clean test bench vs a computer with a Twitch stream running on a second monitor or something like that.
I know a YTuber who recommends 8 cores all the time for PC gamers.
Most PC gamers run "dirty" systems and have a lot of "stuff" running while they're gaming. Since the testing here was run on a clean system, it doesn't account for all the garbage people tend to be running, sometimes without even knowing it. I know a YouTuber who makes that point, and he's not wrong. He just presents a different perspective.
@@rangersmith4652 I know the guy too and he's full of misinformation. His arguments are complete nonsense in order to get people to overspend on what they need.
The stuff he's talking about running in the background hardly even uses the CPU, but he likes to say, "Look, so many things are running in the background, therefore you need more cores for them to run."
@@NostradAlex I agree that many of the tasks people run while gaming do not use a lot of CPU power individually, but having a lot of stuff running adds up; it simply has to.
When that YouTuber suggests spending more to have more cores, he's very often talking about systems with very high-end GPUs that cost a lot of money. He encourages building a balanced system that can do a lot of different things without a significant CPU, GPU, or RAM bottleneck. He's also applying the "buy once, cry once" principle. I don't get a sense he's trying to get people to overspend; he's trying to encourage less frequent upgrades, resulting in less money spent over a given time period, not more.
@@rangersmith4652 I took his RAM advice and I have yet to hit the 16GB usage that I might encounter with games and apps. I am no content creator but it is nice to have leverage on that part. RAM is cheap for me anyway. His buy large philosophy however does not go well with most people's limited budget which translates to his limited fan base on his channel. Sometimes the deals and advice he offers really saves you money.
@@peterl8417 More available RAM in your PC is a luxury to some people. I look at it this way: RAM is pretty cheap, and if having more of it means I can run a game and a movie and just keep opening browser tabs without a slow-down, then it's worth the financial cost. Some of that is RAM; some of it is processor power (cores, cache, clock speed, IPC, etc.). The bottom line is I build PCs that don't make me feel like I scrimped and should have aimed higher. Where that is on the price scale is different for every builder.
Loved this vid. It's amazing what content gets created when someone does something for fun or out of genuine interest.
I have been wondering about this. Great investigation.