AMD Ryzen Gaming, What's More Important: CPU Cores or Cache?
- Published 18. 05. 2024
- Support us on Patreon: / hardwareunboxed
Join us on Floatplane: www.floatplane.com/channel/Ha...
Learn more about CPU benchmarking here: • Why Reviewers Benchmar...
Buy relevant products from Amazon, Newegg and others below:
GeForce RTX 4070 Super - geni.us/wSqSO07
GeForce RTX 4070 Ti Super - geni.us/GxWGmYQ
GeForce RTX 4080 Super - geni.us/80D6BBA
GeForce RTX 4090 - geni.us/puJry
GeForce RTX 4080 - geni.us/wpg4zl
GeForce RTX 4070 Ti - geni.us/AVijBg
GeForce RTX 4070 - geni.us/8dn6Bt
GeForce RTX 4060 Ti 16GB - geni.us/o5Q0O
GeForce RTX 4060 Ti 8GB - geni.us/YxYYX
GeForce RTX 4060 - geni.us/7QKyyLM
Radeon RX 7900 XTX - geni.us/OKTo
Radeon RX 7900 XT - geni.us/iMi32
Radeon RX 7800 XT - geni.us/Jagv
Radeon RX 7700 XT - geni.us/vzzndOB
Radeon RX 7600 XT - geni.us/eW2iWo
Radeon RX 7600 - geni.us/j2BgwXv
Radeon RX 6950 XT - geni.us/nasW
Radeon RX 6800 XT - geni.us/yxrJUJm
Radeon RX 6800 - geni.us/Ps1fpex
Radeon RX 6750 XT - geni.us/53sUN7
Radeon RX 6700 XT - geni.us/3b7PJub
Radeon RX 6650 XT - geni.us/8Awx3
Radeon RX 6600 XT - geni.us/aPMwG
Radeon RX 6600 - geni.us/cCrY
Video Index
00:00 - Welcome to Hardware Unboxed
00:52 - Core i5-10600K vs i9-10900K [cores disabled]
04:40 - Test System Specs
05:19 - Baldur’s Gate 3
06:17 - Cyberpunk 2077 Phantom Liberty
07:11 - Hogwarts Legacy
07:53 - Star Wars Jedi Survivor
08:42 - Assetto Corsa Competizione
09:32 - Spider-Man Remastered
10:14 - A Plague Tale: Requiem
10:51 - Assassin's Creed Mirage
11:33 - Watch Dogs: Legion
12:04 - Hitman 3
12:28 - 12 Game Average
13:19 - Final Thoughts
Read this feature on TechSpot: www.techspot.com/review/2811-...
Disclaimer: Any pricing information shown or mentioned in this video was accurate at the time of video production, and may have since changed
Disclosure: As an Amazon Associate we earn from qualifying purchases. We may also earn a commission on some sales made through other store links
FOLLOW US IN THESE PLACES FOR UPDATES
Twitter - / hardwareunboxed
Facebook - / hardwareunboxed
Instagram - / hardwareunboxed
Outro music by David Vonk/DaJaVo
The one thing that improves gaming performance for sure is more Cash.
Nope... Because there are components that are extra costly and don't provide that much extra performance. In fact, cheaper components can provide as much performance as pricier ones. So it's really about finding the best value, not just blindly buying the priciest thing.
@@Trip4man someone didn't get the joke
@@Trip4man "uhm akshyually" lol.
It was a good joke the dude made. Sure, you're right, it's not always true that spending more equals greater perf. But on average, for most people, increasing budget will allow for better performance.... And the joke was funny! So chill lmao
You want more Cache, I want more Cash. We are not the same.
More cache: This does not spark joy.
More cash: This sparks joy.
AMD has hit the jackpot with its 3D V-Cache technology.
Intel only did it as a one-off with the 5775C and never followed up on it.
Isn't it great that V-Cache was just a skunk works for funsies thing one of their engineers cooked up? Wasn't even really in the plans until the prototype was super impressive.
Info from someone's AMD tour... Maybe LTT? Gamers Nexus? I can't remember now
Yeah but they also hit a wall with it.....
Which is why they are putting it in EVERYTHING now.
Cuz they know that AI chip ain't shit, LMFAO!!!
@@TheDarksideFNothing That was Gamers Nexus. Interestingly, Threadripper has a similar origin story. AMD seems to have a company culture that lets their engineers experiment a bit and it's paid off for them greatly.
@@Breakfast_of_Champions The i7-5775C's technology isn't even close. That is an L4 eDRAM cache attached on the side.
Not only is eDRAM much slower than SRAM, it's just a standard MCM (side-by-side) module attached through a standard bus.
The incredible magic of 3D V-Cache is that it adds practically no additional latency, because it's literally right there, where the regular L3 cache logic is.
Any company can make a giant chip with more L3 cache, but that will lead to additional latencies (bigger == further away), and cost more due to sinking yield.
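The latency point in these replies is essentially the textbook average memory access time (AMAT) model. A tiny sketch with made-up cycle counts (illustrative assumptions, not measured AMD figures) shows why a larger L3 that keeps the same hit latency is such a win:

```python
# AMAT = hit_time + miss_rate * miss_penalty (all values in core cycles).
# The cycle counts below are illustrative assumptions, not measured figures.
def amat(l3_hit_cycles, l3_hit_rate, dram_cycles):
    return l3_hit_cycles + (1 - l3_hit_rate) * dram_cycles

# A bigger L3 raises the hit rate; stacked V-Cache does so WITHOUT
# raising the hit latency, which is the point the comment makes.
small_l3 = amat(l3_hit_cycles=45, l3_hit_rate=0.70, dram_cycles=300)
big_l3 = amat(l3_hit_cycles=45, l3_hit_rate=0.90, dram_cycles=300)
print(small_l3, big_l3)  # 135.0 vs 75.0 average cycles per access
```

Raising the hit rate from 70% to 90% nearly halves the average access cost here, which is why cache-sensitive games scale so dramatically on X3D parts.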
I just upgraded to a 5800X3D from my old 2700X. I see about a 30% performance gain in average FPS with my 6700 XT, and the stuttering and frame time spikes are all gone. I can finally enjoy fluid gaming in most games.
it really is amazing how much of a difference going from a 3600 to a 5800x3d made, even while using a measly RX480 and technically "gpu bound" 99% of the time. Those moments when the CPU slogs you down really ruin gaming fun
30% sounds low; maybe lower some settings if playing anything competitive. I went 38xt to 58x3d. I was CPU limited in ACC, and EPIC graphics settings everywhere didn't impact max/average FPS much. Dialled back some settings that are pretty useless when playing and almost doubled FPS at race starts, average 50% higher overall.
I went from a 2700x to a 5600 on a Radeon 6700 10G and even losing two cores I still get massive performance gains.
@@fracturedlife1393 competitive games will see larger gains; Elden Ring maxed at 1440p just got smooth frame pacing now :)
@@fracturedlife1393 from cpu bound to gpu bound :)
cores = muscles
cache = oxygen
Spot on 👍
great analogy!
Ha classic.
Cores=allnattymuscles
Cache=steroids
VRAM = ???
More like cache=blood vessels
When I saw the title I thought, "Haven't you already done this with Intel?" I wonder what would happen if you took the 64 core or 96 core Threadripper and disabled all but 8 cores. Would that give those 8 cores 384 MB of L3 cache?
People should upvote this so Hardware Unboxed sees it and tries it out
Depends on the layout if that's beneficial or not. Accessing cache on a different CCD induces a hefty latency penalty that would reduce performance in most instances.
@@markjacobs1086 is that latency penalty worth it over the latency penalty of having to access RAM?
@@scamdem1c Go check out the 7950X review and look at its scores compared to the 7700X. The answer is no. It's at best equal, in some cases worse.
@@scamdem1c Probably; a RAM access penalty is hundreds of clocks.
Funny how we forgot the lessons learned during the Core 2 Duo and Quad era. The extra cache on Penryn vs Conroe (especially the 2MB Conroe) mattered more than the number of cores for gaming.
Yep, this is why a Xeon with lots of cache is still relevant.
Then Intel's ring bus came with Sandy Bridge and made a huge improvement in performance.
I love these kinds of comparisons. Thank you for doing them.
I actually upgraded from the 5700G to the 5800X3D last year and it's one of the best PC components I've ever bought.
5800X3D, the most overhyped CPU of all time, and over-priced.
@tilapiadave3234 nonsense
@@tilapiadave3234 That's why AMD is dominating the CPU market now...
@@tilapiadave3234 Given that in Germany you can get one for 277 euros while the 5700X3D is 253..... nope!
@@tilapiadave3234 said no one ever
It's always worth revisiting these types of subjects, if only to help newbies learn more about the machines they are buying. Also, updated/expanded testing data is always good.
As someone who plays a lot of simulation-type games, I am continually grateful that you included Assetto Corsa Competizione in your testing suite. For people who primarily play racing sims, flight sims, and large-scale military sims like ARMA, testing and comparing CPU cache as well as core count is integral to finding out what hardware is the best choice for these kinds of titles. The way these games operate is greatly different from most other games, which are largely console ports that don't push the CPU with nearly as many instructions in comparison.
3D cache also improves WinRAR performance a lot, because the dictionary fits inside the cache and the processor won't go to main memory as frequently.
7zip better
@@FateXO i'm excited about zstd and FSE-related compressors
@@FateXO
You’re pushing your luck little man.
@@Dankyjrthethird what you finna do about it old timer
I had a 5900x 12 core and changed to a 7800x3d 8 core and don't regret it for gaming now.
Did you get 5900x for gaming?
Does that mean cache are more important than cores?
I still have my 5900x and now waiting for the Zen5. Gaming in 4k and streaming at the same time
@@Mario211DE I don't think you need to upgrade
@@DungxxHen Thing is though, I'm CPU bound even at 4K in different games already, which is interesting.
3:10 wow, dictator Steve :D:D:D But hey, you are good dictator ! :D
Cache itself is important, but it depends on how accessible it is to all the CPU's cores. If you have 20MB of cache but a core can access only 1/8th of it, it's far worse than if one core can access all of the cache. That's basically why Zen 3 is so much faster than Zen 2: a core can access twice as much cache on Zen 3 compared to Zen 2.
This here is about the L3 cache that is shared between all cores. But yeah, L1/L2 caches also matter, though they are not easily comparable, because they are usually the same within a given architecture. And when comparing two different architectures, there are more factors than just L1/L2 responsible for the performance difference.
Generally speaking, only the L1-I and L1-D caches are private in modern CPU architectures. L2 cache is "on the core", meaning it's physically allocated to each core, but cores can "snoop" the L2 caches of other cores. With Zen, this can only happen inside a CCD, so Core 0 (on CCD 0) cannot access the L2 cache of Core 8 (on CCD 1). This is partly why multi-CCD Zen chips are not better in games. L3 cache is CCD-public, meaning any core on the same CCD can access it, but other CCDs cannot. As you mentioned, with Zen 2 a CCX was 4 cores and a CCD contained 2 CCXs. With Zen 3, Zen 4 and Zen 4c, a CCD is a single 8-core CCX.
A good example for Zen 2 is the Ryzen 5 3600 vs the Ryzen 7 3700X. Both carry 32MB of L3, so the 6-core 3600 has more cache per core, yet the difference in performance is only about 5%, and that can be attributed to the 3700X's extra cores and higher frequency rather than to cache. People have learned that cache makes a big difference on Zen 3, but then try to apply that retroactively to older architectures as well. I don't think Zen 2 was designed to benefit significantly from more cache.
@@JackJohnson-br4qr The cache on Zen 2 was shared between 4 cores, so an 8-core 3700X did have 32MB of cache, but each 4-core CCX could only access 16MB. That changed with Zen 3, where each core has access to the full 32MB because the CCD is a single 8-core CCX.
@@cpt.tombstone Except that dual-CCD chips are faster in games, not by much and with varying results, but in general they are the same or faster than single-CCD chips.
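The CCX/CCD back-and-forth above comes down to how much L3 a single core can directly reach. A trivial sketch (topologies as described in these comments, simplified; the real fabric behaviour is more nuanced):

```python
# L3 directly reachable by one core when the total is split evenly
# across `num_ccx` core clusters. A simplified model of the Zen 2 vs
# Zen 3 topology discussed in the comments above.
def l3_reachable_mb(total_l3_mb, num_ccx):
    return total_l3_mb / num_ccx

# Zen 2 8-core CCD: 32 MB total, split into two 4-core CCXs.
# Zen 3 8-core CCD: one 8-core CCX, so every core sees the full 32 MB.
zen2 = l3_reachable_mb(32, num_ccx=2)
zen3 = l3_reachable_mb(32, num_ccx=1)
print(zen2, zen3)  # 16.0 vs 32.0 MB reachable per core
```

Same total cache, double the cache any one game thread can actually use, which is the Zen 2 to Zen 3 gaming uplift the thread is describing.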
Cache is King.
Content is King, too!
Well I’d like both
Wu Tang said it first. Cache Rules Everything Around Me.
When it can be utilized, yeah, but brute-force single-core speed combined with low latency is more consistent in its results. That's why I'm interested to see how Arrow Lake pans out, considering Intel is ditching hyperthreading for the sake of single-core performance.
No one got the “cash is king” reference
Testing the impact of core frequency, IPC and cache on Excel, browsers and other productivity applications would be much appreciated, as most day-to-day tasks are still single-threaded.
Yes, I'd be very interested to see all the same CPU's tested against general apps/productivity. Especially as the much larger cache on the X3D chips usually results in lower frequency, which in theory matters more outside of games, but it would be nice to see all that confirmed.
Just got my 5800x3d a few days ago. 🥳
Long live AM4
AM4ever!
AM4 is going to beat LGA1155 in terms of usable lifespan :)
Maybe we are lucky and they will bring another new X3D CPU for AM4.
Very interesting. Will be installing a 5700X3D tomorrow, coming from a 3500X. My wife's PC ended up being a good upgrade path from the 8700K PC.
Upgraded my 3700X to a 5700X3D and I'm loving it. The 3700X was great for its price back in the day, but the 5700X3D is just amazing.
I was running Intel for at least 4 of my last builds and was about to go for the 14700k a week ago. Then I stumbled over some information on the lifecycle and the fact that AM5 would be more future proof for another couple of years while being superior for gaming anyway due to the cache. And then I also noticed that the R7 7800X3D also was way more efficient and cooler. All that while costing less in total together with an Aorus Master mainboard. Had to reconfigure my cart eventually and go with AMD of course. Super glad right now.
I was on Intel for a decade. My last build was a 5800X3D: very efficient, never regretted it, will last at least 5 more years.
Hi, planning to also go the 7800X3D AMD path. Can you share your build list? Thanks and have a nice day.
@@johnfirst3986 I went with
- Corsair 7000D Airflow (Big Tower) white
- Aorus Master B650E (Mainboard)
- 7800X3D (of course) cooled by Noctua NH-D15
- Corsair Vengeance RGB 32GB CL30 AMD EXPO (RAM)
- Corsair RM850x (PSU)
- MSI GeForce RTX 4080 Super 16G Gaming X Slim White (GPU)
I had plenty of storage left from my older machine. And I game at 1440p for the most part. The build is overkill for what I need as I mostly play fighting games. Even the most recent games will not need the 4080 to consume more than 50 watts.
This was one of the most informative CPU videos I've ever seen. Good job making great content in a time where there isn't much happening as far as new parts.
Super useful video, as always! Thanks Steve!
Love the blowing up of the "More cores/multi-tasking!" argument points. Well done guys.
it's quite surprising to me, but good to know!
That's why I love you guys from down under. You're making videos to topics or questions the viewers would like to get answered. 👌👍
It's been implied by testing, but L2 cache is incredibly important as well, arguably with more impact. Raptor Lake's performance improvements are almost entirely based on large increases in L2 cache (not all 13th-gen parts got the L2 increase, basically it's the 13600K and up). The Nvidia Lovelace architecture also saw massive gains by increasing L2 cache sizes. At a basic level, L2 cache is "easier" and less expensive to implement than stacked dies.
L2 just has fewer options for blowing up majorly in size. I think even the latency of V-Cache would be too great for L2, IIRC.
Would be interesting if they figured a way to use all the L3 space on the die for L2 and then use V-Cache only for L3. Best of both worlds.
@@TheDarksideFNothing "L2 just has less options for blowing up majorly in size..." The "problem" is that L3 size has significantly diminishing returns in performance gains. You could probably cut the L3 (per core) in half on an X3D chip and see very similar results.
@@awebuser5914 You are probably right about diminishing returns, but I wonder if the relative ceiling of L3 cache effectiveness on Ryzens has even been reached yet. Right now it's obvious that 96MB is much better than 32MB (for games), but what if they tried 128MB, 160MB or even 192MB of V-Cache? 😁
They wouldn't just because of expense.
It may even be better to just tack another 32mb on instead of 64mb@kosmosyche
No, Steve did a comparison on that. Most of it is clock speed and RAM speed, the difference in L2 cache is like 1 frame.
Core count and MHz are linked; big caches are about removing latency issues. If your cores or high-MHz CPU are waiting on a chunk of data from system RAM, it doesn't matter whether you have a 6 or 7GHz CPU; it will sit idle. A big cache helps mitigate this by guessing what you might want to load in the future and keeping it closer. The P4 tried to do this, but its long pipeline killed the gains from the cache: the P4 had to flush those long pipelines, and that took a long time.
I always follow the suggestions/recommendations from Steve and Tim; they NEVER let me down!!
CPUs, GPUs, monitors: I base my purchases on this channel and have NEVER regretted ANY decision.
They are the best IMO.
I was just wondering about this with my brother yesterday, thank you for the explanation!
oh wow this is a subject ive been wondering about for awhile thanks for the video
Another dimension to check is cache size vs. clock frequency at similar core counts, as 3D V-Cache parts are usually clocked lower than standard-cache CPUs.
At this point, all desktop CPUs run at outrageously high (inefficient) clock rates. Everything over 3GHz is mostly a waste of power.
Look at gpu clock rates.
@@PaulSpadeslooks over at my 6.3 ghz 14900k pc 👁️👄👁️
@@PaulSpadeswhy is that?
@@PaulSpades Performance scales with clock speed far past 3 GHz, what are you even talking about?
@@Eidolon2003 Agreed, this whole test is done on CPUs starved of RAM throughput. Zen 2/3 have high latency to RAM, and of course a cache buffer will diminish that problem somewhat. On Intel you can OC ring and cache for scaling past 6GHz, and that's been true for over 10 years; only HW likes to pretend this isn't doable on Intel, rofl. No Intel OC guy leaves ring and cache at stock speed when OC'ing.
What about L2 and L1 cache size?
Really great idea for a video
Very helpful definitely 🙏🏻
Love the content like always
12600K Alder Lake vs 14400 "Raptor Lake Refresh":
Lock the cores and caches to the same frequency (and TDPs) and see if there's any change in architecture... because they're the same die.
I really want to see 1GB of cache one day lmao, you could cram a whole old game in there.
Hell even a modern game you could fit enough of it in there to easily mask any data swapping.
We already see where some games see no benefit because they've already optimized to fit in normal amounts of cache so at some point you lose the benefits.
But I do wonder if a dev KNEW they were getting 1GB if they'd be able to take advantage of it in really interesting ways.
Yeah, that's the one thing I keep wondering about: how could you optimize if you knew you were getting 1GB of cache? I personally wonder if it would help raytracing performance at all, since that hits performance the hardest @@TheDarksideFNothing
@@OtherwiseUknownMonkey Yeah, I think a full GB of cache would be much more about seeing what new things you could do vs making existing things go faster.
Right now all things are designed around small caches because that's the hardware that exists in mainstream. But some applications miss the mark, and that's where V-Cache shines.
Intel is apparently working on a last level cache that goes up to 8GB.
@@TheDarksideFNothing I wish I could hear a tech artist and a programmer talk about it. With 1GB of cache on the CPU you could make worlds feel so much more lively, with more intricate AI routines, I imagine. And if you had a gig of cache on the GPU you could keep whole lightmaps in there; say the game let the GPU know where the lighting will be 10 seconds from now, you could smartly interpolate those lightmaps, which would stay in cache the whole time, making rendering so much faster.
I will be forever thankful to you guys for the review of the 5800X3D almost 2 years ago. If it wasn't for your benchmark with ACC i probably wouldn't have jumped on the X3D train and wouldn't have experienced the monster that this chip is. 18 months plus and counting and still feel completely blown away by the performance every time i load a game.
It's a shame no one ever tested it with the original AC (+ CSP). I only recently decided to make the jump to X3D, upgrading from the 3700X to the 5800X3D. and my goodness was I not prepared for the difference. I legitimately get up to 105% higher framerate. Yes, over twice the framerate. And that's with a 4060 Ti, so the potential gains with a higher end GPU can probably be much higher still.
X3D is such a blessing in many CPU-heavy titles.
Thanks for doing a video you found interesting to do and that's why ya did it ^^
Found it interesting too, for sure =)
~a random canadian viewer
Very informative, thank you. There are some particularities, but some things will never scale linearly.
You need to test competitive games like BF, COD, Apex, PUBG. It is for them that people upgrade their CPU/RAM in the first place.
So 6 cores are fine for just gaming, and more cache is usually better. How does it hold up when you're also streaming from the same PC? Would a CPU with more cores then be better, or a CPU with fewer cores but more cache? Dunno if this is a yes/no question or "it depends" :P
This and the x3d comparison was super helpful!
Great topic and review, thx!
"Why am I doing this. Because I want to ". At this point, the video gets a like. Because I want to 😊
A few certain YouTubers aren't going to like this one...
Been saying this since Intel's mesh architecture (I had a 7820x), so it's great seeing more and more videos confirm cache is so important! Great video.
Really fantastic video! Great information, great data presentation, and very educational for buyers. Thanks Steve!
To me bigger difference would be cache vs frequency
I am using X99 based Xeons and some games benefit so much from cache that it doubles my framerate despite running
Just look at 5700X3D vs 5800X3D reviews. They perform almost the same while the 5700X3D runs around 400-500MHz slower.
good call to be honest. we already know 12 threads is more than enough, but frequency VS cache is a whole different animal, as some games will heavily favor frequency, whereas others will heavily favor cache.
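A rough way to see why cache can beat clock speed, as these comments note: throughput is clock × IPC, and memory stalls cap IPC. Every number below is a made-up assumption chosen only for illustration, not a measured figure for any real CPU:

```python
# Toy throughput model: instructions/ns = clock_ghz / CPI, where CPI is
# a base value plus stall cycles caused by L3 misses going to DRAM.
# All parameter values are illustrative assumptions.
def throughput(clock_ghz, l3_miss_rate, misses_per_instr=0.02,
               miss_penalty_cycles=300, base_cpi=0.5):
    stall_cpi = misses_per_instr * l3_miss_rate * miss_penalty_cycles
    return clock_ghz / (base_cpi + stall_cpi)

high_clock = throughput(4.7, l3_miss_rate=0.5)  # faster clock, small L3
big_cache = throughput(4.2, l3_miss_rate=0.2)   # ~500 MHz slower, big L3
print(round(high_clock, 2), round(big_cache, 2))  # 1.34 vs 2.47
```

In this sketch the part clocked 500 MHz lower still wins by a wide margin once its larger cache cuts the miss rate, mirroring the 5700X3D/5800X3D observation above.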
You guys always seem to make videos answering questions I've wanted to know the answers to. Cheers!
ive always meant to take the time to THANK YOU !!! for having such a nice channel and for all the hard work it takes to get it here !
I do find this interesting! thank you for the video :)
Great video Tim!! I love it!!
This was why I upgraded from a 3900X to a 5800X3D. Despite the core deficit, I mostly play games, and I also wanted a single 1x8 CCX rather than 4x3 cores because of the cross-talk between cores over the Infinity Fabric when accessing the cache. Still using an RTX 2080, so while overall FPS has not changed much (GPU limit), my 1% lows have improved by over 50%. Also, games don't like too many CPU cores as it messes up the scheduling, which Epic has come out about regarding crashes on Intel CPUs because of core count.
Cheers Steve, an interesting watch. I knew cache was important, but not to that degree. Bring on the 186MB L3 cache 8-core CPUs!
Thank you! I'd also add the 4600G and 4700G as optional opponents. They have only Zen 2 cores and half the L3 of the 5xxxG parts, so the L3 comparison chain could be wider: from 8 to 96MB :)
Really impressive how big the changes in FPS are and how much L3 cache has changed over the years!
Thanks for the benchmarks!
I appreciate your no nonsense video titles.
Very great content ! Keep it up !
I use an AMD Ryzen 9 5900X with 64MB of L3 cache along with 32GB of DDR4 running at 3200, and I never feel it needs more. It runs every game and application I use with NO problems. Hopefully it will keep going for a few more years yet.
Thanks for the video.
*ABSOLUTE GENIUS WORK*
As a geeky request, if you can compare the 7000 series x3d models. Please
Your more candid tone in this video made me laugh more. Thanks for the great content.
14:59 shots fired. Lookin at you byte size tech
The other thing not mentioned is that cache can help vs clock speed: the 5800X and 5600X chips both have higher clocks, and yet the cache is showing a serious performance difference in gaming.
Great job Steve
I remember many years ago buying a used Opteron X2 170 for gaming. It had one more core and a boatload more cache than its Athlon 64 equivalent, and even being 200MHz slower it out-gamed pretty much any Athlon 64 because of the added cache. I was also able to sink a 1GHz overclock into that bad boy and REALLY crank out the performance. Good times.
More content like this, please! 💯
Great video! But I feel like taking clock speed into consideration would be important
Hey Steve, love the vid. Would love to see a vid about whether an SSD with a DRAM cache makes sense for a gaming PC. Does more expensive storage make a difference?
Are you going to do another video on CPU scaling with high end GPUs at all the resolutions ?
Chiplet vs monolithic would also be interesting
Tremendous content from this guy; really clear information about what actually matters when configuring a PC. I loved the video!
Great demonstration!
It would be interesting to do a deep dive into this subject and trace the CPU operations to see how many times the L3 cache is hit versus main memory during gaming. That would give you conclusive information.
If anything this really highlights the impact of swapping data in and out of system ram has on overall performance.
Love this channel. Exponentially increases my enjoyment for pc tech and gaming.
The 3D V-Cache absolutely makes a difference in frametimes too. I went from the 5800X to the 5800X3D and the caching stutters went away. I was getting insane shader-cache stutters on the regular 5800X, and they were non-existent on the 3D version, and this was after a full driver install with the cache cleared.
Multitasking requires the CPU cores to share the rest of the memory and storage subsystems, which increases latency, and latency-dependent games will suffer regardless of core count. That's why the E-cores on ADL and RPL taking care of all the background tasks is somewhat of a fallacy.
Yeah. For that argument to ever have any merit we will need quad channel memory on the consumer platform.
"That's why the E-cores on ADL and RPL taking care of all the background tasks is somewhat of a fallacy."
Better scheduling fixes software CPU overhead, that's why my 12700K is perfect for a DAW and chips without Big-Little aren't that good.
What Steve shows in bars doesn't reflect the real world when actually using these types of CPUs; the same goes for a lot of results I've watched on the internet, since these techtubers ignore music producers.
When I disable the E-cores on the i7, I lose single-thread speed and gaming performance even though I'm freeing cache for the big cores. Depending on the game, it can be a hit to the frametimes (a small one, with some exceptions).
When the scheduling works perfectly, we get a massive 30% increase in gaming performance (watch APO on the 14900K as an example).
Cache and memory aren't everything. Imagine how much better the Ryzen 7 1700 would have been at launch if the scheduling had worked fine; nowadays it outperforms the quad-core i7s of that time because they lack cores.
Simply search which CPUs have the highest IPC, singlethread speeds and thread counts, the rest is irrelevant (minus number of PCIe lanes).
Thanks Steve!
this video makes me wanna upgrade my 3600x to a 5600x3d or 5700x3d damn!
I went from a 3600 to a 5700X3D, still rocking an RTX 2070 Super. My games are just smooth, plus the FPS is more consistent.
It depends what GPU you use.
OMG! These are my favourite kinds of videos!
This was a good test; there doesn't seem to be a difference between 6 and 8 cores. However, one thing I think may be true is that clocking a higher-GHz chip down to a lower static value can't be measured in this type of testing, because clocking lower changes effective memory behaviour depending on the minimum wattage the processor needs for performing work. This didn't used to be the case, but it's a protection added by the Meltdown and Spectre patches, although I've only tested it on Intel as far as I remember.
Nice comparison! Just updated my R5 3600 to a (used) R7 5800X3D - so 3x L3-Cache + 2 more cores. 😍👌🏼
In a span of like 4-5 years I've been steadily upgrading the basic used PC I bought back then.
Had an FX-6100 and a 1060 3GB. Horrible, but enough for me at that time to get away from laptop gaming and over to PC gaming.
Went from the FX-6100 (fried a board trying to OC it lmao) to an FX-8300 (Black Edition, I think), then an R5 2600 with a nice all-core overclock, and now a 5600X. You could really feel the jump from FX to Ryzen and then from the 2600 to the 5600X. It's incredible how much more performance CPUs have, even in the low and middle class, compared to some years ago.
Using the PBO + CO method for the 5600X, which works flawlessly. Really nice and easy (even though I love normal overclocking via BIOS).
For my 1080p high-refresh gaming it's a nice combination with an RTX 3070 as the GPU. Not having watched the video yet, I'm thinking that while it depends on the title, more cache might have a bigger impact than core count.
The next thing I could see myself upgrading to is a 5600/5800X3D. They offer another amazing performance jump and I don't need to change the motherboard and stuff, because legendary AM4.
Too bad the 5600X3D is a Micro Center-only part again. It's a damn nice chip. Should get some new RAM as well; the current kit is not really impressive at all. xD
SOTR with the 10900K did show a substantial lead in min FPS going from 6 to 8 cores, and it still improved going to 10, so there are definitely games out there, especially now, that will use the extra cores.
I followed your recommendation (5600X) two years ago and never regretted it as it runs everything very nicely, in fact I built my whole system following your various videos on hardware components. I briefly thought of upgrading to the 5800X3D when it came out but then I realized that I don't really need it for the types of games I play (mostly strategy games @ 60 fps).
Agreed. I did the same; I always follow the suggestions/recommendations from Steve and Tim, and they NEVER let me down!!
CPUs, GPUs, monitors: I base my purchases on this channel and NEVER regret ANY decision.
They are the best IMO.
An i7-12700K with DDR5 is faster than a 5800X3D; it's similar in gaming performance to the 7700X while clocking 800MHz lower. It's a streaming beast as well...
If I had waited a little bit longer, maybe I would have a 7800X3D. The 5800X3D also has an annoying flaw: it's a jittery mess for clocks. Depending on the game, I've even watched a Ryzen 9 5900X be smoother in frametime graphs than the 5800X3D in Rust.
I don't like CPUs with bad clock behaviour or power limits. I feel the jitter in the Windows UI on those CPUs as well, among other things.
IMO the clock-speed race will end soon (in mainstream use), just like Intel's did back in the day with the Pentium 4. Too much power/heat to dissipate compared to the performance.
I believe big, fast caches will be the primary choice for manufacturers in the near future. @bra2867
Great video. I hope AMD and Intel will use this to put more cache in gaming CPUs, because it is interesting how far it scales.
Something that has to be noted is that cache architecture is far more complex than just adding more, even when that's possible; the core count, the cores themselves, the task, and the L1 and L2 caches all affect how much L3 is optimal.
In general a larger cache is slower, so there is an optimal cache size for each task. Slower cache means it takes longer to access, but you have to access the even slower RAM less often.
If your task has very small memory requirements (takes very little RAM space), then a CPU with a larger L3 can actually become slower.
Games get heavier and heavier so we might get L4 cache soon in the form of HBM on the chip and the L3 caches might get smaller.
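The trade-off described above (bigger cache means higher hit latency but fewer trips to RAM, and no benefit once the working set already fits) can be modelled crudely. Every constant here is a made-up assumption chosen only to show the shape of the trade-off, not real silicon data:

```python
import math

# Crude model: hit latency grows with cache size (bigger == further
# away on the die), and misses vanish once the working set fits.
# All numbers are illustrative assumptions.
def avg_access_cycles(cache_mb, working_set_mb, dram_cycles=300):
    hit_cycles = 30 + 5 * math.log2(cache_mb)
    miss_rate = 0.0 if working_set_mb <= cache_mb else 0.3
    return hit_cycles + miss_rate * dram_cycles

# Tiny working set: a huge cache only adds hit latency.
print(avg_access_cycles(16, 8), avg_access_cycles(1024, 8))
# Large working set: the bigger cache wins despite slower hits.
print(avg_access_cycles(32, 64), avg_access_cycles(96, 64))
```

This is exactly why some games see no benefit from X3D parts (they already fit in a normal L3) while others jump dramatically.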
I would appreciate a follow-up to this video covering emulation performance, RPCS3 and Yuzu specifically.
Thank you.
This kind of video is the most valuable where we really learn something new!
I like your reasoning, really :)
Hey Steve,
This was a great video, I was just looking into 5800X3D as a potential upgrade path as it does seem to regularly get discounts here in the EU.
I am kind of curious how it would compare against one of the 12/16-core SKUs though. All of these have 64MB L3 cache, but boost higher and are generally better value than X3Ds for productivity. I'd imagine many of us also use their "gaming" PCs as a workstation, so I wonder if the gaming performance gain is significant enough to justify a 5800X3D vs. 5900x for example. In your original review the average 8-game 1% lows came up to 150 vs. 121 FPS respectively, but the X3D got blown away in all productivity tests.
What I don't get is why the 64MB L3 cache on the 5900X/5950X did not seem to make any difference in these tests vs. the non-X3D 5800X (32MB cache). The way I understand it, the 64MB should be shared between both CCDs (as opposed to 32MB per CCD), so even if games don't use multiple cores, shouldn't they still benefit from the larger cache size?
15:26 The performance degradation is mainly caused by a storage bandwidth limitation. I have tried installing and updating a game in the background on SSD 1 while gaming from SSD 2, and there is no noticeable frame drop so far.
(Keep in mind I am using a Core i5-12400 and 2 SATA SSDs.)
What about more cores with background tasks running? Usually while I'm gaming I also have things like Firefox with YouTube playing, listening to Spotify, downloading stuff, etc...
Would be nice to do some kind of standard test for those use cases.
Not bad; some good, raw info. Nvidia's latest Ada architecture had a big L2 cache increase, interestingly, rather than L3.
To be honest, after the release & reviews of the X3D cpus we kinda already knew those results.
Steve, you are an absolute legend mate... I was honestly just writing: "What would be interesting, but also almost impossible to accurately measure, is the number of cores that starts to make a difference when playing games in a more real-life example and not in a benchmarking environment. So, basically, what happens when you play a game on a Windows PC that is not a clean installation (done to get accurate results for the hardware you test), but one where you have YouTube, Discord, motherboard/peripheral background software, various browser tabs open, etc."
And I see you've already covered this at 15:00.
Thanks Steve.
Wendell from Level1Techs thought that 3D V-Cache helps overcome memory bandwidth and latency limitations. I would be curious to see how tweaking the RAM affects the performance delta between the vanilla and X3D parts.
great video!
I really like this episode. Can you guys do another episode comparing L3 cache versus RAM speed?
Would be great if you could add some multiplayer titles / competitive games to these charts.
Those likely scale much less with cache, since they don't have the huge amount of game-world managers, AI NPCs and such that the singleplayer games you test have (not sure how the Rainbow Six benchmark compares, btw).
For testing those, CS2 and Overwatch 2 for example have demo/replay systems, so you'd only need to play one match, then check whether the FPS in the replay is similar to the real match, and from then on you can run the tests on the replay = repeatable test.
There are obviously still issues, like the games not allowing you to watch old replays on a new patch, for example.
I mainly find it very hard to judge how relevant results shown in these videos are to people who mostly play multiplayer titles.
I hope for future memory technologies, they work more on reducing latency rather than increasing bandwidth. That would likely have more effect on performance at this point.
I really love this analysis. Great job!