Why Fewer CPU Cores Are *Almost* Always Better

  • Uploaded 30. 05. 2024
  • Spreadsheet with all the results:
    docs.google.com/spreadsheets/...
    System Specs:
    CPU: r7 5800x3D / r9 5900x (depending on test)
    MB: MSI B350 PC MATE (latest bios)
    RAM: 2x16gb 3600 MHz c18
    Boot: Samsung 980 pro 2tb
    GPU: RX 7800xt / RTX 3080 (depending on test)
    ==JOIN THE DISCORD!==
    / discord
    I guess a bunch of CPU cores is just a flex now :/
    It's really easy to get caught up and overspend on a CPU, but the majority of people (even many enthusiasts) will most likely not notice a difference. For most people, spending your money on fewer, individually higher-quality cores is the better move: it's faster in gaming, and productivity work is usually handled by the GPU now. It surprised me how little a bunch of cores affects the end result.
    lmk what you think!
    GN: • The Last Hope for Inte...
    HUB: • Insane Gaming Efficien...
    • Intel Core i9-14900K, ...
    store.steampowered.com/hwsurv...
    www.statista.com/statistics/2...
    0:00- More core, More better... right?
    1:50- The pressure
    2:50- Disadvantages of getting extra CPU cores
    4:32- Advantages of extra CPU cores
    6:11- What this means
  • Science & Technology

Comments • 768

  • @vextakes
    @vextakes  7 months ago +61

    what a flex 💪💪

    • @ChristianSandviknes
      @ChristianSandviknes 7 months ago +8

      When it comes to composing, having more cores is definitely a plus, since every VSTi (virtual instrument) will generally have its own threads, so you can run more, and heavier, instruments. I feel music production is always forgotten when people do these comparisons (it's always video editing)

    • @di4mondpl4ys45
      @di4mondpl4ys45 7 months ago +1

      Imagine you live in Germany and pay 0,32€ / kWh 😅

    • @muhammadamaar7936
      @muhammadamaar7936 7 months ago

      Wow lmao

    • @EZFun-Official
      @EZFun-Official 7 months ago +1

      If more cores don't matter, then why did Intel and AMD ever make more than 4 cores in their CPUs? It was almost pointless to add more in the first place!

    • @badpuppy3
      @badpuppy3 7 months ago

      Yes it does bro, ok? You stop.

  • @TinkerTaverns
    @TinkerTaverns 7 months ago +235

    While I agree with the underlying point, I feel you did a bit of a disservice by comparing an X3D chip against a normal X chip, as that's more likely what's contributing to the performance difference (as you even said, higher quality cores)
    Also, in terms of productivity, in contexts where there is rendering going on (like Blender), GPUs are most certainly the better option, but most other productivity apps can't and likely won't benefit from GPU acceleration in the same way. For example, code compiling or running virtual machines/Docker containers is a place where more CPU cores definitely still help
    Once again though, I agree with your core point: as long as you've got 6-8 good cores you're plenty fine nowadays

    • @prizrak-br3332
      @prizrak-br3332 7 months ago +6

      I think the point was a top of the line 8 or 6 core CPU will perform better and still cost less than a lower end 10 core and up

    • @TinkerTaverns
      @TinkerTaverns 7 months ago +11

      ​@@prizrak-br3332 Yeah I suppose so, I still think it could have been helpful to show 5800x vs 5900x and then maybe also include the x3D, but I imagine that comes down to whatever he had at hand

    • @SkrtSkrtBubbleGumSquirts
      @SkrtSkrtBubbleGumSquirts 7 months ago +10

      @@prizrak-br3332 I agree with Tinker. Comparing an X3D CPU is like comparing two cars where one has a turbo and saying that one is faster with a smaller engine. Yeah, no shit, that's the point of the turbo; in this case, the 3D cache

    • @RawfunRahman
      @RawfunRahman 7 months ago +2

      More cores/threads = more tiles the gpu will be able to render. He missed this point as well.

    • @yarmgl1613
      @yarmgl1613 7 months ago +2

      As a programmer, 12 cores is perfect for me. Android app compilation is super fast: it took about 5-10 minutes on my 4-core 9300h and now takes less than a minute with the r9 7900. Opening up my IDE and scanning all the files takes seconds on huge projects. Huge productivity booster, but of course an 8-core 7700 would suffice

  • @chexmixkitty
    @chexmixkitty 7 months ago +200

    You should have used a regular 5800X when comparing that and the 5900X. The X3D has a bunch more cache so it will perform better in games. The core count isn't the deciding factor in performance in this instance.

    • @averytucker790
      @averytucker790 7 months ago +8

      Exactly. They made it clear in the press information that the 3D CPUs were meant to boost performance in gaming (that's literally what it did the last few times AMD released 3D chips.)
      Plus, that's why the advertisement for the exact 3D CPU he used says "World's fastest gaming desktop processor" right in the video for it.

    • @x0Fang0x
      @x0Fang0x 7 months ago +13

      "the core count isn't the deciding factor" yes that's his point.

    • @HeloHounds
      @HeloHounds 7 months ago +7

      @@x0Fang0x however, prior to the X3D chips coming out the Ryzen 9 5950X (16 core) was the fastest gaming CPU. But yes, the 8 core X3D chips are the fastest for GAMING now.

    • @MaskedOG
      @MaskedOG 7 months ago +5

      The point of this video was that a quality 8-core CPU is better for gaming than your high-end 20000000-core ones. Unless games get optimized for more cores this won't change. So if you are primarily gaming, then it's mostly a waste of money to spend it on a Threadripper etc. instead of the quality gaming CPU.

    • @janbenes3165
      @janbenes3165 7 months ago +1

      @@HeloHounds They were fastest because they were binned to boost higher, but if you took those chiplets and split them into two 8 core CPUs they would be just as fast in gaming.

  • @NubAnSkrub
    @NubAnSkrub 7 months ago +123

    Depends if someone needs a cpu intensive task

    • @sursomsatan1225
      @sursomsatan1225 7 months ago +5

      Like what?

    • @Eduard0Nordestino
      @Eduard0Nordestino 7 months ago +11

      I think the only place a high CPU core count is still good is the data center; everywhere else it has been overtaken by GPU hardware acceleration.

    • @NubAnSkrub
      @NubAnSkrub 7 months ago +26

      @sursomsatan1225 video editing, emulation, simulation and more

    • @firkinflamer8604
      @firkinflamer8604 7 months ago +3

      @@NubAnSkrub Good also for games like Civ 6 and Stellaris

    • @daves.software
      @daves.software 7 months ago

      @@sursomsatan1225 Running multiple Docker containers so that you can test your application's latest build. Cutting a build from 2 hours to 30 minutes is worth paying a bit more for more cores.

  • @Kacheng_3080
    @Kacheng_3080 7 months ago +32

    For the majority of games, the single-thread performance of the CPU's cores is more important than how many cores the CPU has.

    • @MyouKyuubi
      @MyouKyuubi 7 months ago +9

      Exactly, single-core performance is the only thing that matters... like, the QUALITY of each core matters, not the quantity.

    • @Haggy836
      @Haggy836 7 months ago +2

      @@MyouKyuubi This^

    • @ishiddddd4783
      @ishiddddd4783 7 months ago +3

      Yes and no. While most games still rely on IPC/cache, you can't realistically expect a 4-core CPU to perform the same as a 6-8 core from the same generation, even if they have the same single-core performance; background tasks and the game's CPU utilization will make SKUs with fewer cores start to struggle, and so the game starts suffering from stutters.
      Quantity becomes redundant past 8 cores, but since a lot of people do stuff in the background while they play, the extra cores help leverage that load.

    • @ArtisChronicles
      @ArtisChronicles 7 months ago +1

      @@ishiddddd4783 Basically what I'm hearing here is that if I buy a good 12-core now, I won't need a new CPU for 10 years.

    • @yancgc5098
      @yancgc5098 7 months ago +2

      All games actually, not just the majority of them. Single-core performance has, is, and will always be more important than multi-core performance

  • @apostleoffate2028
    @apostleoffate2028 7 months ago +57

    Depends on the variables, how the architecture of said CPU works (non-X3D vs 3D from AMD, as an example), what you have in mind when you build a system, etc.

    • @__-fi6xg
      @__-fi6xg 7 months ago

      who asked bro xD

    • @apostleoffate2028
      @apostleoffate2028 7 months ago

      @@__-fi6xg the devil that loves to lie on people's shoulder ))))

    • @electrothecat
      @electrothecat 7 months ago +5

      @__-fi6xg He did, and he's right. The silicon lottery loves to screw you over sometimes; that's why AMD had to use a higher bin for their 5800X3D chips, so they don't overheat when the CCDs get loaded. Because the 3D V-Cache actually sits on top of the cores on the die, hence the name, they *have* to use higher quality silicon to dissipate heat more efficiently, otherwise you'll have a melted CPU before you could say "melted CPU" when running R23. So if you don't want the trouble with the silicon lottery, just get an X3D chip.

    • @kevinerbs2778
      @kevinerbs2778 7 months ago

      @@electrothecat it does not sit on top of the cores. It sits over the L3 cache. There is an extra layer of silicon to keep the die levels over the other cores.

  • @peterwstacey
    @peterwstacey 7 months ago +9

    The big one at work is code compilation: each C or C++ file is handled in one thread, so more cores = faster compilations. Even on smaller projects, the more compilations you can do, the more Jenkins jobs can be run overnight or by the Harbormaster when doing a git push
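The scaling this comment describes can be sketched in Python. This is an illustration, not a real build system: `compile_unit` is a hypothetical stand-in for invoking a compiler on one translation unit, and the process pool fans units out one-per-core the way `make -j$(nproc)` schedules jobs.

```python
import hashlib
from concurrent.futures import ProcessPoolExecutor

def compile_unit(source: bytes) -> str:
    """Stand-in for compiling one C/C++ file: pure CPU-bound work."""
    digest = source
    for _ in range(5_000):
        digest = hashlib.sha256(digest).digest()
    return digest.hex()

def build(sources, jobs=None):
    """jobs=None lets the pool use one worker per core, like make -j."""
    with ProcessPoolExecutor(max_workers=jobs) as pool:
        return list(pool.map(compile_unit, sources))

if __name__ == "__main__":
    objects = build([f"src/file_{i}.c".encode() for i in range(8)])
    print(f"built {len(objects)} objects")
```

Because the units are independent, wall-clock time drops roughly linearly with core count until the linker or disk I/O serializes things, which is why overnight CI farms benefit from many cores even when games don't.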

    • @dginx
      @dginx 7 months ago

      Well you are probably looking for a Threadripper type CPU then : )

    • @RNA0ROGER
      @RNA0ROGER 7 months ago

      FFmpeg, AV1, and a lot of other things: more cores, much better speeds

    • @ArtisChronicles
      @ArtisChronicles 7 months ago +2

      ​@@dginx you mean the dead platform they only just revived?

    • @dginx
      @dginx 7 months ago

      @ArtisChronicles Then according to you and me, yes.

    • @mycelia_ow
      @mycelia_ow 6 months ago

      @@ArtisChronicles What about EPYC? That never went away, it's far superior too.

  • @Physuo
    @Physuo 7 months ago +14

    My biggest use is multitasking: optimising media in DaVinci Resolve for even faster playback, and re-encoding videos into higher-quality, lower-filesize formats like AV1. A 5900x performs the same as a 5800x or 5600x when only 2 or maybe 5 cores are being used. But the second you have anything like shader compilation or emulation (in games), high-core-count CPUs make way more sense.
    Nintendo Switch and PS3 emulation on a 5900x is a breeze, as is editing and compression.
    If you ONLY game, and you ONLY play native games (no emulation), yeah, 6-8 cores are enough. 12-16 cores are for anyone going into the stuff I just talked about

    • @battyflaps5410
      @battyflaps5410 7 months ago +1

      100% agree. Shader compilation on my old-ass quad core takes forever in emulation, and PS3 emulation, well... it's not happening. Lol D:

    • @allxtend4005
      @allxtend4005 6 months ago

      True, too many people think they need an i9 and RTX 4090 to play on their 1080p 60 Hz monitors... The best buy is always the x070/x080 series, same for AMD, and same for CPUs: the i7 is the way to go. The money you didn't waste on the more expensive hardware (if you're just gaming) you can spend on newer hardware of the same class later, and come out ahead of where the high-end enthusiast hardware would have left you.

    • @SomePotato
      @SomePotato 5 months ago

      @@allxtend4005 That is no surprise, as Nvidia's x70/x80 and Intel's i7 used to be the high end. They introduced their i9s, x90s and Titans specifically to milk "enthusiasts" with too much money on their hands.

  • @mileswithau
    @mileswithau 7 months ago +7

    This is a bit of a naive take, especially with respect to the "GPU vs CPU" point. It's ultimately just sort of complex; GPUs are better at solving _certain types of problems_ than CPUs, but you can cripple their performance by picking certain tasks. Generally, things that are well-suited to GPU acceleration are operations consisting of trivial parallelism, largely branchless code and pure math operations (ie. graphics programming). But trying to do things outside of that problem space gets slow quickly.
    Operations that act serially or atomically on a resource are also very poorly suited to GPU acceleration (and may even be impossible to implement), while scaling well with many physical cores (this covers many I/O bound things).
    GPUs also get a leg up from a lot of fixed-function hardware, where some function is baked into the silicon itself on the card. This can result in some marked speed gains, but usually comes with the caveat of lower quality work (GPU video transcoding for instance, while fast, generally results in lower quality and larger file sizes than software transcoding. A lot of this comes down to fixed-function hardware being set at manufacture time, meaning you can't benefit from codec and encoding improvements. Optimisations made for performance generally also sacrifice quality).
    TL;DR: There is a reason why Disney's rendering farm consists of roughly ~55,000 Xeon cores, and not GPUs. To that extent, multicore CPUs have many uses among software developers, server operators and many other heavy workloads (including some specific games). While for most modern video game titles video card performance is usually more important than CPU performance, it's not a statement that should be made broadly about computing in general; it's very situation dependent.

    • @NikolaNevenov86
      @NikolaNevenov86 7 months ago

      Well, there is also the case that a GPU is actually a multi-core CPU with less computing power per core and a specialized design. So technically a GPU is a multi-core processor that performs better exactly because of its core count.

    • @mileswithau
      @mileswithau 7 months ago

      @@NikolaNevenov86 No, not really. GPUs are able to have many cores because of differences in design and how they process information; each core is very simple and slimmed down, often sharing parts of its pipeline with other cores (such as I and D caches, instruction fetch/decode circuitry, parts of the control unit, etc.). The result of this is that they are very good at doing a lot of the same thing, but it really has to be the _exact_ same thing (on an instruction level). This is a big part of the reason why most GPU code is branchless (ie. having no if/else statements or other conditional execution) in order to be fast; since all cores have to be doing the same thing at the same time (at least within some sort of cluster/execution unit), every core has to take both branches, disabling itself while executing or discarding the result of one of them. This results in a lot of wasted time, and limits the use of things like recursion.
      Additionally, this means that GPUs cannot work on data where the output of one operation is used when computing the result of another; every computation has to be largely independent, which is why they are pretty bad at serial/sequential operations.
      Meanwhile, while a multicore CPU may have fewer cores, each one of them is a fully fledged and capable CPU core that _can_ function completely independently of the others. This means you can work with branching code without nearly as much of a performance hit, and work on sequential operations much more easily (or even at all, in most cases).
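The "every core takes both branches and masks one result" idea above can be loosely illustrated with NumPy, where `np.where` evaluates both sides for every element before a mask selects the survivor; this is structurally similar to SIMT predication on a GPU, though NumPy itself runs on the CPU.

```python
import numpy as np

x = np.arange(8, dtype=np.float32)

# CPU-style branch: each value executes only one side of the if/else.
def branchy(v: float) -> float:
    return v * 2.0 if v < 4.0 else v - 4.0

# GPU-style predication: BOTH sides are computed for every lane,
# then a boolean mask picks the result. A divergent warp pays for
# both paths, which is why GPU kernels try to stay branchless.
branchless = np.where(x < 4.0, x * 2.0, x - 4.0)

print(branchless.tolist())  # same answers, different execution model
```

Both versions agree on the output; the difference is purely in how much work is done per element, which is the commenter's point about divergence wasting GPU time.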

    • @NikolaNevenov86
      @NikolaNevenov86 7 months ago

      @@mileswithau Well, technically you are explaining that the GPU is made up of lots of simpler (less powerful) cores. Also, it's not that GPU cores cannot do conditional computation, but the design is that all cores in a workgroup have to evaluate, or wait for, the cores that go into the conditional statement. There is a nice talk/lecture by an AMD gal that explains why branching is usually slow and when it could be fast.

    • @mileswithau
      @mileswithau 7 months ago

      @@NikolaNevenov86 For certain workloads, yes, it can be avoided, but not in general.

  • @maxxoft
    @maxxoft 7 months ago +7

    I'm a software engineer, so I bought myself a 5700X instead of a 5600X when I was building an AM4 system, and I knew it wasn't going to be much better in games. So after all, it really depends

    • @deneguil-1618
      @deneguil-1618 7 months ago +1

      Same. I do quite a bit of image analysis and AI stuff; in the next system I build I'll be going with at least 12 cores. The 7900 and 7950X3D caught my eye last year for their mix of power and efficiency

  • @nicksterba
    @nicksterba 7 months ago +23

    For gaming, I think you really should be comparing the 5900x not to the 5800x3d, because that is obviously designed to be amazing for gaming with the extra cache, but to the standard 5800x. Those results would be more interesting imo. I don't think more cores are better for higher fps per se, but for higher 1% & 0.1% lows.
    Also, my 12 cores would like to have a word with you about video rendering, because yes, while GPU encoding is much quicker, there is a lot of debate between video/PC nerds on whether or not GPU video rendering outputs the same visual quality as CPU software encoding. I always CPU software encode videos for myself and clients for that bit of higher visual quality, so that I can more safely lower the bitrate by a little bit. Until it's proven that GPU encoding for video is the same or better than software encoding, I will continue to own high-core-count CPUs and will continue to CPU software encode videos.
    Besides all of that, it simply is nice to do multiple intensive things at the same time for a purely smooth user experience. I'm going to buy whatever is the final halo CPU for AM5 once that comes out years later.

    • @__-fi6xg
      @__-fi6xg 7 months ago +1

      We need an encoder for you my man, what are you on about.

    • @v5k456jh3
      @v5k456jh3 7 months ago +3

      If even nerds can't decide if CPU encode is better or not then it's either placebo effect or a negligible difference.

    • @nicksterba
      @nicksterba 7 months ago

      @@__-fi6xg Software vs hardware encoding quality. It's hard to find open info about it, but many people who are very into this topic say that software encoding (CPU) gives better quality than hardware encoding.

    • @__-fi6xg
      @__-fi6xg 7 months ago

      @@nicksterba Yeah, and Linux has better audio than Windows, but with so many additional downsides that it's just not convenient enough to compete with Windows.

    • @Tainted79
      @Tainted79 7 months ago +7

      I'm a retired IT professional. One of the last contract jobs I had was for a special effects company. They used Threadripper AMD CPUs for rendering final cuts. Because of the speed, of course. But most importantly, CPU rendering was 99.998% accurate, as opposed to GPU at 96-97%.

  • @HeartOfAdel
    @HeartOfAdel 7 months ago +39

    The thing that almost no one ever mentions is LATENCY and I/O performance. Lower core count CPUs, with one CCD, have better Windows response times, so overall usage feels snappier. It matters for productivity too.
    Hybrid architectures seem to have latency issues, and dual-CCD/Infinity Fabric designs struggle too. Intel's older ring bus still seems to have the best I/O performance, with many people claiming a snappier PC with a 10700K vs a 5800x3d or 12900K. I wish people brought it up more actively.

    • @shaynegadsden
      @shaynegadsden 7 months ago +3

      It's because people talk shit. The latency at that level is measured in nanoseconds; that is faster than the fastest reflexes of any species in the world

    • @HeartOfAdel
      @HeartOfAdel 7 months ago +8

      @@shaynegadsden it's not just that dude. These things are quite noticeable: UI is smoother, dragging and dropping is better, apps open/close in a snap. I immediately feel better response and smoothness after ram overclocking. Even going from 40ns to 34ns is very noticeable for whatever reason when tightening power saving timings. And io performance difference can be noticed easily it's not just nanoseconds that show that.

    • @shaynegadsden
      @shaynegadsden 7 months ago +4

      @@HeartOfAdel That is general system performance, not I/O or core-to-core latency over the Infinity Fabric. Also, going from 40 to 36 ns isn't noticeable, but if you think you made an improvement your brain will make you feel like it's better. Just to give perspective, 40 ns means it can happen 2500 times in a single blink, or 50 times faster than the highest-refresh-rate monitor, which is 500 Hz

    • @HeartOfAdel
      @HeartOfAdel 7 months ago +2

      @@shaynegadsden Bear in mind that I'm generalizing, and by I/O I mean everything that's responsible for the eventual latency. But all of that, RAM timings, ring or IF speeds, core-to-core latency, etc., does make a difference, even for UI animation smoothness.
      It's clear that those nanoseconds can't make much difference by themselves, but that's why I said "with power saving timings". PPD 0, for example, can shave off 3 ns of latency, but the amount of smoothness it gives a system is as great as switching from droopy core speeds to a locked 5 GHz; the UI smoothness increase is insane. Clearly there are other bottlenecks that get alleviated besides what the AIDA64 nanoseconds show, and it is defined by many factors that I've already mentioned above. Overall, all of that can make a noticeably snappier machine, with faster "input-output"

    • @anemone5870
      @anemone5870 7 months ago +2

      I guess you must be one of those guys who claim they can see 1000 fps in CS:GO and can feel nanosecond latency differences.

  • @X_irtz
    @X_irtz 7 months ago +32

    I think it generally matters in some cases such as, as you mentioned, encoding and rendering, but for gaming it kinda just boils down to the single core performance.

    • @HoD999x
      @HoD999x 7 months ago +2

      modern games utilize ~6 cores well

    • @HoD999x
      @HoD999x 7 months ago

      @Noodles1922 only in extreme cases (assuming all the cores are used)

    • @shaynegadsden
      @shaynegadsden 7 months ago

      Games will use more and more cores over time, because the simple fact is that it's now much easier to add extra cores than to increase the performance of one, and the hardware will dictate the direction of the software

    • @heyitsmejm4792
      @heyitsmejm4792 7 months ago +1

      @@shaynegadsden Not really, because your high-core-count CPU of today won't be fast enough to run future games, since games care about single-core performance. Even if it can accommodate the game core-count-wise, it won't be much faster in single-core terms, so you're gaining little if anything; even a modern 6-core CPU would give better performance.

    • @shaynegadsden
      @shaynegadsden 7 months ago

      @@heyitsmejm4792 I don't know if you've noticed, but single-core performance hasn't exactly been going up by much, and there is an ever increasing number of games that will use 8 cores, with some seeing 25% more performance from 8 cores vs 6 (though most are around the 5% mark). Even at 5%, that is more than Intel's single-core performance gains over the past 3 generations of CPUs. And it's only been 5 years since 8 cores were even really an option on desktop PCs, meaning we are only just starting to see games built with them in mind

  • @RawfunRahman
    @RawfunRahman 7 months ago +3

    Something you missed: for productivity 3D workloads like Blender, having more cores/threads is actually beneficial, because then the GPU can render multiple "tiles" at once.

    • @RADkate
      @RADkate 7 months ago

      No one sane is CPU rendering anymore, but stuff like sculpting is heavily multi-threaded

    • @RawfunRahman
      @RawfunRahman 7 months ago +1

      @@RADkate Did I mention CPU rendering? Obviously nobody renders on the CPU. But I think you missed my point. Go to the Performance tab under the render settings and see that there are options for setting tiles. More cores/threads, more tiles the GPU can simultaneously render.

    • @mycelia_ow
      @mycelia_ow 6 months ago

      @@RADkate They work in tandem mate lol

    • @RADkate
      @RADkate 6 months ago

      @@RawfunRahman yeah idk tiles make my renders way slower

  • @sergione371
    @sergione371 7 months ago +10

    The comparison between the 5900x and 5800x3d is totally incorrect!
    The 5800x3d has a 3D cache that is known to be particularly useful in games; this isn't about core counts...

    • @Greenalex89
      @Greenalex89 7 months ago

      True, but a 7700x or even a 7600x (6 cores, I think) or older CPUs would have shown similar results. More cores won't translate to more fps, since games simply are not capable of using more than a few cores. 99.99% of games can't make any use of more than a few cores, since most game engines are still built around old core-utilization models. New games are made with old game-creation tools, so to speak.

    • @__-fi6xg
      @__-fi6xg 7 months ago +1

      But doesn't that prove that cheaper is better with X3D cache? Which happens to come with fewer cores as well? It's all intertwined, and the 7800x3d happens to be the best in gaming with only 8 cores. Why is a modded Skyline outperforming a Ferrari with less horsepower? Let's ignore the turbo, it's only about the horsepower; that's basically your argument in car terms.

    • @sergione371
      @sergione371 7 months ago

      @@Greenalex89 You generally compare CPUs with the same architecture. I agree that more cores doesn't mean better performance, but the 3D cache is the point here.
      If you take a 7600x and a 7900x you have a gap of 4 fps, maybe...
      But if you take a 7600x and a 7800x3d, with only 2 cores more, there is a huge difference thanks to the 3D cache.
      If a 7600x3d existed, it would be exactly the same thing, as it would push more than the one without 3D cache

  • @thetechnerd7467
    @thetechnerd7467 7 months ago +6

    You can do background tasks with a 6-core CPU just fine without impacting performance much, if at all. Unless you do rendering in the background or something like that.

    • @blkspade23
      @blkspade23 7 months ago

      Well, 6-core CPUs are already on the edge of becoming the new quad core. TLOU is using 80% of 16 threads, which means about 12.8 threads. CP2077 will use 16 threads if you have more than that, but intelligently scales when you have 16 or fewer so the game doesn't eat the whole CPU. I've seen Star Citizen using nearly 60% of a 5950X, or about 19 threads. Practically all future PS5 ports will likely be capable of, or demanding, 12-16 threads.

    • @quintit
      @quintit 7 months ago +1

      Yeah, "unless your background programs are actually processing" is pretty much the whole argument. This video legit could have just been: "Don't buy high-core-count CPUs unless you actually need more core processing power, and games don't count for that use case." Boom, video summarized.

  • @dragonman910
    @dragonman910 7 months ago +2

    In most cases for gaming, the architecture and single core speed of the cpu makes up for the difference in core count.

    • @vyor8837
      @vyor8837 7 months ago +1

      And then 5 years pass.
      And you're finding that overall CPU performance starts mattering.
      You know, like how the 6700k is now slower than the 1800X in games.

  • @arghpee
    @arghpee 7 months ago +4

    Emulation is a great use case for more cores. Virtual machines also generally require lots of cores and memory if you're multitasking. I noticed that ReShade uses all 12 cores of my r9 5900x, running different effects on each. Clock speed/IPC is king in gaming; single-thread performance is why you can have a fine experience with a 7600x/13600k system. More cores come in for very specific use cases such as art production, streaming, editing, programming, etc., which is why I went with the 5900x and will likely upgrade to an 8950x3d or similar when the 50 series or 8000 series comes.

  • @nickbabin5884
    @nickbabin5884 7 months ago

    Thank you for this video, this has always been a blurry area for me in my pc knowledge. Also I loved the dope Pokémon background music!

  • @justincowans2677
    @justincowans2677 7 months ago +11

    Great video. All I do is game, so I'm super happy with my R5 5600.
    Maybe you shouldn't use the 5800x3d in your comparison without mentioning the 3D cache.
    Someone looking to buy may get the 5800 non-3D and think that they will be getting the same performance.

    • @ArtisChronicles
      @ArtisChronicles 7 months ago

      I looked for something that fit nearly the same power requirements as my previous 1700x, and that led me to the 5700x. I didn't consider the 5600 or 5600x, since there are definitely times when I'd affect performance by doing some extra stuff; I've already seen it with my 5700x.

    • @K.C-2049
      @K.C-2049 6 months ago

      @@ArtisChronicles The 5700x seems a great balance between power and efficiency. I just threw one in my PC and it runs cool and speedy (mind you, the ambient temp in my room is like 18 C in winter)

  • @TDamsma
    @TDamsma 7 months ago +6

    You're absolutely right. The AMD Ryzen 7 5800X3D outperforms in gaming thanks to its efficient utilization of the 3D cache, which games tend to benefit from. On the other hand, while the Intel i5-14600K does boast 14 cores, it's essential to note that 8 of these cores are E-cores primarily intended for tasks beyond gaming. These E-cores come into play when handling more demanding, multitasking workloads.

    • @adammartin4906
      @adammartin4906 5 months ago +1

      This dude is just repeating what techies said back in 2014, the whole "4 cores is enough" thing. What people haven't done is their research: most triple-A titles now can utilize 16 cores and get more FPS out of these big-core-count processors. So while this used to be true, it's not true at all in 2023, and this dude sucks at tech if he keeps repeating what techies said in 2014. What sucks is that the people watching just roll with it, not knowing they were duped hard. WOW!!!

  • @HeloHounds
    @HeloHounds 7 months ago +1

    A high-core-count CPU in conjunction with a good GPU makes workflow processes much faster. I for one noticed the biggest differences in Blender and DaVinci Resolve from upgrading my 2700X to a 5900X. I had originally started by upgrading the GPU (Vega 56) to a 3060ti... It wasn't a huge difference; it was the CPU upgrade that made an incredibly noticeable difference. I have since upgraded to a 3090 and once more, not a huge difference (it is there, but not huge) in DaVinci or Blender (yes, I have my GPU set for processing and rendering).
    All in all, when my group asks what they should get for upgrades, I always ask if it's for gaming only or if they do video or 3D rendering... If it's gaming only I always recommend the 8-core X3D (5800 or 7800), as per YouTube reviews the higher-core-count X3D chips are slower than the 8-core in GAMING. If they do the other stuff along with gaming and they are going completely new and top of the line, I recommend the 7900/7950 X3D chips. If they have an AMD chipset and it's AM4, I recommend the 5900X or 5950X equivalents.

  • @pascaldifolco4611
    @pascaldifolco4611 Před 7 měsíci +6

    I fully agree, was kinda hypnotized by the core count when I got a 3900X, upgraded to a 5900X, but it's overkill for what I do, next one will definitely be 8800X3D ^^

  • @hermanogris143
    @hermanogris143 Před 7 měsíci

    6:40 Well, some people like me, for example, have the money to buy a newer CPU, but that means buying a new motherboard for it. So as an AM4 user with a Ryzen 7 3700X, I would need to buy a Ryzen 5000 processor; it's not like I could grab a 7700X and mount it on this motherboard.
    If I wanted to stream and play competitive titles, which one would you pick between the 5900X and 5800X3D?

  • @lorenzo0932
    @lorenzo0932 Před 7 měsíci +5

    For me, ffmpeg file compression makes high-core-count CPUs very appealing, because hardware encoders are very, very fast but compromise on quality or file size (at the same bitrate the quality is worse; at the same quality the bitrate is much higher).
    It's a niche use case and won't apply to most people, but it's still a use case.
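    To make that trade-off concrete, here is a small sketch that builds the two ffmpeg command lines you would race against each other. The file names and quality value are placeholders, though the encoder flags (`libx265 -crf`, `hevc_nvenc -cq`) are standard ffmpeg options.

```python
# Sketch: compare a CPU (libx265) vs GPU (NVENC) HEVC encode of the same
# clip. The software path scales with core count; the hardware path uses
# the card's fixed-function encoder. File names are placeholders.

def ffmpeg_cmd(infile: str, outfile: str, encoder: str, quality: int) -> list[str]:
    """Return an ffmpeg argv list for a constant-quality HEVC encode."""
    if encoder == "libx265":
        # Software encoder: uses all CPU cores, quality set via -crf.
        codec_args = ["-c:v", "libx265", "-preset", "slow", "-crf", str(quality)]
    elif encoder == "hevc_nvenc":
        # NVIDIA hardware encoder: much faster, typically larger files
        # at the same visual quality; -cq is its constant-quality knob.
        codec_args = ["-c:v", "hevc_nvenc", "-preset", "p7", "-cq", str(quality)]
    else:
        raise ValueError(f"unknown encoder: {encoder}")
    return ["ffmpeg", "-i", infile, *codec_args, outfile]

cpu_cmd = ffmpeg_cmd("clip.mp4", "clip_cpu.mp4", "libx265", 23)
gpu_cmd = ffmpeg_cmd("clip.mp4", "clip_gpu.mp4", "hevc_nvenc", 23)
print(" ".join(cpu_cmd))
print(" ".join(gpu_cmd))
```

    On a 12- or 16-core CPU, the libx265 run spreads across all cores, while the NVENC run finishes faster but usually with a larger file at comparable quality, which is the commenter's point.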

    • @shanent5793
      @shanent5793 Před 7 měsíci +1

      Are there any free encoders that leverage GPU compute and not the dedicated encoding hardware to accelerate video compression? It seems like that should allow some flexibility in the algorithm yet still spare the CPU some of the work

    • @lorenzo0932
      @lorenzo0932 Před 7 měsíci +1

      @@shanent5793 This is a really interesting question. As far as I know, there aren't free encoders that use GPU compute instead of the hardware encoders. Maybe I'm wrong, but I can't find anything about it. Maybe closed-source projects like DaVinci Resolve or Premiere Pro use this approach, but I don't know.
      If anyone knows something about it, please let me know.
      PS: Sorry for my English.

    • @shanent5793
      @shanent5793 Před 7 měsíci

      @@lorenzo0932 From the original RDNA white paper: "In many cases, video encoding and decoding can be performed in software on the RDNA dual compute units for the highest quality." Apple workstations used to come with Radeon, maybe they had some specialized software for Mac. I think the studios just use Threadripper PRO for software encoding today

  • @E2_Hawkeye
    @E2_Hawkeye Před 7 měsíci +1

    I will say that a lot of other video rendering software can't seem to use the GPU, or the support is in beta. When I render in this AI application I have, the GPU cranks it out in a fraction of the time but it looks horrible; that mode is in beta. That being said, the 5800X3D still finishes videos in a fine amount of time; it takes longer for YouTube to push them to HD/4K, and it gets my shorts out the door in like 30 seconds.

  • @VenomVendue
    @VenomVendue Před 7 měsíci +6

    I got the 3900X instead of the 3600X back in the day because I do a lot of multitasking, not just playing a game: doing work, streaming, sometimes running multiple instances of a game or two different games at a time. That's why I appreciate having a lot of cores. Granted, if you're just gaming, then of course go with the 6-core or 8-core.

    • @Haggy836
      @Haggy836 Před 7 měsíci

      You can multitask just the same on an 8-core as on a 12-core. That's the point he's making: all of those other things you listed are better done on a GPU, not a CPU, so those extra cores are doing nothing.

    • @VenomVendue
      @VenomVendue Před 7 měsíci

      If I didn't have a 12-core and didn't monitor the CPU usage I might agree with you, but objectively you're wrong in my use case
      @@Haggy836

  • @constanciokurdapio541
    @constanciokurdapio541 Před 7 měsíci

    Are these results solely from gaming tests? What about gaming while having multiple browser tabs on a 2nd screen, with various apps running in the background like antivirus, Spotify, WD Dashboard (in my case) and so on? I have a 5900X paired with a 6950 XT, playing Starfield, and YouTube still occasionally pauses. Casual gamers tend to have multiple apps running, so I'm wondering.

  • @TheEdge92
    @TheEdge92 Před 7 měsíci

    I'm looking for a good CPU combo for the 7800 XT, but PC build bottleneck calculators say that even the 5900X is too weak for this GPU at 1080p. 4K seems fine. Any ideas what CPU to get for playing at 1080p and 4K without bottlenecking too much?

  • @trackingdifbeatsaber8203
    @trackingdifbeatsaber8203 Před 7 měsíci +1

    I guess the one thing I can think of a high-core-count CPU being good for is encoding. Don't get me wrong: if you try to compete with the speed of NVENC, Quick Sync, or VCE you will get awful results. But if you're willing to wait, AV1 or H.265 off my 5900X can be much more efficient than what's possible with GPU encoding. I might be wrong, as the only GPUs I have on hand to test are a GTX 1080 Ti and an RX 7600, but the AV1 encoder on the 7600 is roughly equivalent to preset 9 or 10, which I can do in real time on my CPU at 1080p in Doom Eternal. Maybe the NVIDIA encoder this generation is just insanely good, but if I wanted to create something that looked good while being bitrate-constrained, I would definitely go with CPU encoding, which benefits from more cores.

  • @Greenwithao
    @Greenwithao Před 7 měsíci +2

    I was under the impression that if you're playing a game and have other apps running, like some browser tabs, Discord, and Spotify, more cores would be better. It would be nice to know framerate and frame timing on a real-world setup; everyone seems to benchmark on clean computers with best-case scenarios.

    • @quintit
      @quintit Před 7 měsíci +1

      More cores lets you multitask with less interruptions yea, but you won't notice the difference unless multiple of your open processes are actually *processing* lol. Or if you have certain cores dedicated to a specific process.

  • @mojojojo6292
    @mojojojo6292 Před 7 měsíci +1

    There is an area in 3D rendering that still relies on the CPU: shader compilation. Whether loading a scene or a single complex material, it can be quite slow without a fast high-core CPU. Something not mentioned as well is that you pretty much need an NVIDIA card for all of the tasks that can be boosted by the GPU, especially the RTX line. AMD's software support is terrible.

  • @me22gigabyte
    @me22gigabyte Před 7 měsíci +1

    And here I am with an i3-12100F 4 cores, kinda considering upgrading to an i5 next year but I really don’t have any issues with my CPU atm

  • @AlucardNoir
    @AlucardNoir Před 7 měsíci +3

    There's one thing you haven't mentioned, and that's the type of games people play. Some games are a lot more CPU-heavy. For those games it doesn't really matter if you have a 4070 or a 4090; the CPU is the one that's going to kill your performance. Games like Path of Exile, or the newly released Cities: Skylines II. Depending on the kind of games you play, you can get away with a lower-class CPU and save hundreds, or you can get away with a cheaper GPU and buy the more expensive CPU. It's all a matter of what you need. The problem is most people don't really know what they need.

    • @SomePotato
      @SomePotato Před 5 měsíci

      And then there is Dwarf Fortress which would really like a 10GHz single core. :D

  • @2ndLastJedi
    @2ndLastJedi Před 7 měsíci

    You're saying get an 8-core, so do you consider the 13600K to be more than 6 cores? Are the E-cores enough for those background tasks?

  • @TheACLP
    @TheACLP Před 7 měsíci +1

    Posted 2 hours ago, Vex out here grinding YouTube. Good content, quality went up, even your acne cleared up. Good job!

  • @kaiservulcan
    @kaiservulcan Před 7 měsíci

    You show interesting results with AMD GPUs. Is ROCm mandatory in order to obtain this kind of result?

  • @1213dragon1
    @1213dragon1 Před 7 měsíci

    What would be the ideal upgrade from a Ryzen 5 3600 to pair with something like an RX 7800 XT?
    I'm planning on upgrading my GPU to the RX 7800 XT, then later the RAM to a 32 GB 3600 MHz kit, then the CPU.

  • @Kevwylde330
    @Kevwylde330 Před 7 měsíci

    I agree with this to a point. When I build computers for my friends, I've gone ahead and recommended the X3D chips. For myself, though, I am using a 12-core CPU, but that's because I tend to just like having EVERYTHING open for my workflow, plus working IT using virtual machines and whatnot. Even my memory usage is around 80 percent of 32 gigs of RAM, but I also understand the type of power user I am. You're still correct that for the MAJORITY of users a high core count would not benefit them that much.

  • @atoo7yt490
    @atoo7yt490 Před 7 měsíci +1

    It depends on how the game uses your cores. Cyberpunk 2077 got an update to use 8 and more cores, NFS Unbound uses all of my 8c/16t CPU, and I saw someone report shared utilization even on a 16c/32t. Yes, single-threaded games do worse, but those are usually old enough that it doesn't matter.

    • @kaimojepaslt
      @kaimojepaslt Před 7 měsíci

      No, it's just swapping around the cores. It doesn't use them all at once.

  • @witekborowski1410
    @witekborowski1410 Před 7 měsíci +1

    Back in the day I used to run scientific analyses on large datasets (hundreds of TB was a lot 10 years ago), and I could make use of any number of processors available. Maybe now you could use a GPU for some of the cases, but in general the CPU is the king in this type of work.

    • @quintit
      @quintit Před 7 měsíci

      Compression/decompression benefits greatly from additional cores, and ofc vms.
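      A minimal sketch of why compression parallelizes so well: split the input into independent chunks and compress them on worker threads. The chunk size and worker count below are arbitrary picks, not tuned values; zlib releases the GIL during compression, so even Python threads can keep several cores busy.

```python
# Sketch: chunked compression spread across worker threads. On an N-core
# CPU, large archives finish roughly N times faster because each chunk
# compresses independently.
import zlib
from concurrent.futures import ThreadPoolExecutor

def compress_chunks(data: bytes, chunk_size: int = 1 << 16, workers: int = 4):
    """Split data into chunks and compress each one in parallel."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda c: zlib.compress(c, 6), chunks))

original = b"the quick brown fox jumps over the lazy dog " * 10_000
compressed = compress_chunks(original)

# Each chunk decompresses on its own, so the result reassembles exactly.
restored = b"".join(zlib.decompress(c) for c in compressed)
assert restored == original
```

      Real archivers (7-Zip, zstd with `--threads`) use the same idea internally, which is why they scale with core count.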

  • @HawkieTalk
    @HawkieTalk Před 7 měsíci +2

    It's an issue with Windows' priority for cores, not about performance.
    It is also not necessary to test different processors: you can disable some cores and test the same processor with a different number of cores.
    Also, as a user of a 32:9 ultrawide monitor, having 24 threads is very useful.
    I often run 5-8 large programs, or when I play, I can run several games at once without any lag.

    • @Greenalex89
      @Greenalex89 Před 7 měsíci +3

      Exactly, most Users dont need high core count CPUs..at all.

    • @roccociccone597
      @roccociccone597 Před 7 měsíci

      @@Greenalex89 That's true. But as someone who uses and benefits from a 16-core desktop part, I'm happy they exist, because I used to have to buy the HEDT parts, which were way too expensive.

    • @Greenalex89
      @Greenalex89 Před 7 měsíci

      @@roccociccone597 Of course, Ive always been a fan of buying reasonably. Its just that so many people are getting fooled thinking a high number of cores equals higher FPS rates in games.

  • @RikMahanta
    @RikMahanta Před 7 měsíci

    I want to buy an Arc A750. Should I buy it now, at the end of 2023, or wait for other graphics cards? My work is video editing and graphic design.

  • @Innosos
    @Innosos Před 7 měsíci +5

    Downvoted. Comparing across different manufacturers and architectures is straight up misleading. So I have to conclude that you either don't know what you're talking about, or do know and intentionally chose to misrepresent the issue for greater effect.
    The issue of a 14900K drawing more power than a 7800X3D does not follow from the number of cores but rather from the architecture, process node, power limits/clock speeds, and the X3D's cache.
    If you wanted to get your point across, you could have pitted two non-3D-cache Zen 3 CPUs against each other and matched clocks, or, alternatively, disabled some of the cores on your 5900X and compared results to show that more than 6 cores made little to no difference in gaming.

    • @SKHYJINX
      @SKHYJINX Před 7 měsíci +1

      This channel was better before he dumbed it down to Daniel Owen levels; now he's just like the rest of them...
      YouTube kinda forces it though, there's a reason they go this route.

    • @Innosos
      @Innosos Před 7 měsíci +1

      @@SKHYJINX Dumbing it down is fine if what he said was factually correct. There's ways to end at the very same conclusion without using underhanded tactics that do not make him come across as either incompetent or manipulative.

    • @SKHYJINX
      @SKHYJINX Před 7 měsíci

      @@Innosos 100% this whole video was a shit show and made for the "masses" and you can tell by other videos he knows better.

  • @jondoe9351
    @jondoe9351 Před 6 měsíci

    Depends on whether games can use all your cores; disable the E-cores for gaming if you have them.

  • @monsterboomer8051
    @monsterboomer8051 Před 7 měsíci

    Should I buy the 7800X3D, or am I totally fine with the 6-core 7600 for 1440p gaming at 120 Hz?

  • @vintologi
    @vintologi Před 7 měsíci

    Not all productivity programs support and benefit from GPU acceleration.
    There are also some games that benefit even from E-cores. The issue with AMD CPUs is that you get no more than 8 cores on each CCD, which prevents games from effectively utilizing more than 8.

  • @gamerdon6
    @gamerdon6 Před 7 měsíci

    One thing to mention here is that the 8-core you used is not just a regular 8-core: it has 2-3 times more cache than a normal 8-core, and that also matters in games.

  • @odenz91
    @odenz91 Před 7 měsíci

    Should I buy the 7500F or the 5700X? Confused! The 7500F is cheaper than the 5700X by 11 dollars in my country.

  • @mhh3
    @mhh3 Před 7 měsíci

    TBH, if you want to encode and have the smallest file size, CPU encoding is the way to go. I tested it with an NVIDIA GPU and used both H.264 on the GPU and x264 on the CPU; yes, the GPU is faster, but the file size is smaller when it's done on the CPU.

  • @OneAngrehCat
    @OneAngrehCat Před 7 měsíci +1

    The fundamental MORE COREZ problem is that multithreading isn't fun.
    Programmers will thread/use multicore when they MUST, not when they CAN.
    The expectation that many cores will lead to a better experience is very wrong. It's diminishing returns past 8, and way worse diminishing returns past 16. This is why AMD/Intel don't seem interested in raising the core count past the current numbers. They'll do it eventually, but it won't be what people expect.
    Some highly parallel stuff like databases, compilation, blender render etc are the exception rather than the rule. Games are highly single threaded too.
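    The diminishing returns described above are what Amdahl's law predicts. A quick sketch, using an illustrative 90% parallel fraction rather than a measurement of any real game:

```python
# Amdahl's law: if only a fraction p of the work can run in parallel,
# extra cores hit diminishing returns fast. The 90% figure below is
# illustrative, not a profile of any particular program.

def speedup(p: float, cores: int) -> float:
    """Overall speedup with `cores` workers when fraction `p` parallelizes."""
    return 1.0 / ((1.0 - p) + p / cores)

# A "well-threaded" workload with 90% parallel work:
for n in (4, 8, 16, 32):
    print(f"{n:2d} cores -> {speedup(0.9, n):.2f}x")
# Prints 3.08x, 4.71x, 6.40x, 7.80x: doubling from 8 to 16 cores
# gains only ~36%, and 16 to 32 gains even less.
```

    Even with an optimistic 90% parallel fraction, the curve flattens hard past 8 cores, which matches the commenter's point about where AMD and Intel have settled.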

  • @ProtoType4588
    @ProtoType4588 Před 7 měsíci

    My PC will always be workstation first, then gaming, so when I had to pick between the 5800X3D and the 5900X I went with the 5900X, because the plugin I use in Photoshop, called Filter Forge, is CPU-bound; the more cores you've got, the faster it renders.

  • @mudgie0205
    @mudgie0205 Před 7 měsíci +1

    Multitasking performance is what a computer generally needs even if you’re ’just gaming’. More cores will make OS hitches less prevalent or even non-existent

  • @DonovanMcNabb5
    @DonovanMcNabb5 Před 7 měsíci +1

    The size of the video from GPU encoding can vary a lot. I still prefer CPU encoding for smaller sizes and much better quality, and I went from 6 to 8 to 12 to 16 cores because, yes, core count matters.

  • @ComedicRick
    @ComedicRick Před 7 měsíci +1

    GPUs can't do everything, though. If you want to use CPU-only renderers (most pro-level render engines don't have feature and image parity on the GPU, if they even have a GPU implementation; it's great if you only use Redshift or Cycles, but lots of people don't, or do but have scenes that exceed the VRAM and fall back onto the CPU), or simulations that don't run on the GPU, then you need a high-core-count CPU to crunch through it.

  • @user-xp2gc9dn2g
    @user-xp2gc9dn2g Před 7 měsíci +1

    Both CPUs in your test need to be the X3D versions for an apples-to-apples test.

  • @Sekiro-3.14
    @Sekiro-3.14 Před 7 měsíci +1

    I have the i7-9700K, perhaps the fastest processor without hyperthreading that I know of, with its 8 cores reaching 5 GHz and not exceeding 65° on a simple air cooler, taking full advantage of my RTX 2080 Super. So far it has never stopped me from working or playing anything, and I use it very intensively. Thanks for your videos, Vex!

    • @TillTheLightTakesUs
      @TillTheLightTakesUs Před 7 měsíci +1

      I paired that with 2x 2080 tis back in the day. Now it's powering a 4090 would you believe it? I still don't see many scenarios where it gets slowed down by my cpu. There are some, yes, but when my cpu is slowing things down, things are already so fast... I think that's the key people are forgetting. You don't have to have 100% gpu usage, not all the time. They go apeshit the moment their gpus aren't being 99% utilized and they call bottleneck. Sure a bottleneck it may be, but do you not like the performance you're getting now? That's a good question to ask. Are you not happy already when your cpu is bottlenecking? Do you have to have 1000 fps in csgo rather than 950?

  • @TheAnoniemo
    @TheAnoniemo Před 7 měsíci +5

    I feel like you're glossing over a lot of important details here, like the implications of the 3D v-cache on the 5800X3D, the dual CCD nature of the 5900X, the big vs little cores on Intel. Also, there are still many productivity tasks that are not or cannot be GPU accelerated, or like some video work will lead to better quality.

  • @pa-pyro2804
    @pa-pyro2804 Před 7 měsíci +3

    8 cores is absolutely enough, but I think 12 will probably become standard in the future. Personally I don't think too much about it; I'd like at least 8 cores, but high frequency is more important in my experience anyway. I run a 7700X at 5.5 GHz.

    • @darthwiizius
      @darthwiizius Před 6 měsíci

      I went from an 8-core Zen chip in my B450 build I bought exactly 4 years ago to the cheapest of the cheap 6-core Zen 3 chip. I swapped for core-quality reasons: Zen can't match your chip in single-threaded games, and Zen has problems with CryEngine games and frame timings in general. The Zen 3 part is a lot better for gaming.

  • @wumso-ih1xq
    @wumso-ih1xq Před 7 měsíci +1

    For gaming it's safe to say you won't need more cores than there are in the PlayStation and Xbox APUs.
    So 8c/16t should be the maximum when shopping for a price-to-performance gaming build.
    If it's only for gaming, look for better IPC, single-thread performance, or a bigger cache.

    • @wumso-ih1xq
      @wumso-ih1xq Před 7 měsíci

      Or if it's above FHD, just take the most power-efficient one, because gaming at 1440p and 2160p is seriously GPU-bottlenecked and isn't so heavily dependent on general CPU performance.

  • @johnpaulbacon8320
    @johnpaulbacon8320 Před 7 měsíci

    Thanks for this very well presented and informative video.

  • @thepcenthusiastchannel2300
    @thepcenthusiastchannel2300 Před 7 měsíci +1

    I would disagree with the assessment made at 0:55 into the video. While GPU encoding is faster, it cannot be as finely tuned, nor does it produce the same level of image quality. This is especially true when comparing file size against image quality between a video transcoded on a GPU and one done on the CPU: the CPU result will have a smaller file size while maintaining higher image fidelity, but will take longer to encode.
    This is why people get CPUs with more cores. The end result is just not the same with a GPU. It's good enough for most people who need something quick or who are going to upload to YouTube (video compression wrecks any quality advantages), but it's not what I use to transcode movies for my Plex server. I use the CPU, and yes... it takes a long time, but the end result is preferable when displayed on a 4K TV.

  • @dragonman910
    @dragonman910 Před 7 měsíci +1

    Here's my take on that (apologies for it being long):
    PC building/specs have always boiled down to one thing: what do you mainly use your system for? Streaming, gaming, editing, 3d modeling, work from home, or a mix of all of them.
    High core count cpus are just a bragging point for people if their workload doesn't require a lot of cpu cores (workloads like gaming). If a cpu with less cores is matching or even beating the higher core one, you're paying more for no extra performance.
    For the longest time, Intel used the 4 core model for cpus because that was all you needed. Even though games are using more cpu cores than ever, it's still nowhere near enough to fully utilize high core counts.

  • @yarmgl1613
    @yarmgl1613 Před 7 měsíci

    as a programmer, 12 cores is perfect for me, android apps compilation is super fast, took about 5-10 minutes on my 4 core 9300h, now takes less than a minute with the r9 7900, opening up my IDE and scanning all the files takes seconds on huge projects, huge productivity booster, but ofc an 8 core 7700 would suffice

  • @LOVIATHAN
    @LOVIATHAN Před 7 měsíci +3

    My 4-core i7-6700K with a 980 Ti can still play semi-old games at ultrawide 1440p at 144 frames, all while having Discord on a side monitor. Of course it depends on the game and I usually have to sacrifice settings, but you'd be surprised how far this tech will get you! Looking to upgrade to a 7700X and a 7800 XT soon though, so I can play newer games like Halo Infinite and Warzone at high settings!🔥🔥🔥

    • @istvanszabo5745
      @istvanszabo5745 Před 7 měsíci +1

      A 6700k is way too weak for such gpus tho, there will be severe bottlenecking.

    • @LOVIATHAN
      @LOVIATHAN Před 7 měsíci

      @@istvanszabo5745 I also said a 7700x

    • @istvanszabo5745
      @istvanszabo5745 Před 7 měsíci

      @@LOVIATHAN And? Even a 6600 is too much for such an old cpu, but go ahead, waste your money on performance you will never get

    • @__-fi6xg
      @__-fi6xg Před 7 měsíci +1

      @@istvanszabo5745 He mentioned a CPU upgrade and a GPU upgrade; the 7700x is an AMD CPU, I think you didn't get that.

  • @loykunEX
    @loykunEX Před 7 měsíci

    I'm thinking of buying a new PC soon after 6 years. I will have several android emulators opening in the background while playing games so I personally find the multiple core CPUs to be quite appealing

  • @dadlord689
    @dadlord689 Před 6 měsíci

    In my practice, the only reason you would need as many cores as possible is compilation. And there we're talking about months of saved time per core.

  • @AngladaFrankFrankyvape
    @AngladaFrankFrankyvape Před 7 měsíci

    Isn't the 5800X3D a gaming-oriented CPU with 3D V-Cache? I mean, compare what is comparable at some point, like two CPUs with 3D V-Cache or two CPUs without.

  • @darkbooger
    @darkbooger Před 7 měsíci +1

    I see high core counts on CPUs as being more useful for productivity rather than gaming. I personally don't see a reason to go past 8 since it's not like I'm doing anything that will require anything beyond that.

  • @kiwibom1
    @kiwibom1 Před 7 měsíci +1

    Core count is not the only important thing in a cpu’s gaming performance. IPC, cache size, ram speed. Currently for gaming all you need is 6-8 cores, more is kinda useless.
    The new intel cpu’s (12gen and up) are different, for gaming i personally don’t count E-cores as "real cores". They don’t do much at all in gaming currently and probably ever. They can help in other tasks but not gaming.
    For video encoding (for the little i know about that workload), cpu encoding is obviously much slower but gives much better results than gpu encoding. At least that’s what i read online.

  • @NikolaNevenov86
    @NikolaNevenov86 Před 7 měsíci

    To my knowledge, simple core count isn't the full picture. There is core frequency, and L1/L2 cache as well. I would imagine a 5 GHz 4-core CPU will beat a 2 GHz 12-core CPU on game-related tasks.
    That being said, if you are using the CPU for work, most likely you will need more cores because you will have more applications running.
    Most of the software I use takes at max 10-13% of the CPU. Stack 3 of those and some minor ones like an audio player, Slack, and the browser, and all of a sudden that 8-core 4 GHz CPU is struggling to open a folder.
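    A toy model of that intuition; the clock, IPC, and core numbers below are made up for illustration, not benchmarks of real CPUs:

```python
# Toy model: a game bottlenecked by one thread cares about per-core speed
# (clock x IPC), not core count; a fully parallel renderer cares about the
# product of all three. Figures are hypothetical.

def single_thread_perf(ghz: float, ipc: float) -> float:
    return ghz * ipc

def multi_thread_perf(ghz: float, ipc: float, cores: int) -> float:
    return ghz * ipc * cores

fast_quad = {"ghz": 5.0, "ipc": 1.0, "cores": 4}     # hypothetical 4-core @ 5 GHz
slow_twelve = {"ghz": 2.0, "ipc": 1.0, "cores": 12}  # hypothetical 12-core @ 2 GHz

# Single-thread-bound game: the quad wins, 5.0 vs 2.0.
assert single_thread_perf(fast_quad["ghz"], fast_quad["ipc"]) > \
       single_thread_perf(slow_twelve["ghz"], slow_twelve["ipc"])

# Fully parallel render job: the 12-core wins, 24.0 vs 20.0.
assert multi_thread_perf(**slow_twelve) > multi_thread_perf(**fast_quad)
```

    Real chips also differ in cache and memory latency, as the comment notes, but the model captures why neither core count nor clock alone decides a matchup.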

  • @KN-592
    @KN-592 Před 7 měsíci

    I do multitask with an i5-10400 and it is not too bad lol, so options like the 7700X etc. will be more than enough.

  • @calemj3679
    @calemj3679 Před 6 měsíci

    Looking to upgrade my pc and I'm unsure what to get. I got a gtx 1070ti and i5 8600k. I wanna be able to play 2k gaming, does anyone have any suggestions?

  • @ShadarThePcBuilder
    @ShadarThePcBuilder Před 7 měsíci

    What CPU would you recommend for an RX 7800 XT on a 1200 budget?

  • @skylake_will
    @skylake_will Před 6 měsíci

    To me, currently a 6-core Ryzen 5 CPU is enough, but I might get an 8-core Ryzen 7 to have more room while playing some open-world games with Discord running, or so editing some videos could be a little bit faster. But I mostly play AAA open-world games and a 6-core CPU is pretty good for now; maybe in a year or more I can think about switching my Ryzen 5 to a Ryzen 7.

  • @jarenpocopio6033
    @jarenpocopio6033 Před 7 měsíci

    There are a lot of very cheap Intel server CPUs with 24 cores, but the catch is that it's hard to find a motherboard for them.

  • @snowythecolaaddict
    @snowythecolaaddict Před 7 měsíci

    I bought a Ryzen 5900x because the extra cores allows me to run a gaming virtual machine with headroom for games I can't run under proton. Still a little sad I didn't buy my cpu 3 days earlier as I could've got the 5950x for £100 off.

  • @RobloxianX
    @RobloxianX Před 6 měsíci

    Stuff like the Intel Core series CPUs are really just 6 cores with the i5. Performance cores are the traditional cores that most people know; efficiency cores are more of a sideshow. So the whole "most people use a 6-core CPU" thing just boosts the i5 even more.

  • @jarjar621
    @jarjar621 Před 7 měsíci

    I'm on a 5600 right now, and I need those cores because sometimes I edit my videos while recording at the same time.

  • @yufoh7753
    @yufoh7753 Před 7 měsíci +1

    I paused at 2:25 and looked at my 7950X score then laughed.
    The point of more cores isn't about doing more in one thing at a time. It's about being able to do many multiple things at the same time, without the system chugging along. It just so happens it's better at doing some things singularly better at times also.

  • @rxpvblic
    @rxpvblic Před 7 měsíci +8

    As someone who went from a gaming laptop with HDDs, a GTX 1050, and a 4-core processor (i7-7700) to an i9-12900K / 3080 Ti / NVMe drives / 32 GB DDR5 RAM, it has been an utter blessing. I went with the best Intel CPU (at the time); granted, I could have gone with the 5800X3D, but I game a lot and do I.T. work. I would agree more cores is only a productivity preference, but in terms of gaming you're definitely fine with 8. Streaming was definitely a big change for me because the encoders on the 30 series are definitely better than what I had before lol, and now AV1 is on NVIDIA's 40 series so that's pretty nice.

    • @skorpers
      @skorpers Před 7 měsíci +1

      You work in I.T and decided to go intel?

  • @HaruVM
    @HaruVM Před 6 měsíci

    In After Effects, now that they implemented multi-core rendering in the newer CC versions, I sometimes save an hour of rendering on my 5900X with multi-core on versus off; it's impressive, and that's why I bought a 5900X :C but in games, yeah, I suffer x'D. Sadly, After Effects' GPU usage is really low, sometimes nonexistent.

  • @50H3i1
    @50H3i1 Před 7 měsíci +1

    If you got more than 8 P-cores on Intel, or a single-CCD AMD part with more than 8, then yeah, it might have a slight advantage for gaming (we don't have any CPU like that right now).
    V-Cache is a better bragging point than core count.

  • @icGhost81
    @icGhost81 Před 7 měsíci

    Well, from a 13600K user: basically it's still a 6-core/12-thread CPU, but you get the E-cores for background tasks.

  • @sudd3660
    @sudd3660 Před 7 měsíci

    This is good advice for those who did not know.
    When I got the 12-core 12700K, there wasn't much use for the extra cores in games so far, but they're nice to have on rare occasions. It seems like I'm either single-core bottlenecked or fully multi-core bottlenecked, where more cores are faster.
    Installation times can be improved, shader caching goes faster, opening applications can be multithreaded, and loading times improve, especially if you have a fast SSD.

  • @XxGorillaGodxX
    @XxGorillaGodxX Před 7 měsíci

    I have a pretty average system with a 3600XT and 5700 XT, and the CPU holds up just fine despite having a ton of browser tabs running while I play the most CPU intensive games I own. You can get a CPU decently faster than mine for not much money but it still holds up fine, so most people shouldn't worry too much about what CPU they buy unless it's being paired with a powerful graphics card.

  • @bluewave2432
    @bluewave2432 Před 7 měsíci

    All depends on what you are using the cpu for. Put em head to head in other productivity workloads and higher core count might matter.

  • @AdamIverson
    @AdamIverson Před 6 měsíci

    8 cores, 16 cores, that's cool and all, but what would happen if I were to get 192 cores using two AMD Epyc CPU? Overkill even for workstation? I saw it on ebay for $6000, comes with dual socket motherboard and two 96 cores AMD Epyc CPU, but still need RAM and everything else. Looks tempting.

  • @RNA0ROGER
    @RNA0ROGER Před 7 měsíci

    FFmpeg, all forms of video encoding, compiling: higher-core-count CPUs make a ton of sense if you can afford them. In the years to come they are also likely to pay dividends as games adopt more cores.

  • @anarchicnerd666
    @anarchicnerd666 Před 7 měsíci +6

    Great video Vex, plenty of good info and advice here!
    I can think of a task where having lots of cores counts: RPCS3, aka PS3 emulation. Pretty niche, but that emulator loves having lots and lots of cores, and even the most ridiculous Core i9 struggles with the highest-end games like God of War 3 and MGS4. RPCS3 performance was actually a key consideration of mine when building my system; I settled on the 5700X with its blend of cost, core count, and efficiency.

    • @skorpers
      @skorpers Před 7 měsíci

      How much did he pay you? Not a single point in this video was accurate.

  • @Technically_Techy
    @Technically_Techy Před 7 měsíci

    Yeah, I've been HEAVILY considering going from a Ryzen 9 3900X to the 5800X3D when I upgrade my GPU from a 2080 Super to a 7900 XT in the future.

  • @prose_nineonetwozerofivethree

    What is the background music?

  • @ibdoomed
    @ibdoomed Před 7 měsíci +1

    Don't forget that Microsoft is moving to a per-core licensing model: the more you have, the more you pay.

  • @albal156
    @albal156 Před 7 měsíci

    x264-encoding your stream or recording is another use case. I did that back when my GPU's encoder wasn't that great, during the 2017-2018 and pandemic mining booms.

  • @thejackimonster9689
    @thejackimonster9689 Před 7 měsíci +1

    I'd even argue that most users do not get any real advantage from high core counts. The main reason I see for having multiple cores is compilation of source code; a GPU can't be used for that because it's awful at text processing. Similar goes for encryption, compression, or any IO tasks, but that's not really something you do a lot unless you run a server. Developers, though, can see benefits from reducing their compile times to get more things done, so high core counts can be really useful there.
    Multitasking for the average user is a huge misconception, I think. Even if you run multiple processes and applications at the same time, a single CPU core can already handle that, because every desktop operating system is designed to handle different tasks in parallel by switching process contexts on the fly. Sure, having multiple cores can help when you need 100% CPU usage in multiple processes at the same time, but that's a really rare case, to be honest, especially when so much work can be processed by a GPU these days.
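    The context-switching point can be sketched with a single-threaded scheduler: one thread juggles several pretend applications by switching between them, much as an OS timeslices one core among many processes (the app names below are placeholders):

```python
# Sketch: one thread (one "core") runs three "applications" concurrently
# by switching between them at each yield point, the way an OS scheduler
# timeslices processes. No parallel hardware is involved.
import asyncio

async def app(name: str, ticks: int, log: list) -> None:
    """A pretend application that yields back to the scheduler each tick."""
    for i in range(ticks):
        log.append(f"{name}:{i}")
        await asyncio.sleep(0)  # yield, like an OS context switch

async def main() -> list:
    log: list = []
    # Three "apps" share one event loop on one thread.
    await asyncio.gather(
        app("browser", 3, log),
        app("spotify", 3, log),
        app("game", 3, log),
    )
    return log

log = asyncio.run(main())
print(log)  # the three apps' ticks come out interleaved, not one app at a time
```

    Extra cores only change the picture when several of these tasks each want 100% CPU at the same moment, which is the rare case the comment describes.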

    • @mileswithau
      @mileswithau Před 7 měsíci

      You can still reduce the amount of task switching that takes place by using a multicore CPU, and with a smart OS scheduler (as most are) many idle or low-activity tasks can be moved to one core, freeing up another to handle a smaller number of busier workloads. This is much lighter on CPU caches, and it means all the cores can run at lower speeds (which is nice, as making a CPU twice as fast is a lot harder than having two of them running in parallel).
      As for text processing, encryption, compression and IO: I would like to note that you are reading this on YouTube, which utilises all of those things (at least to an extent, much like most pieces of software).
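
The scheduling point in this thread (a single core can interleave many tasks; affinity decides which cores a process may use) can be demonstrated directly. This sketch pins the process to one core where the Linux-only `os.sched_setaffinity` API exists, then runs four concurrent tasks anyway; on other platforms the pinning step is simply skipped:

```python
import os
import threading

counters = [0, 0, 0, 0]

def tick(i, steps=50_000):
    # Each "task" increments its own counter; with the process confined
    # to one core, the kernel time-slices the threads on that core.
    for _ in range(steps):
        counters[i] += 1

if hasattr(os, "sched_setaffinity"):
    try:
        # Linux-only: confine this process to a single allowed core.
        os.sched_setaffinity(0, {min(os.sched_getaffinity(0))})
    except OSError:
        pass  # e.g. a container that forbids narrowing affinity

threads = [threading.Thread(target=tick, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All four tasks finished even if only one core was available to them.
assert all(c == 50_000 for c in counters)
print("all tasks completed on", len(os.sched_getaffinity(0))
      if hasattr(os, "sched_getaffinity") else os.cpu_count(), "core(s)")
```

More cores make this finish sooner when the tasks are genuinely CPU-bound, but nothing here *requires* them: the scheduler's context switching is what keeps everything responsive.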

  • @ThePianist4Life
    @ThePianist4Life Před 7 měsíci +1

    I have a couple of problems with your approach to this topic.
    Your general point is valid: don't just buy more cores, because more seems to be better. Always research what actually makes a difference for your use-cases.
    But:
    1) Title is misleading and annoying clickbait
    2) too much focus on (for most people) obvious problems of high core count CPUs (money, heat, power draw, use for gaming)
    3) comparing them to GPUs is meaningless. Only the fact that you shouldn't buy them for doing those tasks is a valid point.
    4) Talking about Intel's "Performance" and "Efficiency" cores is a totally different topic that is just confusing for people who don't already know about this stuff.
    An uninformed viewer could think that the same principle applies to all high core count CPUs (like the 5900x) - which is not the case.
    5) admitting (at the very end of the video) that there might be some use-cases that you didn't research,
    and mentioning some examples (multitasking), but without any data/measurements to back them up.
    You essentially talked about (subjective) feelings.
    This actually could have been an interesting topic: measuring the impact of multiple (common) apps running in parallel (on e.g. loading times, FPS, lags, ...), using 4, 8 or more cores.
    6) some use-cases that benefit from more cores (that immediately come to my mind): streaming & (almost any type of) creative 2D/3D professional work: video editing, modelling, texturing, ...
    While a few tasks are already optimized for GPUs, a lot of tasks are not, and likely never will be.
    How well these tasks benefit from more cores depends on the software's implementation, but there are many tasks that can be (and most of the time are) parallelized on the CPU, yet are not suitable for a GPU.
    For example:
    - opening (importing) a (big) project, which you do a couple of times every day
    - converting/modifying (big) assets like meshes in Blender, or textures, which you might do hundreds of times per day
    But actually measuring the differences between 4, 8 or 12 cores is a totally different and very difficult problem.
    With more cores, memory (RAM speed and capacity), and storage speed will likely become more important - potentially bottlenecking the higher core count CPUs.
    7) Your FPS comparisons in the beginning were also rather confusing. The 5800x3D is far superior because it has a different cache architecture, not because it has fewer cores or "better quality" cores.
    Comparing 5900x vs 5800x would better compare the core count of the CPUs. But even those CPUs have other important features, that are different between them (e.g. Cache).
    Disabling individual cores in the bios might be one option. (not changing the CPU)
    8) For a general statement about the impact of more cores (which is what you are attempting), you would need to test a lot of different CPUs, not only two.
    9) You neglected to talk about bottlenecks, especially in gaming. You only see a big difference in performance when the CPU is the actual bottleneck.
    But this only happens in a few cases. You would need to:
    a) use a high-end GPU, matched with a slow CPU
    b) play at low resolutions (1080p) on a very high refresh rate monitor (200Hz plus)
    In your footage you can see that the GPU is the actual bottleneck (it's always at 100% utilization).
    Higher core count CPUs are usually clocked lower (out of the box).
    But an individual core can clock higher, because they have a) more cores to choose from and b) higher-quality silicon; the silicon lottery is simply in their favor.
    You can even see the higher clock speed of the 5900x in the game footage you captured.
    I would strongly recommend that you look at some established tech channels (you mentioned GamersNexus) and really analyse how they do their measurements and comparisons.
    I hope this helps.

  • @doctorno3912
    @doctorno3912 Před 7 měsíci

    I opted for more cores (7950x) over the X3D largely because my gaming machine streams 1080p/4K video content from Plex to other devices around my home while I game.
    I don't even feel it when playing a game with a couple of streams running in the background.

  • @monikaw1179
    @monikaw1179 Před 7 měsíci

    It does depend on how important video quality is to you: software encoding (on the CPU) is far more configurable and still gives better quality than fixed-function hardware encoders. Still, NVENC etc. are 'good enough' for most people and use cases these days.

  • @dragonman910
    @dragonman910 Před 7 měsíci +1

    Here's my take on how many cores are ideal for gaming:
    4: absolute minimum for older and less CPU-intensive games with nothing else going on in the background.
    6: absolute minimum to run more modern, CPU-intensive games with nothing else in the background.
    8: ideal sweet spot for gaming and having some extra cores for other tasks.
    10: more than enough for gaming and other tasks. Start to get diminishing returns for your money if you only game on your PC.
    More than 10: waste of money for gamers.
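
To see where your own machine falls in the list above, Python can report the logical CPU count and, on Linux, the subset of cores this process is actually allowed to use:

```python
import os

# Hardware threads visible to the OS; on an SMT CPU this is typically
# twice the physical core count (e.g. 16 on an 8-core 5800x3D).
logical = os.cpu_count()
print(f"logical CPUs: {logical}")

# On Linux, the scheduler may restrict a process to a subset of cores
# (cgroups, taskset); sched_getaffinity reports what's actually usable.
if hasattr(os, "sched_getaffinity"):
    usable = len(os.sched_getaffinity(0))
    print(f"usable by this process: {usable}")
```

Note that `os.cpu_count()` counts logical threads, not physical cores, so an "8-core" recommendation above corresponds to 16 here on an SMT-enabled part.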

  • @stephan4923
    @stephan4923 Před 7 měsíci

    The reason this video holds true is that cores are being added to CPUs faster than a lot of devs are learning how to make use of them. The Vulkan API makes use of all 20 threads on my 10850K, but the 13900K has a lot of cores, and they aren't identical. It's too bad we can't see the future.