AMD Ryzen 9 3950X Review: Premiere, Blender, Overclocking, & Gaming CPU Benchmarks
- Published: Nov 13, 2019
- We're reviewing the AMD Ryzen 9 3950X CPU today, a 16-core, 32-thread AM4 CPU. We're benchmarking the R9 3950X in Premiere, Photoshop, gaming, power, and overclocking.
Sponsor: EVGA RTX 2080 Ti XC Ultra (Amazon - geni.us/tebmP)
AMD R9 3950X LIQUID NITROGEN OC LIVESTREAM AT ~4PM EST 11/14/19 ON THIS YT CHANNEL.
Article: [publication on a one-day delay]
Find the AMD R9 3950X on Amazon: geni.us/UtirpBU (will appear in this list when made public)
Find the Intel i9-9900K on Amazon: geni.us/0YUu
Find AMD's R9 3900X on Amazon: geni.us/kcnNu5
Find Intel's i9-7980XE on Amazon: geni.us/2OU8Bg
Find AMD's R5 3600 on Amazon: geni.us/zBQMOj
AMD's 16-core CPU launches today, following up the 1950X from a few generations ago with what's technically a "mainstream" CPU. Now, of course, the price point doesn't make it mainstream, but the platform does. A 16-core CPU in a non-HEDT socket is new, and that's what we're dealing with today. You lose quad-channel memory, but it's still AMD's new Zen 2 architecture in its maxed-out chiplet configuration. In today's testing, we're focusing on power consumption benchmarks for the AMD Ryzen 3000 R9 3950X CPU vs. others, gaming benchmarks for the 3950X vs. the 9900K, 3900X, 3600, and more, and production benchmarks. The production benchmarks include Blender, Adobe Premiere (best video editing CPU tests), Adobe Photoshop, and decompression via 7-Zip. Overclocking testing was also done, where we noticed better bins than the 3900X and high achievable all-core overclocks. We'll be overclocking the 3950X with liquid nitrogen later on launch day.
For the motherboard, we used the Gigabyte X570 Master with F10c and F10a, but performance testing was done on F10c (the latest publicly available BIOS as of 11/14/19). Find the X570 Master here: geni.us/3CrEt - the 3950X can work on B450, X470, and X570, provided the VRM is sufficient.
We are working with Eden Reforestation Projects to plant at least 10 trees PER ITEM purchased from the GN store during November: store.gamersnexus.net/ (if you buy 5 items, we'll plant 50 trees!) So far, we are over 50,000 trees planted between GN donation matches and GN store orders!
Like our content? Please consider becoming our Patron to support us: / gamersnexus
** Please like, comment, and subscribe for more! **
Follow us in these locations for more gaming and hardware updates:
t: / gamersnexus
f: / gamersnexus
w: www.gamersnexus.net/
Editorial, Testing: Steve Burke
Testing: Patrick Lathan
Video: Andrew Coleman
Links to Amazon and Newegg are typically monetized on our channel (affiliate links) and may return a commission of sales to us from the retailer. This is unrelated to the product manufacturer. Any advertisements or sponsorships are disclosed within the video ("this video is brought to you by") and above the fold in the description. We do not ever produce paid content or "sponsored content" (meaning that the content is our idea and is not funded externally aside from whatever ad placement is in the beginning) and we do not ever charge manufacturers for coverage.
Check out part 2 of our DIY AMD NAS, ft. Level1Techs! czcams.com/video/SqaAmVN4J4A/video.html
Can you try disabling half the cores or smt to see if you can get higher clocks for gaming? 16 threads should be enough for gaming.
Gamersnexus it is contemporary when it comes to equal.
Could you test 3950X vs 9900KS (delidded), both OC and with tightened timings, in gaming? And gaming + streaming? Would love to see an absolute best case scenario (without LN2) from both teams, for enthusiast PC gamers.
Thank you for all the amazing content!
Yo GN, Rockstar says it added a -cpuloadrebalancing launch option to RDR2 to help with stalls on 4- and 6-core CPUs with the Nvidia driver. Maybe you could try and see if this option exists in GTA and see if it helps. Anyway, the patch is out and it seems to help. Not saying a retest is in order, just maybe you could mention your thoughts in an Ask GN thing or something.
And it’s released finally... no more slack!!!
Can't believe that 3 years ago the best mainstream processor was a 4-core/8-thread 7700K; now we have 16 cores from AMD.
Moore's law is back!
I'm not sure you can call 750usd mainstream
That's my CPU
Allah praise AMD
@@KillBoyUK It's called mainstream simply because it's still a consumer class CPU
I was disappointed that you didn't perform a "drop test" on the CPU. I guess I will have to watch the LTT video for that.
Won't be disappointed
@@filipantoncik2604 The only thing Linus hasn't dropped is his high pitched voice
@@brazeiar9672 big burn
Spoiler* he drops it lol.
@@SirNickyT really?
Please not.
26:09 This is just amazing. While having twice as many cores as the 9900K, 3950x pulled only 30w more at most. Simply incredible.
It's hilarious to see how a 16-core chip at stock still consumes less than Intel's 9900KS at stock.
@@maybenot7298 dude chill lol this thing basically matches the 9900k in single thread performance and has literally double the cores
quit your whining and bootlicking, nobody cares
@@maybenot7298 "average user" well we need several of those workstation for work and Intel is extremely slow in renders
@@maybenot7298 LoL @you if you think this processor is for the average user
@@maybenot7298 and an average user won't pay over $300 for a cpu
@@maybenot7298 AMD has a bigger cock. what can ya do
Within margin of error of the 3900X in games. I'm glad I didn't hold off and pulled the trigger on my 3900X. I personally don't do enough non-gaming workflow to justify the price hike. Both chips are just monsters. I couldn't be happier.
Same. I was worried I'd be upset when - not 2 months after I got the best AM4 part ever, ever - another, even bester part comes out for a small amount more money.
Almost had preemptive buyers remorse. But now I don't!
@@joek81981 Seeing that the 3950X is trading blows with the 9980XE in multi-threaded workflow, I'm really curious now to see what 3rd Gen TR brings to the table.
What a game, right? I say it a lot here in the comments, but it deserves repeating.
This is a golden age for computers, people. Great time to be a PC gearhead.
Personally, I'm very interested in disabling an entire chiplet to try mimicking an intel 8c/16t for some games and apps; some things don't like the zen arch
@@aj0413_ True but even in those fringe cases the difference isn't that massive esp if not comparing it to the 9900K/KS
I love that all of the reviews came out seconds from each other, but I came to you first, Steve.
steve> all
Of course we come to Steve's first, duh!
@@sdj33 both steves > all
Watching GN last.
I'm the save-the-best-for-last guy.
COMMITTED FANS LETS GO!
I would have liked to see Intel HEDT power draw also in the chart, to demonstrate how much worse they are in performance/watt. Maybe next time.
We'll have it in X-Series benchmarks for the 10980XE.
I saw where the 9900ks was 400+Watts total system draw and the 3950x was around 260+Watts total system draw.
LTT did a test like that. Check their vid
Holy smokes is that correct?!? 400 vs 260?
@@coachnutt61 HOLY mother of power draw O.o 400 watts?! now imagine the heat generated.
Y'Know, even though the introduction is very factual and "dry", I can hear the anticipation and excitement from you!
This processor is so cool, because AMD, as the underdog, not only made something competitive, they made something for a market that definitely exists and pushes huge boundaries, leaving the ball in everybody else's court.
TL;DR - Intel's entire HEDT range is now pointless, thanks to AMD and this desktop CPU
Unless you specifically need extra pcie lanes
Well, 9980XE was consistently above 3950X. In gaming also. If someone buys 9980XE etc and doesn't OC then wtf.
@@TheNaitsyrk $2000 vs $500 AMD , as i said, makes HEDT pointless
@@simonh317 lel
@Warm Soft Kitty only if you buy 1000+ processors.
Steve, you get the first click my friend. Trust your opinions over all others!
@@LycanWitch haha I watched LTT right after this one. Sometimes LTT does some wacky things in their reviews which is cool and entertaining but I watch GN first to get his almost as unbiased as they come opinion before I get all hyped up by Linus (no disrespect to papa Linus either btw)
Thank you GN for always providing unbiased and informative videos, appreciate the time and effort!
Thank you for including the 7700k on your charts. It just came out in 2017, but I can hardly EVER find it in any other channel's comparison charts.
It's a big help for those of us poor creatures who needed an upgrade right before the Core Wars started.
This old man remembers when the phrase "wintel" was coined, the king is dead, long live the king!
@T R The problem with Intel is their entitled prices. Even when AMD was kicking their ass with the Athlon 64, those Pentium 4 oven machines were still sold at high prices.
I believe that entitled pricing policy comes from their high opinion of their own name (a little like how Apple thinks they can charge to hell just because their logo is on a product).
So people kind of feel happy when AMD is kicking their ass, because they need some humility lessons, so they start pricing at what they actually offer and stop thinking their shit is made of gold.
@T R Waaaaay ahead too.
Right up until the FX-60/70 started demanding really big money. :(
Then I was sorta happy to see the Core2duo/Quad ect come out.
@T R
I was still a basement nerd with a socketA Barton 2500/3200 when those benchmarks were getting done.
My first big boy system was a Phenom II 940 w/ a 4870 1-gigger... what a joy that was for its time. It lasted through 3 different video cards and then spent 3 years serving mom movies on demand at her house before my brother turned it on w/o the heatsink one day while servicing it. :( RIP.
"Let's close this out with Total War." - I thought that was what the entire video was about.
@@imadecoy. Nah. Total War is just poorly optimized. If a TW game could make use of 16 cores, we could have 20,000-unit battles.
@@imadecoy. then why do you watch them if you don't agree with their testing?
@@imadecoy. I think he sarcastically referred to this cpu as being the beginning of a total war against intel
@Dex4Sure The Linux versions use Vulkan...
_Oh boy, how mistaken you are!_ That was just a grazing shot, although it's a pretty lethal one already - we'll see if it's a kill.
If not, there ought to be some bloodbath later in the month…
And even *if* it wasn't so… _Phew._ Jacking up cores yet again, The Ripper™ is threatening to come for a third (and probably final) time and will most definitely kill the heart of Intel's HEDT pretty fast. It's just that Intel had to decide whether or not to take some narcotics for the sake of numbness beforehand - or actually attend TR3's bloodbath in person. Seems with that price cut Intel issued lately they just took the pill, to make it less bloody.
Since AMD is still carrying their youngest baby yet - and it seems to have grown huge!
Damn, what happened? Subscribers were like 20k, now it's so big! Congrats GN
Thank you for adding the R5 1600 into the benchmark for RDR2!! I just bought it and am running a 1080ti and have been worried about the performance but I’m happy to see I’ll be able to achieve 60fps easily with minimal cpu bottleneck
Am I the only person that makes the GN open noise along with it at the start of every video? I can't resist it. I've tried.
I'm amazed you guys talk about all core OC, but don't expand on the potential of Per CCX OC to get a good blend of fast cores and slow cores, then use Process lasso to set affinities so all background tasks run on the slow cores. That's what I do on my 3900X 4.5/4.4/4.375/4.375 at 1.331V LLC3 (1.306 vdroop) and is golden!
Not that I do, but you can also soft disable SMT in Process lasso. This program has become very relevant now with high core count CPUs
Yes, could use a benchmark of this solution vs hardware enablement. Especially now with those Windows 10 1909 CPU scheduling changes.
You need to put a resume together and hit buyers of new Tencent Epyc.
How interesting, so you can have high-power cores and low-power cores, similar to a phone CPU?
I want to see how this benchmarks
Sacco Belmonte I don't really trust Process Lasso. I feel like people think it's just some magical program that will instantly set affinities and increase performance.
@@fardnia9434 It does no harm. I have been using it since I got my 3900X and is more benefit than anything else. I don't use ProBalance though.
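[Editor's note] The affinity split described in this thread can be sketched in Python. This is a hedged sketch, not Process Lasso itself: the 24-logical-CPU count matches a 3900X with SMT, but which logical ids map to which CCX is an assumption (check your topology with a tool like Ryzen Master or hwloc), and `game.exe` below is a hypothetical foreground process name.

```python
def split_fast_slow(total_logical=24, fast_logical=8):
    """Partition logical CPU ids into a 'fast' set (the best-binned CCX,
    e.g. the 4.5 GHz one in a 4.5/4.4/4.375/4.375 per-CCX OC) and a
    'slow' set for background tasks.

    ASSUMPTION: the first `fast_logical` ids belong to the fast CCX;
    the real mapping depends on your OS and BIOS enumeration.
    """
    fast = list(range(fast_logical))
    slow = list(range(fast_logical, total_logical))
    return fast, slow

# Applying the mask could then look like this with the third-party
# psutil package (commented out so the sketch stays dependency-free;
# Process Lasso does the same thing through its GUI):
#
# import psutil
# _, slow = split_fast_slow()
# for p in psutil.process_iter(["name"]):
#     if p.info["name"] != "game.exe":  # hypothetical foreground process
#         try:
#             p.cpu_affinity(slow)      # pin background work to slow cores
#         except (psutil.AccessDenied, psutil.NoSuchProcess):
#             pass                      # system processes we can't touch
```

The upside of doing it in a script rather than per-game profiles is that everything except an allow-list gets pushed to the slow cores automatically.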
The single core boost behavior is much improved since this video came out. Kind of amazing actually.
Great video; very knowledgeable, and you speak very well about the material!
What about streaming? CPU encoding while gaming is a good reason to invest in high core count.
I can easily render videos with a 4-core CPU, it just takes 3x the time.
But when it comes to streaming, I definitely can't use x264 encoding at a decent quality for lack of cores :P
Dropping SMT would make more sense IMO than to drop some cores for those troublesome games.
3950 without X is still possible
Dropping one chiplet/CCD (8 cores) would make sense to remove inter-CCD communication over the infinity fabric.
@@ole7736 this. On a monolithic die, disable SMT. On multiple chiplet design, disable a CCD to reduce latency.
@@TheVillainOfTheYear Why I'm heavily eyeing switching to be honest
@@TheVillainOfTheYear We can clearly see from the benchmarks in this video that the 3900X is often much faster with SMT off (beating the 3950X in Civ 6 and GTA V), but for some reason they didn't try the 3950X with SMT off.
Thanks AMD for progress!
I got ryzen 5 2600x and i love it so much
Fantastic content as always!!
The real lesson here is that developers seriously need to code for multi-core, multi-threaded CPUs now.
It's looking like we're still in the dark age with single core frequency being so important.
Nobody took AMD seriously when they started upping the core count. Now that Intel has had to follow suit to play catch-up I bet we'll see devs making changes.
John Terpack I really hope so!
They already are but it takes time for the cycles to come to fruition. The next few years will see a lot more shipped games taking better advantage of core counts, and within 5 years it will be almost universal.
No point supporting more cores if optimisation is garbage. See cyberpunk 2077
80% performance is "not bad for half the cores". Gotta love Steve's understated commentary. :)
More quality content from the Bob Ross of the tech world. Fascinating insight, a reassuring voice and, of course, wonderful hair 😉
27:49 THANK YOU Steve! At least one objective reviewer left.
i did it anyway lol
This video needed Snowball as this CPU looks like it's the cat's meow!
GN is more technical. Linus is more enjoyable. I feel that with Linus I am watching a story. It's engaging. There's a progression and excitement. At GN I feel like it is a technician giving a review, which is also good. Different approaches, but it's interesting to compare.
It's the difference between a JESUS and a Maple Leaf.
Better to go hardcore ;)
Sovietbird agreed. Those 3 are THE channels to watch to get a balance. There are several others that go way more technical, or way more niche on gaming or rumors, but LTT + HUB + GN usually covers everything worth seeing.
Aurora Niloufer what about me Hahahah guys crack me up slackers
Hair > earrings. Sorry.
I liked when you put timestamps to the videos :)
Excellent review!!
"AMD is Equal Dog"
- Tech Jesus 2019/11
AMD is up to par or better in performance BUT Intel still has a much much bigger bank, so it's gonna be a while until AMD is really equal dog...if they can keep this up.
@@xxJOKeR75xx yeah they'll need to keep making faster chips for at least another two years.
ROFLMAO
TECH JESUS! LMAO
Virtual Machine usage could really make this my next CPU. I need to have fast VMs running 50% of the time.
Great review, good job guys!
I was hoping to find a Handbrake benchmark, though. Would you please add that back for future tests?
Please consider an overclocked 8700K at 5 GHz for your future CPU testing.
For the future, many of us 6c/12t gamers want to know where we are standing in 2019/20. Thank you for your work :)
Holy shit I had no idea this was coming out today
Shitty hole
Great review as always. Temperature in the frequency graphs would be nice though... Just for reference.
AMD is now the great equalizer, which is a good thing for everyone! Great video as always, Steve and Gamers Nexus; y'all have the best info, keep it up!
I'd love to see OC results with SMT off. Just out of curiosity. Less work means less power means less temperature means lower voltage needed means (somewhat) higher frequency.
Steve, I think you and Wendell need to get together again and do a deep dive on both your 3950X chips; his results show his stock 3950X performing on par with his 4.8 GHz 7980XE, meanwhile yours seems to be way behind.
There must be some major silicon variance involved or some weird HW/SW issue at play here...
I would bet it's the update to the Windows scheduler that released recently allowing Wendell to have his chip in the lead.
@@Bobis32 in that case it might be a good idea to test with 1usmus' ryzen power plan as it is supposed to give similar performance to the new windows update by partially fixing the scheduling issue
@@Bobis32 We are testing with the newest Windows version.
Zen 2 is very sensitive to a lot of things, e.g., temperatures, RAM/IF clock speed. Different cooling, different cases, different RAM could all make these results vary noticeably.
Sounds like an even better reason for another collab vid then, cause there seems to be a pretty big performance Delta here
Dang, that transition to your store at 12:51 would impress even Linus.
As always you are best.. 😍
"Blender Render", say that 3 times fast. If you cant drink some nitrogen...
Render Blender Rlender Bender Blender Render
That That That
@@tommytomthms5 I think he meant "that" not "That"
@@MalePifko He was clearly joking
Blenderer Rendererer
I have a few questions, so forgive my laziness, as I'm sure these are mentioned in the video.
Is this with the newest windows update? From what I read it comes with improvements to the scheduler.
Will you bench some games with SMT turned off? 32t is obviously overkill for games.
Have you seen the custom power plan by 1usmus? Perhaps it's not needed anymore with the newest windows update.
first impressions i've seen around seem to show that the upcoming update doesn't really have an effect in the real world, whereas the 1usmus powerplan seems to actually help with 3-5% gains in single thread workloads.
The whole cpu is overkill for games anyway.
I'd love a separate video about 3950x performance tweaks. Just the right stuff for enthusiasts! :)
Can you guys add the HW News time bars to your benchmark videos moving forward? And maybe teardowns? Not so I can skip around, but to make rewinding easier. The time bars are literally my favorite part about your channel, followed by the OC content, the hardware teardowns, the Jensen teardowns, and then Snowflake.
I hope you could test new Arctic liquid freezer ii coolers someday. Nice quality review as always keep up the great work tech jesus
Thank you, Tech Jesus
I'm interested to see some numbers in streaming and Davinci Resolve.
Would be nice to see Digital Audio Workstation benchmarks. Thanks for the review.
I really hope AMD is able to get their clocks up as time goes by and newer generations are developed. It would be insane to see the 16-core chip run at 5.0 GHz all-core some day.
I remember when Intel charged:
2017 $1000 for 8 core CPUs
2019 $950 for 18 core CPUs
2020 $500 for 18 core CPUs
In the future at this rate, it will happen.
In 2020 AMD will put out more cores, and Intel will have to compete or lose more market share.
The best part is that games will start going all out with support for high thread counts.
They can afford to lose a bit more marketshare, and most likely won't (much) in the more profitable ones. AMD is pretty much at max production capacity already and has no real way to ramp it up. Something Intel is investing in given the shortages.
Not a good thing for consumers.
So it's good to keep AMD there to keep pressure on Intel's processor pricing. Buyers winning!!
Not good for consumers.. LoL
@@Calyptico That's why Intel has so many paper launches.
12:55 That was one Linus Sebastian-like self-promo segue. Duuuudde
You mean maple syrup smooth transitions
ty for setting the standard for the most scientific reviews on youtube among tech reviewers
I wish audio work would get mentioned in these tests. Indeed, music production is probably one of the best tests for a processor, as it stresses it to the max but in realtime performance, which when working with music needs to be glitch-free at all times, no clicks or pops, unlike offline renders which just crunch the numbers.
Fan settings:
Red fans - excited
Blue fans - salty
^PSU brand: GENERIC
Me: "s***t brown" fans. Thanks Noctua!
No compile test? I would have loved to see how Zen 2 performs in compilation
We dropped the GCC benchmark because it's just a cache benchmark. If we can figure out a better one, it'll come back.
I think there is a series of articles on XDA Developers for compiling AOSP (and possibly other things?).
@@GamersNexus personally I'd be interested in compilation of say, tensorflow. Something like this hub.docker.com/repository/docker/tarfeef101/tflow-builder
@@GamersNexus Is that GCC test a realistic test though? If so then it's still relevant. A large cache is a potent way to offset latency; if AMD is going down that route then the fact that GCC benefits from such a design decision is surely the whole point of why the large cache is there. Seems a bit odd to drop a test because it sits in cache, when enabling a task to sit in cache is the whole reason why the cache is so big. To extend the logic to the extreme, if AMD released something with 16GB L1, would all tasks be dropped? ;)
@@GamersNexus It seems really hard to find compilation benchmarks for CPUs. For this CPU in particular, I think it's important to have them, since people are buying this CPU for work and would love to know what really matters. A caching benchmark to me seems pretty awesome. It lets people know that clock speed is not the only factor. I write a lot of high-performance software, and one of the ways we make faster programs is cache optimization. This is also how console games squeeze more performance out of known hardware. Here's a video if you want a good example of how game devs go to the extreme for performance (note it's from a C++ talk). czcams.com/video/rX0ItVEVjHc/video.html
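[Editor's note] A toy illustration of the cache-optimization point made above. Traversing the same matrix row-by-row keeps memory accesses sequential, while column-by-column jumps around; in C the strided walk can be several times slower, and even in pure Python (where lists hold pointers) the row-major pass is usually faster. The size N is arbitrary.

```python
import time

N = 1000
matrix = [[1] * N for _ in range(N)]  # N x N matrix of ones

def sum_rows(m):
    """Row-major traversal: sequential, cache-friendly accesses."""
    total = 0
    for row in m:
        for v in row:
            total += v
    return total

def sum_cols(m):
    """Column-major traversal: strided accesses that jump across rows."""
    total = 0
    n = len(m)
    for j in range(n):
        for i in range(n):
            total += m[i][j]
    return total

t0 = time.perf_counter()
r = sum_rows(matrix)
t1 = time.perf_counter()
s = sum_cols(matrix)
t2 = time.perf_counter()
# Same answer either way (N*N); only the access pattern differs.
print(f"rows: {t1 - t0:.3f}s  cols: {t2 - t1:.3f}s  equal: {r == s}")
```

The point of a benchmark like GN's GCC test is exactly this: a big L3 turns the slow pattern into a fast one, which is a property of the chip worth measuring.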
Competition is good for everybody. I heard Linus say it years ago and I repeat it all the time because it is just so true.
I would love to see you arrange the FPS charts according to the "0.1% low", which is a far more accurate measure of smoothness than average FPS. In fact, I wonder why AMD didn't use this metric instead of average FPS. The 3950x appears far more competitive against even the best Intel CPU if we compare the "0.1% low" performance instead.
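[Editor's note] The "0.1% low" metric mentioned above can be computed from captured frametimes roughly like this. This is one common definition (average FPS over the slowest 0.1% of frames); GN's exact methodology and that of tools like PresentMon may differ in details.

```python
def percentile_low_fps(frametimes_ms, fraction=0.001):
    """Average FPS over the slowest `fraction` of frames
    (0.1% low by default). Frametimes are in milliseconds."""
    if not frametimes_ms:
        raise ValueError("need at least one frametime")
    worst = sorted(frametimes_ms, reverse=True)  # slowest frames first
    n = max(1, int(len(worst) * fraction))       # always keep >= 1 frame
    avg_ms = sum(worst[:n]) / n
    return 1000.0 / avg_ms

# 999 smooth 10 ms frames plus one 50 ms hitch: the average FPS barely
# moves (~99.6), but the 0.1% low is 20 FPS - the stutter you feel.
```

This is why sorting charts by 0.1% lows rewards consistency: a single hitch drags the metric down even when the average looks fine.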
I thought the max advertised single core boost was 4.7 GHz, not 4.6?
Yes, it should be. I checked the AMD site as I wondered if it had changed to 4.6 following all the boost fiasco, but it's still 4.7 per AMD's website.
It does
He just hit bad in the silicon lottery; other people are getting 4.7 pretty easily.
@@JROCThaGreat That may be true, but the max boost listed on AMD's website is 4.7 GHz.
Are there any reasons except "low interest" for not doing compilation benchmarks? The increase in L3 caches has been making me happy, since I have found that to be one of the most important parameters.
Say, a Linux or chromium compilation would make me very happy.
Agreed. For Tech Jesus, Adobe is important, but software developers, virtualization engineers, and even statisticians use CPUs like this. Intel also has other advantages, like Intel RST (Rapid Storage Technology) and RAID. Someone doing video work will want many TBs of storage with Adobe, and if you have 4 SSDs, Intel has an advantage in I/O and PCIe lanes, as the 3950X has cut them.
I know this would be a stretch request, but I'd love to see some workload tests on things beyond imaging-type work. I am a systems engineer; I design FPGAs and use a lot of cross-compilation for ARM. Vivado and make use a lot of threads and cores when building for their respective architectures. I'd love to see how processors like these perform on workloads like these.
Thanks for all the work you do GN.
It tends to be hard to find good benchmarks for engineering loads. One reason I suspect is that all the software costs something in the 4 digits if you get a good deal. If Xilinx published a benchmark, maybe you could convince some reviewers to use it. I'd love to see HFSS or other FEM software benchmarked, since those results tend to be somewhat transferable between packages.
Good honest review 👍
Got a question for you, Steve: When gaming alone, that's cool. But what about gaming + Streaming / Recording H264? Main example being twitch streamers, but also gameplay youtubers.
How much impact does live encoding add to a gaming experience?
Yeah GN used to have that gaming + streaming test. I hope they will do the streaming test.
They probably prefer two rigs or nvenc in those scenarios tho
will you make streaming benchmarks?
I think this may be a really good CPU for gaming + streaming to Twitch, and I liked your dropped-frames-for-both-sides graph the last time.
Ideally you would find a way to restrict gaming cores to one CCD and streaming cores to the other. E.g. with process lasso. That way you prevent expensive core hopping and inter-CCD talk. Perhaps you could run an streaming VM (virtual machine) on 8 distinct cores, which would be easier to configure (process lasso needs to be set up for each game individually, iirc).
@@ole7736 process lasso is great, but a huge pain in the butt to configure
@@bas2085 That is true. What about a VM (virtual machine) for streaming (hardware pass through of a capture card) that hogs one CCD? Should work for all games/titles without any configuration after everything is set up.
If you're streaming just use an Nvidia GPU with nvenc. There is very little performance loss using it even when using a "weak" CPU.
@@Stagmuffins Sure, for low- to mid-tier streaming setups that is a very good solution. Professionals probably want higher quality.
R9 Ryzen processors are a beast, especially for compute workloads.
Still a happy camper with my golden silicon R7 1700.
Hey Steve, your Hair looks awesome in this video again. ♥
Hardware Unboxed noticed that Adobe wasn't really using four of the cores.
Adobe considers AMD to be subpar for its software and doesn't optimize for it. I have a link to a thread where they completely blow off the 3000 series as being viable for workflow over Intel. It is crazy.
@@TristanPope If you have it, why not just put it in here?
@@pantzman I didn't wanna throw links on someone elses video but here ya go: community.adobe.com/t5/Premiere-Pro/iPhone-11-Pro-4k-60FPS-HEVC-will-not-work-in-Adobe-Premiere-CC/m-p/10736475#M237073
Tristan Pope this is ridiculous for adobe to pull this. Read through the whole thread. It’s amazing they don’t even have one Ryzen system. You’d think adobe would want to save money and switch their test models to cheaper yet faster Ryzen systems. Imagine a company being that stuck in their ways.
Adobe uses Intel's compiler, and 95% of corporate users stick with Intel, who are their customers. Also, Apple is an Intel shop, with most Photoshop users being Mac fanatics. Intel's C compiler is on both Mac and PC, which makes porting easier.
I'm curious how the 3950X would hold up against a 9900KS in gaming in a real world scenario when you got a handful of launchers, discord, bittorrent, web browser with a bunch of tabs and other stuff running in the background.
BitTorrent? Why would gamers be downloading Linux ISOs?
Looks like someone is a torrent user here.
more cores is better
ahh some new tech to review..refreshing
3900X is still a beast. My Blender viewport with cycles render is basically in real time, setting the maximum core render viewport to the same number of threads (24). For real time stuff like modeling, UV management, texturing and sculpting a good CPU like Ryzen 9 series and enough RAM is sufficient. For AMD / Linux users: check Core Control utility. It is very useful with CPU profiles (not only GPU) and if you set your Ryzen to "ondemand" consumption, it will work in a very optimized way for your specific workflow. You won't regret it.
Wait, I'm confused. I thought it was out on the 25th, but Steve said it's getting released today
Most likely made the video in preparation for the 25th launch but AMD lifted the embargo early.
@@MrGlutting That would make sense, you're right
This is probably the "soft launch", which is just a nonsense marketing term for "we let reviewers and media release reviews".
Steve lives 11 days in the future
Interesting: Zen 2 tends to be behind in average and maximum FPS, and then kicks ass in the 1% and 0.1% lows. Funny.
Higher overall latency but fewer bottlenecks, so you end up with closer highs and lows.
Which gives you a way smoother experience. Even if Intel did say 20-30 fps more than the AMD system it would feel smoother overall.
@KingOfGorillas Yes, it will be interesting indeed. It will be more like "ok, ok, we've been jerking around long enough and let AMD have their fun, but now it's time to get serious and send lil brother back down to the minor league." I imagine it will be something like that :)
@@quajay187 Doesn't look that way.
@@quajay187 hope that happens before they reuse that 14 nm LUL
Can you include tests for streaming/recording while gaming? That seems like a really common use case for processors like this. Would love to see how it compares to stuff like 10900k
Great benchmarks
"We're the hipsters using blender"
But still not using 2.8
LOL
ikr. how disgusting lol
That makes it even more hipster🤔
Are all these benchmarks done with CPU-exploit fixes in place? Eg: Meltdown, Spectre, Zombieload.
If memory serves, Steve has stated in an older video that all tests are done with all AMD exploits and Intel exploit patches and fixes installed up to the date of the test.
@@ps2232 I see. So it's likely that all the catching up AMD has been doing, is at least partially due to Intel's fault at being more vulnerable, requiring patches that lower their CPU performance via software.
Please add DaVinci Resolve Studio in your productivity suite because its approach to CPU usage is uniquely different than Premier and is one of the fastest growing video editing tools out there
First review I always watch, because you cut through the crap and don't skimp on numbers. These other channels just show the bare minimum of benchmarks and pad the video time to 10 min.
Hey, no tape on the CPUs this time!
finally my 480mm loop will actually be worth it, maybe I need to upgrade to dual 480 to be overkill.
Wtf are you going to do with those custom water-cooled rigs? Running an Earth simulator?
480mm dedicated to the CPU should be plenty.
yeah this should put your 480mm to work. Gonna be fun
You guys missed mentioning how much faster the desktop 3950X is than the workstation Threadripper 1950X that we drooled over not that long ago. Astonishing!
Hey guys, I'd love to see an updated list of performance with the updated Adobe suite. Maybe Adobe 2018 vs. 2019.
Adobe and AMD need to spend a bit of money to optimize their software for the hardware - otherwise competing products will impact Adobe going forward...
If you read the comments, Adobe sucks ass.
It would be nice to see what kind of improvements the 3950X would get with Ryzen-optimized RAM speeds, i.e 3600MHz CL16 or 3733MHz CL15, as I imagine anyone spending $750 USD on this CPU would have no issues spending extra for the best RAM.
Exactly; they've even made videos saying to use 3600 MHz at the least for Ryzen, not sure why 3200 was used.
This CPU is so damn good that I wish I had a use for it. My next CPU upgrade is definitely going to be AMD.
13:00 -- Remember, The GN Logo Blender Render is getting better in November/December!
🔥AMD ALL THE WAY!🔥
When running these tests, are you using the latest BIOS versions for Intel’s CPUs? I know there is performance loss to patch the various vulnerabilities
Ty for including the 7700k.
would like to see some multitasking benchmark suite added, or some heavily modded game benchmarks that are notorious CPU hogs. Something I do frequently is 25-30 chrome tabs open across 2-3 windows on second monitor while running a game on primary monitor. I need to be able to alt-tab frequently and have a chrome canary instance set up with flags to disable tab sleeping so it always acts like I have that window in focus for work stuff.
Only when Intel is overclocked and has more cores is it on top 😂 I don't like charts where overclocked CPUs are shown, because only a very small amount of people will overclock and no companies will overclock CPUs.
Most people who need performance do overclock, and they include OC potential when comparing CPUs.
No company? My SI (Digital Storm) sure did overclock for me
I don't know anyone that doesn't overclock. I guess it's who you kick it with.
In a high level enthusiast part, it's definitely a valid comparison. People who want the best performance are going to do it. Now lower parts, no. I doubt the extra frames from overclocking some i5 or a 3400g are gonna come into play but on these 500 dollar and up bits, it matters.
I just wish the OCd parts were marked in a more quickly visible way. There is a lot of data in these graphs to take in rather quickly
@@cleverja Pretty much everyone I know is on Intel/Nvidia systems and they don't overclock. I don't know any overclockers in real life.
I feel sorry for Steve. Linus released a video within the same minute as Steve, yet again Steve only has 600 views whereas Linus has 6.2k views already! TBH I prefer Steve's content tho 😂
kaelyn Linus has 10x the subs so the math works out.
Steve started from scratch, Linus started from the top, big difference.
@@eurobeaner More like 15x. Linus has over 9.5 million to GN's 662k.
I watched Linus first myself; simply put, I didn't have a lot of time this morning and Linus is usually slightly more brief, and about 5 minutes can be cut out by skipping for 30 sec-1 min after the segues :P
@@DarkLinkAD Linus did not start from the top. Wtf is wrong with you???
Gamers Nexus, honest and professional. Thank you!
👍 for drawing attention to non-zero-based graphs.
Let's read graphs!
Intel (9900K) : Gaming
AMD (3950X) : Render
It seems like the bottleneck is most of the time between the monitor and the chair. (see below)
AMD: Everything
@@stephenvsawyer...sigh
What did I say already
For one type of game this makes a great choice: Chess.
Chess, yeah, but I doubt it could run Solitaire at 1080p 60 fps.
Chess engines will use all the cores you have, but they never use chess engines when testing; hell, engines are even good for testing overclocking stability! So sad they always ignore them...
Good video!!
Ever thought of doing a CPU comparison video with "mainstream" enthusiast cards priced between $300-$600? It would be great to see benchmarks of current high end CPU's with GPU's like 2070S, 5700xt, 5700, 2060S... Its a more realistic use case too because obviously not everyone buys $1200+ GPU's even if they buy high-end consumer CPU's!