Zen 2 Analysis - A Different Perspective

  • Published 16 Mar 2019
  • What if AMD cherry picked the best scenarios and Zen 2 actually sucks?
    ♥ Subscribe To AdoredTV - bit.ly/1J7020P
    ► Support AdoredTV through Patreon / adoredtv ◄
    Bitcoin Address - 1HuL9vN6Sgk4LqAS1AS6GexJoKNgoXFLEX
    Ethereum Address - 0xB3535135b69EeE166fEc5021De725502911D9fd2
    ♥ Buy PC Parts from Amazon below.
    ♥ NEW USA Store! - www.amazon.com/shop/adoredtv
    ♥ Canada - amzn.to/2ppgYsX
    ♥ UK - amzn.to/2fUdvU7
    ♥ Germany - amzn.to/2p1lX6r
    ♥ France - amzn.to/2oUAK2Z
    ♥ Italy - amzn.to/2p37Uui
    ♥ Spain - amzn.to/2p3oIBm
    ♥ Australia - amzn.to/2uRTYb7
    ♥ India - amzn.to/2RgoWmj
    ♥ Want to help with Video Titles and Subtitles?
    czcams.com/users/timedtext_cs_p...
    -- Video Links Below --
  • Science & Technology

Comments • 945

  • @adoredtv · 5 years ago · +430

    I had a real nightmare trying to render this video, took half a day and I finally had to cut it at the problem area 20:23 which is why there's a little repeat. ;)

    • @Alex-jk2qy · 5 years ago · +8

      Keep on doing awesome content Jim!

    • @dondraper4438 · 5 years ago · +3

      Love your videos

    • @Knowbody42 · 5 years ago · +8

      Maybe you need a 64 core Rome CPU to render it

    • @andljoy · 5 years ago

      @@Knowbody42 Nah, the 2950X and Radeon VII are the shit when it comes to Adobe at the moment; with some plugins you need that much VRAM.

    • @MazeFrame · 5 years ago

      @AdoredTV Seems to be a problem with Adobe. L1T has a video on the subject.

  • @bayanzabihiyan7465 · 5 years ago · +359

    Bruh... I've watched every single video since that original Polaris video sequentially, and I never once noticed a shift in your accent since I watch videos right after upload. I'm shook at your old accent.
    This made my day. Keep up the good work.

    • @fern3436 · 5 years ago · +18

      Ya I didn't realize I had been following him since the very beginning of his tech analysis. I remember finding that video and being impressed with the analysis and I have watched every video since. Adoredtv and Gamers Nexus are far and away the best tech channels on this site!

    • @sniperwolfpk5 · 5 years ago · +2

      Same here

    • @DJ_Dopamine · 5 years ago

      Same here also!

    • @fokjohnpainkiller · 5 years ago · +1

      How time has passed

    • @keralius · 5 years ago · +3

      Yeah, didn't realise I'd been a viewer for that long

  • @KirkKreifels · 5 years ago · +643

    My 3rd daughter was born today... and Jim posts a video on Zen... Good day😎

  • @Lemard77 · 5 years ago · +303

    Your voice in that old video sounds like a dwarf from The Witcher 3 xD awesome ^^

    • @CaveyMoth · 5 years ago · +12

      I think that Jim got too much helium or something.

    • @adoredtv · 5 years ago · +51

      @@CaveyMoth Yep, I used to speak in a high pitch because I thought my voice was too deep for most people to handle. Even now I probably speak at a slightly higher pitch than my natural voice.

    • @CaveyMoth · 5 years ago · +16

      @@adoredtv Whoa, so you weren't speaking in your normal voice? That's fascinating. My voice is low AF, too, and people have trouble understanding it in real life.

    • @adoredtv · 5 years ago · +41

      @@CaveyMoth I think I'm easier to understand now, also my pacing has improved a lot as well. Been living in Sweden for 3 years so I have to try to make myself understood on two fronts. ;)

    • @TheBcoolGuy · 5 years ago · +6

      @@adoredtv You live in Sweden?! :O I do too! I'm Swedish! Do you speak Swedish? What does it sound like?! Can you show what it sounds like in some video?

  • @robertstan298 · 5 years ago · +254

    Well, AMD may show only their best at presentations... But isn't it also true that Intel only show their FAKEST at presentations? (Industrial Chillers w/ pre-overclocked no-show CPUs, VLC video playback of "live" iGPU gameplay etc)? *LOL*
    Glad you mentioned that tho. :)

    • @syncmonism · 5 years ago · +30

      LOL, that cooling system was insane. Intel should have been punished for that >:(

    • @kimh9337 · 5 years ago · +12

      This is very true! But you should never measure your own success based on the failures of others! Kinda the same as saying that if you are fleeing from a bear, you only need to run faster than the slowest person ;) I'd prefer seeing AMD run faster than the fastest person, that would be something to brag about!

    • @robertstan298 · 5 years ago · +11

      Be sure to also check out Intel's GRID Autosport (or was it GRID 2?) iGPU demonstration... That was actually a VLC video playback from a few years back. The chiller fiasco may have been disingenuous as fuck, but the fake iGPU demo was a flat out LIE. And there are other examples.
      Let's not ever forget these things. Intel is the textbook definition of cronyism and failure of free markets. We need better technology leaders than these crooks.

    • @madd5 · 5 years ago

      LOL I think they should be sued for lying like that.

    • @BenjerminGaye · 5 years ago

      Please go somewhere with your whataboutism.

  • @thomashayes1285 · 5 years ago · +72

    Thank you for another extremely good video. Some thoughts:
    1) Those voltage curves are terrible for Polaris, no wonder people think it is such a power hog. Just doing some napkin math here, but power consumption typically scales linearly with clockspeed and the square of voltage. So looking at relative power consumption we have
    Efficient Polaris = 850MHz x 0.815V x 0.815V = 565 relative power consumption
    Shipping Polaris = 1266MHz x 1.120V x 1.120V = 1588 relative power consumption
    1266MHz / 850MHz = 1.489 and 1588/565 = 2.81. So AMD got 50% higher clocks for close to three times the power consumption. I think that shows just how hard they are pushing Polaris past its efficiency point. (Dunno if this is right, someone correct me if it is not.)
    2) It sounds like the 1.4GHz 64-core Epyc couldn't possibly have been what they showed at Next Horizon, unless they somehow got truly ridiculous IPC gains (which I doubt). So maybe the 1.4GHz chip is a lower-end model designed for efficiency and very low clocks, and there will be a 64-core part with much higher clock speeds, or AMD chose a particularly well-clocking chip for the event and overclocked it to frequencies shipping parts won't have.
    3) It wouldn't surprise me if TSMC 7nm has a voltage wall somewhere, so Ryzen would clock well up to a point and then just refuse to budge without extreme measures. I am saying this because it seems to me that more modern processes are that way, clocking well up to a point and then stopping in their tracks, whereas an older process would just bump the voltage slightly for every clock bump. GlobalFoundries 14nm stops at ~4GHz and Intel 14nm++ at ~5.1GHz, for example. So I think Ryzen 3000 will have a similar clock wall; whether it occurs at 4.3GHz or 5.3GHz, time will tell. (Again, people more knowledgeable than me, correct me if this is wrong.)
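The napkin math in point 1 follows from the usual dynamic-power approximation P = C·V²·f; a minimal sketch using the figures quoted in the comment (the constant C cancels when comparing two operating points of the same chip, so only the ratios are meaningful):

```python
# Sanity check of the relative-power napkin math above, using the
# dynamic-power approximation P = C * V^2 * f with C omitted.

def relative_power(freq_mhz: float, volts: float) -> float:
    """Dynamic power up to a constant factor (capacitance C dropped)."""
    return freq_mhz * volts ** 2

efficient = relative_power(850, 0.815)    # "efficient" Polaris point, ~565
shipping = relative_power(1266, 1.120)    # shipping Polaris point, ~1588

clock_gain = 1266 / 850                   # ~1.49x higher clocks...
power_cost = shipping / efficient         # ...for ~2.81x the power

print(f"clock gain {clock_gain:.2f}x, power cost {power_cost:.2f}x")
```

As the follow-up replies note, this model ignores static leakage and post-Dennard effects, so it can only be a rough guide.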

    • @master_andreas1202 · 5 years ago · +1

      1) The equation you used comes from Dennard scaling, which is no longer valid due to the complexity of modern MOSFET scaling.
      More at: en.wikipedia.org/wiki/Dennard_scaling

    • @thomashayes1285 · 5 years ago · +1

      @@master_andreas1202 I know that dennard scaling broke down a little over a decade ago, so smaller transistors are not necessarily more power efficient or faster. It basically takes away Moore's law's teeth. But why would that invalidate P = CV^2f?

    • @coopergates9680 · 5 years ago

      Is the voltage wall a FinFET thing (less of an issue for planar transistors), or does it worsen with shrinking node size?

    • @thomashayes1285 · 5 years ago

      @@coopergates9680 I honestly do not know. It could very well be finfets causing the wall. The only thing I know for certain is that the voltage/frequency curve keeps getting steeper.

    • @coopergates9680 · 5 years ago

      @@thomashayes1285 I almost think it might be nice to plot max stable clock against voltage (swap the axes), so that users know beyond what point increasing voltage yields insignificant or insufficient potential clock speed increases. The 'wall' would then be a horizontal line.

  • @MooresLawIsDead · 5 years ago · +60

    Jim, another excellent video. I have had multiple people on my channel call me crazy for suggesting 5GHz chips, and state that it's specifically crazy because VII proves 7nm only brings 20% higher clocks. Then I point out that a 20% clockspeed increase over the 2700X would mean a 5.2GHz 3850X.... and they stop arguing lol.
    The fact is it is conservative to expect 4.5GHz 16-core parts from AMD, and if you are optimistic honestly the sky is the limit on this launch (but I recommend conservatism). Like you say Jim - the most likely downside to Zen 2 is probably a segmented roll-out of parts over all of 2019, but I don't see that as horrible at all. Oh, and yeah wow your old accent is hilarious. Cheers!
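The 20%-uplift projection in this comment is simple arithmetic; a minimal sketch, taking the Ryzen 7 2700X's advertised 4.3GHz boost as the baseline (the "3850X" is the commenter's speculative part name, not a confirmed SKU):

```python
# If TSMC 7nm brings ~20% higher clocks (the Radeon VII argument),
# straight scaling of the Zen+ boost clock gives:

ZEN_PLUS_BOOST_GHZ = 4.3   # Ryzen 7 2700X advertised max boost
NODE_CLOCK_GAIN = 1.20     # claimed 7nm clock uplift

projected = ZEN_PLUS_BOOST_GHZ * NODE_CLOCK_GAIN
print(f"projected boost: {projected:.2f} GHz")  # ~5.16 GHz
```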

    • @adoredtv · 5 years ago · +7

      Cheers bud.

    • @defeqel6537 · 5 years ago

      GF's 7nm was supposed to have a pretty consistent frequency-power curve up to well over 5GHz, but it was also a superior process, so who knows where the limit is with TSMC's 7nm.

  • @mapesdhs597 · 5 years ago · +63

    14:04 - Actually it's probably running mostly out of L2/L3, because AMD is using a more complicated version of the test which should not fit into L1, namely the sphfract scene at a much higher resolution and a lot of oversampling. It may well be accessing main memory too, I'm not sure; on SGIs I deliberately ran C-ray/sphfract at higher resolutions and with high oversampling in order to ensure the test would hit main RAM to at least some degree, but on these modern architectures with all the caching and other stuff going on, grud knows what Rome is doing here, but it sure as heck isn't dependent on any relevant core to core communication. Presumably there's a management thread which collates the scan line render results back from the separate threads, but the returned data chunks are pretty small, just a few K even at a high resolution, so inter-core bandwidth is not a factor. My guess is, much like the tiny "scene" test benefits from residing entirely in L1, the sphfract test benefits on modern CPUs from being able to sit mainly in L2/L3, which these days is very fast. On SGIs one could use precise monitoring tools to discern what the CPU was doing, but I guess x86 doesn't have this (I don't know, beyond my knowledge).
    17:39 - The "scene" test definitely won't touch it, but sphfract at high res, etc. probably does (honestly not sure tbh). It's definitely hitting L2, but it could be that the way it's using L3 (if at all) is still very favourable for the demo.
    Jim, thank you so much for highlighting my comments on your earlier video, I'm glad that people will likely now have a better understanding of the nature and limitations of C-ray. It's an appealing test for AMD because it scales so well with threads (in a manner which the old CB R15 does not), but it's far from a real-world scenario, especially for rendering. For example, someone at a major US movie company told me that for their modern productions, a single rendered frame may involve pulling in many tens or even hundreds of GB of data over their SAN (hence the rendering on the CPU cores themselves involves a lot of data and thus accessing main memory), which means bandwidth and latency on their renderfarm are important, factors C-ray doesn't test at all. A different movie company in the UK told me their SAN can do about 10GB/sec, performance that is absolutely essential now they are frequently working with uncompressed 8K (can you imagine the memory demands of that? The guy told me they're about to move up to 48GB Quadro RTX 8000 cards because the 24GB of their existing Quadro M6000s is no longer enough).
    There's lies, damned lies, and statistics. Or as my old stats book says, people use statistics as a drunk uses a lamp post, for support rather than illumination. C-ray is *interesting* (that's why John wrote it), but my jaw completely hit the floor when I watched the Rome demo and there it was on the screen. That was like... Bugatti promoting their latest Veyron based on how fast one could fill the petrol tank. :D
    I did try to contact AMD to ask for more details of exactly how they ran their test, since disclosure of the precise compile command used to create the binary is supposed to be part of the test process (in order to be sure there's been no cheating), but I was unable to get a response (I'm a comparative nobody in the x86 space). I even added a new Test 5 to my C-ray page to match what I gather is the settings they used for the test:
    www.sgidepot.co.uk/c-ray.html
    but I'm loath to flesh out the currently empty table with any entries until I can be certain the settings are correct. Jim, if you have any contacts at AMD, can you give them a nudge? I'd love to hear from them, would be great to have Rome in the #1 spot and see how things pan out from there. 8)
    What's hilarious about all this though is that on the one hand if AMD keeps using C-ray in its PR then Intel will copy them and use it too (and where that rabbit hole leads is anyone's guess; is it believable that neither side will ever try to cheat?), while on the other hand as long as they do use the test then by definition it cannot be used to promote whatever advantages Zen2 may have over Intel a la improved AVX performance. Why didn't they use CB R20? Perhaps because it incorporates Intel's raytracing engine and using Win10 might not be optimal on CPUs with as many cores as Rome (assuming it's possible to use Win10 on Rome at all atm). Hopefully, those to whom Rome may be appealing (as with any CPU) will be more discerning in their buying decisions and wait for proper relevant reviews that correctly reflect their intended workload.
    Thanks!
    Ian.
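The claim above that C-ray's inter-core traffic is tiny can be put on paper with a back-of-envelope calculation; a sketch assuming 4K output, 24-bit RGB pixels, and per-scanline work division (illustrative assumptions, not confirmed details of AMD's binary or c-ray's internals):

```python
# How much result data actually moves between cores in a C-ray run?
WIDTH, HEIGHT = 3840, 2160   # assumed 4K output resolution
BYTES_PER_PIXEL = 3          # 24-bit RGB

framebuffer = WIDTH * HEIGHT * BYTES_PER_PIXEL  # whole rendered image
scanline = WIDTH * BYTES_PER_PIXEL              # one row's returned result

print(f"full frame: {framebuffer / 2**20:.1f} MiB")  # ~23.7 MiB
print(f"one scanline: {scanline / 1024:.2f} KiB")    # 11.25 KiB
```

Note that oversampling multiplies the per-pixel compute without growing the output buffer, which is consistent with the observation that the test stresses cores and caches rather than memory or interconnect bandwidth.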

    • @defeqel6537 · 5 years ago · +1

      I guess C-Ray would also be a test that would scale well on multi-socket systems?

    • @Kawayolnyo · 5 years ago

      So... you're literally saying that AMD's Zen is the Bugatti Veyron of CPU platforms for much less money than Inturd. Got it.
      Thank you very much for publicly and openly confirming/clarifying that Zen absolutely BTFOs Intrash on all fields while staying much cheaper and way more efficient at the same time (and also having great forward/backward compatibility).
      P.S.
      >Why didn't they use CB R20?
      Zen's THREADRIPPER (not even Epyc) already completely BTFOd Inturd's overpriced garbage CPoos in Cinebench R20, according to latest tests. So there's that. See here for example: www.imagebam.com/image/d6137d1167288784.
      Yes, as you can clearly see by that pic, Zen ALREADY (not even in its much more improved Zen 2 state, but "mere" Zen+) utterly destroys *FOUR* extremely overpriced Xeons *in a 4S* and got very close to owning TWO *most expensive* "platinum" Inturds in a 2S configuration, and that's in a heavily Intel-biased benchmark that was made SPECIFICALLY only with one sole purpose of making Intel """look good""" in comparison with AMD's Zen on the worldwide scene. Irony is T H I C C, lol.

    • @mapesdhs597 · 5 years ago

      @@defeqel6537 Yes, hence the 32-CPU SGI Origin3K results on my C-ray page, along with my own 24-CPU POWER Challenge. Just as cores don't need to talk to each other much for these tests, sockets therefore don't either, except for returning render results to whichever core holds the management thread (but the data is minuscule).

    • @mapesdhs597 · 5 years ago

      @David haldkuk I think perhaps you're looking at it too much from the perspective of how complexity works in real-time 3D scenarios such as games, ie. more polygons means lower performance. For C-ray, one can make the test a lot more complicated merely by increasing the output resolution and using higher oversampling. From what I've been able to find so far, AMD for its Rome test used the sphfract scene at 4K res with 8x oversampling, so it isn't even really that complex a configuration, and from what Jim says it could well be that AMD chose settings which would ensure the test would remain within L2/L3 (there's no "standard" C-ray test, so they can choose whatever they like).
      For something tougher one could move up to 8K with 16x oversampling, but I don't know if the changes in relative performance would be that useful. Yes one could create a newer scene file with many more objects and surfaces, but I don't know if that would increase the compute complexity in a manner that's any different to simply rendering at a higher resolution and/or deeper oversampling level. Worse, the longer runtime might allow people to infer that the test is more relevant somehow to real world performance, when it really isn't. This is why, with SGIs, I was interested in comparing how a popular benchmark scene in Maya differed so greatly to a real-world scene rendered in Alias, which means the renderers are different as well (the Alias scene came from a digital artist who designed magazine adverts, large advertising billboard posters, etc.):
      www.sgidepot.co.uk/perfcomp_RENDER4_maya1.html
      www.sgidepot.co.uk/perfcomp_RENDER3_alias1.html
      The Maya test is very simple and (on the same CPU arch) scales pretty much just with clock speed, whereas the Alias test is sensitive to system architecture and especially L2 cache size, eg. the dual-R14K/600MHz Octane2 is only slightly faster than a single-CPU R16K/800MHz Fuel (the latter has 2x more L2, higher mem bw and lower mem latency). And crucially, the Alias test very much reflects the kind of real daily work the artist in question has to deal with, so it's genuinely useful (or was back then); the guy does the same sort of work today, but of course he's long since moved onto PCs:
      www.johnharwood.com/
      Teasing apart these issues has always been messy, as Jim's excellent videos convey. Sometimes companies do not want to delve into exactly what's going on in a public manner too deeply as it might reveal issues with their systems which don't look so good. Jim shows how the power efficiency curves of Zen/Zen2 may be related to the choice of test settings used by AMD to present their products in the most positive light possible (I guess that's marketing; people do the same thing, the clothes we wear, hair, makeup, jewelry etc., all designed to convey something we want to project that's relevant to the context, e.g. romantic appeal, professional presence, imposing military force, eco tree hugger, etc.) Heck, the entire plant and animal world for half a billion years has been an exercise in deceptive advertising. :D These days though, with modern social media, etc., trying to spin things in such a manner might be counterproductive. The kind of people who would be interested in Rome are less likely to be fooled by such shallow practices, ditto (one would hope) the enthusiasts interested in a 16-core Ryzen. The danger is AMD over-hypes the product but then delivers disappointment.
      In the context of SGIs I looked into an example of this sort of thing. After SGI released their final InfiniteReality4 graphics product, Onyx350/Onyx3900 gfx supercomputers and their Tezro workstation, a natural question to ask was, how good would these products be with a maximum spec for running Discreet Inferno or Flame? How much better than existing configurations with lesser CPUs, or older SGIs with earlier architectures? eg. a quad-1GHz Tezro V12, likewise the equivalent node boards stuffed into the Onyx systems with V12 or IR4 gfx (max 32 CPUs for Onyx350, max 1024 CPUs for Onyx3900). None of SGI's PR contained this information, which I thought was a bit weird. Discreet wasn't talking about it either. Thus, with the help of some key people I was able to run some proper tests:
      www.sgidepot.co.uk/perfcomp_DISCREET1_FlameTests.html
      I never got round to testing Inferno, but the conclusion for Flame on SGIs was startling: for various real-world tasks running on systems using V12 gfx, performance can be severely held back by the V12's 128MB VRAM limit (SGI should have increased the VRAM for V12 in O3K-class systems to at least 512MB, preferably 1GB, but perhaps by then they couldn't afford to). It meant that having much faster CPU options such as the quad-1GHz barely made any difference in many cases, the CPUs were waiting on the V12 to get a move on. IR4 (released in 2002 btw) running Inferno would not suffer from this because it has a lot more VRAM (10GB, with 1GB texture RAM). Point being, even though SGI risked annoying customers by potentially selling them products or upgrades that may not provide a useful gain in performance, the marketing and PR still did it anyway (at least in the case of those using Flame, which for SGI was a critical market by then).
      Epicurus said 2300 years ago that advertising was the greatest evil. Nothing has changed, marketing/PR still poses tech products in the best light if it can, regardless of whether doing so might make the product designers and engineers want to tear their hair out in frustration.
      Ian.

    • @mapesdhs597 · 5 years ago · +1

      @@Kawayolnyo :D My Bugatti analogy was just to convey the idea that AMD boasting about C-ray isn't telling relevant potential customers anything they want to know. I could just as easily have referenced something more mundane, like promoting the latest TV by boasting about the number of buttons on the remote control. :) It's a mismatching of concepts, like the Suez Crisis popping out for a bun (and I'll nick that line from Adams as often as possible). The kind of customers who might be interested in Rome would I am certain not care about C-ray numbers.
      I'm no expert on the whole Intel/AMD competitive position in Enterprise btw, not my field. Also remember that TCO is often more important than raw hw performance, which includes other aspects such as system support, maintenance, staff salaries, software licensing and 3rd party sw optimisation, etc. A Cinebench score, just like C-ray, does not for this class of hw tell one anything useful in terms of making a buying decision. It makes for great PR and headlines, but I doubt it helps much with relevant buyers who are more likely to be interested in directly representative benchmarks, or indeed inhouse testing on loan systems.
      Ian.

  • @Tuchulu · 5 years ago · +121

    Haven't finished watching the video but baby Jim's voice is adorable

  • @Redd_Nebula · 5 years ago · +42

    That Polaris power video was the first video of yours I saw. I was lucky enough to come across your channel just as you changed from a let's-play channel. It's been great to see you improve the quality of your channel to where it is now. Keep up the good work, Jim.

    • @SgtStinger · 5 years ago · +3

      Same here. The improvement over time in technical analysis from Jim is great. I really appreciate what this channel has become!

    • @kyrie69 · 5 years ago · +1

      I've been around since then as well. I get excited when there is a new video from Jim, he holds nothing back when it comes to his criticism of computer hardware.
      Jim and I are both GenXers who have grown up with all of this wizardry that is modern computer chips. Having 2MB of memory in your setup used to set you back $1000.

  • @Nianfur · 5 years ago · +44

    Asus have started releasing BIOS updates for Ryzen 3000.

    • @pec1739 · 5 years ago · +1

      dude i'd love to read about that, any links to it ?

    • @Nianfur · 5 years ago · +2

      @@pec1739 Just 6 hours ago someone wrote an article. www.google.com/amp/s/wccftech.com/amd-ryzen-3000-valhalla-cpus-x370-x470-motherboard-bios-support/amp/

    • @pec1739 · 5 years ago

      @@Nianfur thanks man !

    • @chronyk743 · 5 years ago

      CROSSHAIR VI HERO BIOS 6808
      Update AGESA 0070 for the upcoming processors and improve CPU compatibility.
      ASUS strongly recommends installing AMD chipset driver 18.50.16 or later before updating BIOS.
      So I have updated my BIOS, so yeah.

  • @prettysheddy · 5 years ago · +28

    Man, I couldn't wait for another one of your videos. Honestly, a lot of what you cover I can find with heavy research (your leaks are unparalleled though), but I just love how you explain the information. It reinforces and gives me a more complete understanding of the subjects I was studying. Okay guys, stop reading this and get back to the video!

  • @flcnfghtr · 5 years ago · +44

    The reason they picked an 8C is straightforward, they wanted to do an apples-to-apples comparison to the 9900K. A clocked-down 16C chip would obviously be faster in MT performance, but everyone knows that, so that wouldn't have been very impressive. The handwave here is in the selection of Cinebench as the benchmark, while using power-limited chips. Like the Polaris demo, they are clocking down to their efficiency point, while using a benchmark in which a 2700X already outperforms the 9900K's IPC. It's not an unfair test, but it is one where they are putting their best foot forward.

    • @glenwaldrop8166 · 5 years ago · +2

      16 core against a 9900K wouldn't be as telling about the technology.
      AMD's point is that they're winning on IPC, cores and possibly clock speed, certainly power. If they beat them down with a 12 core we wouldn't know as much about single threaded performance.
      Apples to apples gave us far more info than a 16 core vs 8 core shutout would have.

    • @ionitaconstantin1052 · 5 years ago

      @@glenwaldrop8166 It's hard to tell the performance increase without the clock speed.

    • @ionitaconstantin1052 · 5 years ago

      @Chiriac Puiu Sure, it is enough to make people excited, but not to be sure about the speed of the rest of the CPUs.
      As far as everybody knows right now, that may be the best they have in terms of clock speed (which matters a lot for gaming), and the rest may just be more cores at lower clocks, most likely.
      I'm not sure on prices and the performance of the full CPU range, so I'm not too excited, since last I checked the word was that the Radeon VII would be around $400 for the same performance as a 2080... the performance is OK, but with fewer features and more heat and noise. As for the price, well, let's just say it's not what people wanted.
      I just hope they don't do the same on the CPU side and keep the prices decent.

    • @ionitaconstantin1052 · 5 years ago

      @Chiriac Puiu Yes, but my problem with it is that we do not know the clock speed and IPC improvements, so we can't make any sort of claim about performance.
      Until reviews with benchmarks and official prices arrive, it's kind of pointless to make claims; unless you work for AMD and know something the rest don't, I'll believe it when it comes to market.

  • @Flurry17 · 5 years ago · +44

    Last time I was this early, the Intel i7 had 4 cores.

  • @philscomputerlab · 5 years ago · +3

    Thanks, enjoyed this very much! Interesting times indeed and looking forward to your reviews and analysis when the products hit the market!

  • @DannyzReviews · 5 years ago · +32

    As always Jim, another video that was well put together, and one I was especially looking forward to. Since the whole Ryzen 3000/Zen 2 hype train started rolling, the majority had been very optimistic. But bringing in some healthy skepticism is necessary I'd say. Nonetheless, it looks like even in the worst case scenario, things don't look too bad. I wouldn't be disappointed seeing the clock speeds taper off at around 4.6GHz or so, with an IPC boost of around 10%. That was my initial expectation anyway. Therefore even at their worst, they'll still be ahead of Intel. So as long as they keep the prices right, I can see Ryzen 3000 still being a success.
    I can see that you did have to dig down deep to find the concerning parts of everything that has been shown. I'm just annoyed now that we've heard so much, and in such variation, but with no clear release date in sight. I'm getting really anxious about it. I want to trust they'll use their better judgment to not botch the launch and hopefully they release a 12C/24T CPU in the first line-up.

  • @MrJacker1991 · 5 years ago · +15

    Great video. The only bad part is knowing that I now have to wait 7 days or so for the next one. Keep up the good work! :D

  • @dongurudebro4579 · 5 years ago · +119

    So even if we assume the worst, it's still pretty awesome - sounds like a deal to me! :)
    Thanks for that outstanding research and analysis.

    • @sharkexpert12 · 5 years ago · +21

      The worst case is a decent upgrade: not bad, but not earth-shattering. The best case is another Godzilla set loose on the industry that people are going to struggle to tame.

    • @baronvonlimbourgh1716 · 5 years ago · +5

      @@sharkexpert12 The most important thing is still its awesome efficiency at lower clock speeds.
      Clock speed does not matter all that much in enterprise situations. The big Xeons have been in the 2GHz range forever because power usage and heat generation are far more important.
      And AMD succeeding in the datacenter is far, far more important for their survival than succeeding in the enthusiast gaming space. It is possible that Zen 2 at the very high end could again be a disappointment.
      In the end it will still be a win for AMD, as market share in the datacenter should be their main priority anyway.

    • @TonyM-zi9rq · 5 years ago · +2

      Yep, been holding out for some time now waiting on the Zen 2 chips to upgrade my ole 4790K.

  • @CaveyMoth · 5 years ago · +20

    Holy crap, a downclocked Polaris GPU sounds great for a media player PC.

    • @DzheiSilis · 5 years ago · +11

      Just get a 2400G

    • @prototype3a · 5 years ago · +1

      I was thinking similarly but for a CNC machine that I want passive cooling on so it doesn't clog up with sawdust. ;D

    • @DzheiSilis · 5 years ago

      @@prototype3a get a 2400G with one of those passive coolers that derbauer showed off

    • @DamianB82 · 5 years ago

      Imagine what Vega could do downclocked. I already know.

    • @alexc3504 · 5 years ago · +1

      I actually limited my Vega 64 a bit on purpose to make it run less hot; they put way too much power into these reference cards. My friend who runs a Vega 56 showed me an undervolting and overclocking guide for Vega, and I thought he was smoking something the first time I heard him, until I read it. It's one of those blowers, and it runs far more stable with the power usage reduced in the Radeon software, and doesn't hit its thermal limit and shut down like my old Nvidia card often did. God, Maxwell was a dumpster fire. This card in my experience behaves a lot better than cards I've used in the past. It's a good happy little GPU.

  • @juggernautxtr
    @juggernautxtr Před 5 lety +5

    Whatever comes, comes; enthusiasm isn't a crime. I still bought a Vega 56 and did what you showed in your video on it... happy as hell with my 1600X and PowerColor Red Dragon Vega card.
    This is still the best tech news I watch, and probably always will be. The effort made to bring us this information is outstanding, and I think we who watch consistently know to thank you for as honest a view as can be given.

  • @TrueThanny
    @TrueThanny Před 5 lety +8

    34:16 Would it? In Zen/Zen+, the speed of Infinity Fabric is tied to the memory clock. Isn't Zen 2 supposed to decouple that link? So isn't Infinity Fabric running as fast as it can regardless of memory clock speed on Zen 2? Could they have used that memory speed to demonstrate that fact?

  • @MrGunnarPower
    @MrGunnarPower Před 5 lety +36

    I love my 1800X but if the Zen 2 reports are true AMD might be forcing me to upgrade. I cannot wait.

    • @pig666eon8
      @pig666eon8 Před 5 lety +2

      I have a 1700X and I'm upgrading; it's been a few years and it's time regardless of how good Zen 2 will be.

    • @koford
      @koford Před 5 lety +2

      @@pig666eon8 I have the 1700 (non x), time to upgrade.

    •  Před 5 lety

      I'll keep my 1700 for a good while. Nothing wrong with it :) (Locked at 3.7)

    • @koford
      @koford Před 5 lety

      @ Yeah, mine's locked to 3.7 too. Could push it, but nah. It works, nothing wrong with it.

    • @kyrie69
      @kyrie69 Před 5 lety

      They're going to twist your rubber arm.

  • @Naffacakes98
    @Naffacakes98 Před 5 lety +1

    Is there any info on whether b350 motherboard will work with zen 2 chips. I saw a video from dannyzplay saying it won't. Kinda worried.

  • @jimmanis6717
    @jimmanis6717 Před 5 lety +6

    I'm optimistic, I think there will be a nice boost in IPC and a decent boost in frequency. add the two together and throw in the tweaks they surely did to the memory controller and it should be very competitive on a per core basis and much better in a price/performance scale.

  • @SoundFX09
    @SoundFX09 Před 5 lety +11

    These are the types of videos that I like. You stepped back from your Primary analysis and looked at the Zen 2 Architecture from a different perspective.
    Perhaps not everything is all as it seems at AMD, and this is what will keep the discussion going for us all.
    Keep up the great work!

  • @marsdeat
    @marsdeat Před 5 lety +8

    Some people have half-hour TV programmes they watch week-in week-out. I have AdoredTV :P

  • @ciaranc3742
    @ciaranc3742 Před 5 lety +1

    I was wondering, given what you said, whether the 3850X was again put against the 9900K but with lowered voltage. That would mean it would also use the better silicon. The CPU shown at CES only had one chiplet with cores, so it could have been the single 8 core chiplet against the 9900K.

  • @nonamehere4195
    @nonamehere4195 Před 5 lety +3

    One benchmark that _could_ be L3 heavy is LuxMark with Hotel scene. It is intended to bench raytraced rendering on OpenCL GPUs, but also has a plain C++-on-the-CPU rendering mode. Would've been interesting to see how its samples per second change with 2+0 and 1+1 core arrangements.

  • @bradmorri
    @bradmorri Před 5 lety +12

    Cinebench does benefit Zen because it does mostly run out of the L1/L2 cache. That has been obvious since Zen 1. The things that you do not mention in this video and maybe did not consider or fully understand are the following:
    L1/L2 and L3 cache on Zen all run at the CPU clock speed, not at the speed of the installed system memory. The L1/L2 and L3 are all limited to their own CCX module within the die, and communication between CCX modules relies on the Infinity Fabric "on die network", which is of a hub-and-spoke design with equal-speed connections between a central switch, the two CCX modules, the memory controller and the PCIe bus, the maximum connection being limited to the max speed of the dual-channel memory installed in the system. Likewise a single CCX module has the same bandwidth available to it that the dual channels of memory have. As with an office LAN, when too many users want access to a central server, the connection between the server and the switch on a hub-and-spoke network becomes a bottleneck. That is the reason why a central server might typically be connected to the switch with a 10Gb/s link while the workstations all run at 1Gb/s.
    The Infinity Fabric that transports data between cores and between cores and system memory performs the same function as the Intel ring bus. However, the ring bus is more like a Token Ring network where every device is guaranteed access, and it is clocked at roughly double the rate of the Infinity Fabric, allowing throughput on the ring to exceed the capacity of the system memory itself and the PCIe bus. I'm sure power considerations were the likely reason for the compromised design.
    As with Intel, faster RAM is beneficial to a small extent. The major benefit to Zen, though, is the increased throughput on the IF: the higher frequencies allow more data transfers per second and push the bottleneck to system memory that is inherent to the Zen 1/1+ architecture higher up the CPU performance curve. As demonstrated by the 9900K, the memory chips themselves are not the bottleneck; it is the shared transport in between CPU cores, PCIe devices and memory. The ring bus doesn't have the same bottleneck, as it allows roughly twice the throughput between cores, PCIe controller and memory controller to start with.
    You can test it yourself on an Intel system by setting the cache multiplier to half of what the stock settings are and try playing a 1080p game. The 9900K will play games with performance more like a 2700X.
    While I do not have any hard facts beyond what has been discussed about Zen 2 here, based on my knowledge of the Zen 1 architecture and its inherent design limitations, together with what we have seen so far in AMD demos of Zen 2, I am pretty confident that we will see:
    1. Zen 2 CCX modules now containing 8 cores, with the L1/L2 and L3 doubling in size compared to Zen 1's 4 core CCX modules; the L3 cache will be shared by all 8 cores.
    2. The Infinity Fabric clocked separately from the memory speed, most likely at or near CPU frequency.
    3. The IO die containing some level of L4 cache, not yet disclosed by AMD, shared by all the installed CCX modules. The L4 cache would allow the CPU cores to switch between modules without continually going back to relatively slow system memory when threads move between the different modules, working something like the 128MB eDRAM cache on Broadwell and the Iris Pro mobile Haswell chips.
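A quick worked example may help make the memory-clock coupling above concrete. This is a back-of-the-envelope sketch only; the 32-bytes-per-fabric-clock link width used here is the commonly cited figure for Zen 1's on-die fabric and is an assumption, not something stated in this thread:

```python
# Back-of-the-envelope: Zen 1 Infinity Fabric throughput vs. dual-channel
# DDR4 bandwidth. The 32 B/cycle fabric link width is an assumed figure.

def ddr4_bandwidth_gbs(mt_per_s, channels=2, bus_bytes=8):
    """Peak DRAM bandwidth in GB/s (DDR4: 64-bit = 8-byte bus per channel)."""
    return mt_per_s * bus_bytes * channels / 1000

def zen1_fabric_bandwidth_gbs(mt_per_s, bytes_per_cycle=32):
    """On Zen 1 the fabric clock equals the memory clock (half the DDR rate)."""
    fclk_mhz = mt_per_s / 2
    return fclk_mhz * bytes_per_cycle / 1000

for rate in (2133, 2666, 3200):
    print(f"DDR4-{rate}: DRAM {ddr4_bandwidth_gbs(rate):.1f} GB/s, "
          f"fabric {zen1_fabric_bandwidth_gbs(rate):.1f} GB/s")
```

At DDR4-3200 both figures land on 51.2 GB/s, which is the point of the comment: on Zen 1/+, raising the memory clock raises fabric throughput in lockstep.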

    • @cristiansalazar6622
      @cristiansalazar6622 Před 5 lety +1

      I was thinking the same!!! Especially about the L4 cache.
      On the Infinity Fabric: I remember Tofu2 in the Fujitsu SPARC64 XIfx; it has fewer pin connections but faster speed and reduced latency.

    • @bradmorri
      @bradmorri Před 5 lety +3

      @@cristiansalazar6622 I certainly think that the 12 core part coming first makes the most sense. Threadripper chips are likely to follow Ryzen by some time, so a 16 core Ryzen eats into their HEDT business when it doesn't have to be that way right now.
      Intel looks like they have a 10 core product coming next, so a higher-IPC 12 core Ryzen, even if the IPC doesn't quite match Intel in single core performance, should compare favorably with the top end mainstream Intel SKU.

    • @sergiomadureira9985
      @sergiomadureira9985 Před 5 lety +2

      brad morris This backs up AMD's claim that Zen 2 will be very good for gaming; if your points are accurate, that's all targeting lower latency, which games are so sensitive to. Question: do you think the 8C Ryzen CPU will be better for gaming than the 12C? The 8C needs only one 8C chiplet, while the 12C will have 6C+6C, which may still introduce latency, but the 12C should have more total cache.
      That's important also in terms of comparing to its rival, and still king of gaming, Intel. The best 8C Ryzen is expected to cost around half of the 9900K, bringing the value envelope to another level, and Intel will be in trouble because single-thread performance is the last crown Intel still has. If that's in jeopardy they will really have to grind, innovate and lower prices, which is all we consumers want.

    • @bradmorri
      @bradmorri Před 5 lety +3

      @@sergiomadureira9985 The L4 cache is complete speculation on my part, but I do believe it would mitigate some of the core-to-core latency issues that Zen has demonstrated to date.
      Similarly, clocking the Infinity Fabric separately, even if only at, say, 1.5x the memory frequency, will go a long way to providing a more Intel-like gaming experience.
      The rumored PCIe 4.0 will also help, as it suggests that IF bandwidth will double as well.
      Getting a 25% IPC gain over Zen 1 by parallelizing instructions more efficiently (floating point and integer calcs running in parallel, for example), together with a more efficient internal data transport architecture, should combine to allow for that possibility.
      If I were buying something today and money was no object I would go with a 9900K solution, so I don't care about one brand being better than the other. Having said that, I am quietly confident that the 3rd generation of Ryzen is the one that really pulls everything together and makes a name for the Ryzen series of chips. Thunderbolt 3 going royalty-free and USB4 coming also bode well for including them with AMD platforms.
      Competition is a good thing. It pushes innovation, which is something we have not really seen much of for some time.

    • @bradmorri
      @bradmorri Před 5 lety +1

      @@sergiomadureira9985 I think that we should all take manufacturers' claims at face value; they want you to get excited and buy their product. I truly hope they are genuine claims, but that will have to wait until the chips can be tested out in the wild.
      With regards to 8C vs 12C for gaming, I honestly don't know. If AMD learns from the past and mitigates the deficiencies of the design, then both should perform pretty well. If Jim's leaks are real, then the 12 core looks like it will provide the best single core performance.
      The new generation does stand a chance of reducing the latency to close to what Intel is doing now. An 8C will not have to deal with the same dual-level thread switch that the current chips do; a 12 core will have the divide in the middle. The Windows scheduler has been shown to be not all that smart when it comes to multi-die or multi-CCX chips, so maybe we will see benefits there as well.

  • @Uro666
    @Uro666 Před 5 lety +4

    I'd say most gains outside of the obvious ones from the 7nm die shrink could be down to the I/O die and IF2. There hasn't been much talk about that I/O die and what it may or may not contain, or about improvements to Infinity Fabric 2 through the switch to 7nm on the Zen core IFOPs, the 7nm-14nm cross compatibility and any performance gains from that, and the IFIS links across the CPU package.
    Great video again, appreciate your analysis Jim.

    • @flcnfghtr
      @flcnfghtr Před 5 lety

      FYI the IO die on Matisse is the exact size you get from taking a Zeppelin chip and removing two CCXs. There isn't any space for an L4 cache.

    • @Poctyk
      @Poctyk Před 5 lety

      @@flcnfghtr Tbf, the fact that it has the same size doesn't mean it has the same contents. You would need to do quite a bit of redesign of what is "left" after you remove the cores to accommodate this change to a chiplet architecture.
      We'll see. Release should be a few months from now.

  • @sentinalprime9246
    @sentinalprime9246 Před 5 lety

    What's the name of the game in 2:22???

  • @AspectClip
    @AspectClip Před 5 lety

    What browser extension are you using that shows you subscriber count for each commenter at 13:31 ?

  • @michaelden
    @michaelden Před 5 lety +6

    I have the 2400G APU and rarely buy / need the power of a discrete GPU so I'm holding off upgrading until Zen 2 + Navi comes to APUs or AMD release a chip similar to the Intel+Vega used in Hades Canyon.

  • @dongurudebro4579
    @dongurudebro4579 Před 5 lety +38

    BTW, the biggest problem of Zen 2 (Ryzen 3000) is, and most likely will be, availability, not only of the CPUs but of the boards too!
    That's also another reason to delay the 16 core by 1-3 months.

    • @mpk6664
      @mpk6664 Před 5 lety +12

      The 3600x and below can use the old AM4 boards. it'll be a problem on the higher end CPUs though.

    • @TheCgOrion
      @TheCgOrion Před 5 lety +9

      There might be the possibility of the 12c/24t chips running in the prior boards as well. It all depends on the power draw. My guess is the 12c CPU will be about the same usage as the old 8c part, at worst.

    • @NBWDOUGHBOY
      @NBWDOUGHBOY Před 5 lety +1

      @@TheCgOrion i saw somewhere they said that the ryzen 3000 wont run on b350. But will run on X370 and up.

    • @TheCgOrion
      @TheCgOrion Před 5 lety +1

      @@NBWDOUGHBOY Nice. Thank you for the information. My Ryzen 7 is on X370, so hopefully I'll be able to upgrade it in the future.

    • @syncmonism
      @syncmonism Před 5 lety +2

      @@TheCgOrion Yeah, that would depend on the exact model of board, and how good the power delivery system is.
      For reasons I don't understand, apparently MSI seems to offer better power delivery with their mid-ranged AM4 boards, and at good prices. I've seen a few different reviews which talk about this, but it's typically not easy to get good info about power-delivery, as motherboard makers usually don't make this clear, with specs that are often misleading at best, if not outright lies. Gigabyte has even gone as far as to load up some of their boards with components it doesn't need to make it LOOK like the power delivery system is better, without actually providing a better than average power system.

  • @mortyforty8404
    @mortyforty8404 Před 5 lety

    Hearing that old recording shows how much you've changed your accent to make it more understandable. As someone with English as a second language I appreciate that very much, thank you.

  • @altimmons
    @altimmons Před 5 lety

    I saw somewhere (perhaps even here, though I think it was AnandTech) that the power consumption on Threadripper scaled exponentially with utilization due to the Infinity Fabric's power draw. It meant that as it clocked up, nearly all the power and thermal headroom went into inter-core communication and not into increased clocks. It's why they had such low boost clocks. I wonder if that could continue to be the problem. Is the Infinity Fabric dooming these otherwise good chips?

  • @JethroRose
    @JethroRose Před 5 lety +5

    AMD may be showing their best hand... but at least they aren't using 1.6kw refrigeration units.... :D

  • @jonathans303
    @jonathans303 Před 5 lety +9

    What has made your voice change so much? Your accent is different but also you speak with a much deeper voice, I definitely prefer it now!

    • @janaebert3059
      @janaebert3059 Před 5 lety +6

      Alcohol and cigarettes xD

    • @krioni86sa
      @krioni86sa Před 5 lety

      drugs and sex

    • @syncmonism
      @syncmonism Před 5 lety

      People who get professional voice training for broadcasting are taught to speak in a deeper voice, I believe. Of course, I don't think Jim has actually gotten any professional training, not unless it's from one of those inexpensive educational video websites.

    • @jeffkuzzen
      @jeffkuzzen Před 5 lety

      He got older.

    • @nadirjofas3140
      @nadirjofas3140 Před 5 lety

      Different microphones e.t.c

  • @dupandashan
    @dupandashan Před 5 lety

    Can you make a video about shadows? Since the beginning of gaming, shadows have hurt performance, and now I see ray tracing used only for reflections. And maybe something like ray- vs. path-traced shadows.

  • @mhh3
    @mhh3 Před 5 lety +1

    The 2.2GHz boost clock on the 64 core Epyc CPU seems to be true; a new leak came out. So does this mean that the IPC gains must be massive?

  • @zerospampls3980
    @zerospampls3980 Před 5 lety +3

    Hey guys, a dumb question right here:
    Would it be possible to use HBM on an APU with a Ryzen chiplet and Navi graphics?
    The coolest part would be if you could use the HBM as an L4 cache/RAM. Does anyone know if that's technically feasible?
    If that worked, it would be a crazy good product!

    • @mduckernz
      @mduckernz Před 5 lety +2

      @@cheescake98 Not with appropriate prefetching. I mean, yeah, it's no L3 (in speed, I mean), but it's so much better than RAM that it's worthwhile, especially for APUs, which are very constrained by memory bandwidth (it's only not all that apparent because they are so low end).
      Re: use as a RAM adjunct, you could attach it via PCIe, but it would take up too many lanes to be effective. Better to use a dedicated interface. Maybe we'll go back to co-processor sockets haha (except for memory).

    • @simasimson5798
      @simasimson5798 Před 5 lety

      They are doing some research on 3D stacking and on something similar to what you said. But there's not much about it, just rumors and news articles with little to no info.

    • @tomstech4390
      @tomstech4390 Před 5 lety +2

      Technically possible, yes. Certainly a single HBM2 stack of 4GB would be an immense high bandwidth cache (HBC) and bring a lot of improvements to iGPU performance (which is going to be needed, with better Navi iGPUs being a lot faster than the traditional small GCN iGPUs currently used). Plus HSA, where the iGPU is used as a floating point accelerator with data flagged by the CPU and GPU for use by both, would be awesome.
      We've already seen tiered storage become a thing. We now have L1 cache, then L2, then L3 cache, then a big gap to system RAM, then Optane caches and/or SSDs, then a big gap to HDDs.
      The industry is constantly trying to fill those gaps cost effectively, and going from a 16MB L3 cache straight to 16GB of RAM... a 4GB HBC fills that gap very well.
      The big problem is cost.
      An HBM2 stack is close to the same size as an 8 core chiplet, so it costs about the same to make. Navi-based APUs will be monolithic single dies like current APUs, with 8 cores on one half of the die and the Navi 20 GPU on the other side, at a price of around £150. Adding an HBM2 stack to that would make it £200, and the performance increase wouldn't scale with the price.
      It would still be cool to see, and some proprietary system might still get it. There are APUs out there with 4c/8t and 2560 Vega cores already which would benefit, but they're not socketed.
      techgage.com/article/a-look-at-amd-radeon-vega-hbcc/
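The tier-filling argument above can be sketched with the standard average-memory-access-time (AMAT) formula. Every hit rate and latency below is a made-up illustrative number, not a measurement of any real part:

```python
# Average memory access time (AMAT) with and without an HBM "L4" tier.
# All latencies (ns) and hit rates are illustrative guesses chosen only
# to show the shape of the argument, not measured values.

def amat(levels):
    """levels: list of (hit_rate, latency_ns); the last level must catch all."""
    total, reach = 0.0, 1.0  # 'reach' = fraction of accesses that get this far
    for hit_rate, latency in levels:
        total += reach * hit_rate * latency
        reach *= (1 - hit_rate)
    return total

# Without HBM: L3 hits ~90% of what reaches it, the rest goes to DRAM.
no_hbm = amat([(0.90, 12), (1.0, 90)])

# With a large HBM cache between L3 and DRAM catching most L3 misses.
with_hbm = amat([(0.90, 12), (0.85, 40), (1.0, 90)])

print(f"AMAT without HBM tier: {no_hbm:.2f} ns")
print(f"AMAT with HBM tier:    {with_hbm:.2f} ns")
```

Under these toy numbers the HBM tier cuts average access time from 19.8 ns to about 15.6 ns, which is the "fill the gap between L3 and DRAM" effect the comment describes; whether the real-world gain justifies the extra cost per package is exactly the trade-off raised above.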

  • @siveric32
    @siveric32 Před 5 lety +11

    You are a great analyst, excellent content and great channel
    Keep it up Jim :)

  • @FurbyOfDeth
    @FurbyOfDeth Před 5 lety +1

    I just wanted to say I love your analysis vids (I found your cache vid very informative). Keep up the great work.

  • @ArtisChronicles
    @ArtisChronicles Před 5 lety

    I think I actually remember that old video that you showed in this one. Makes it feel like it was so long ago now.

  • @MrSamadolfo
    @MrSamadolfo Před 5 lety +9

    🙂 yay, a new release means better discounts on last gen 🐢😍

  • @WatchingFromHeaven
    @WatchingFromHeaven Před 5 lety +7

    Hey mate, could you do some historical coverage of S3 Graphics, IBM CPUs, or VIA 🤔
    As always, your content is da best

  • @leeebbrell9
    @leeebbrell9 Před 5 lety

    Nice, another quality video, its really good the way you explain everything so clearly. Thanks.

  • @samborton6613
    @samborton6613 Před 5 lety

    Hey Jim, just wanted to let you know that this was a really interesting and well done video. It's very interesting to hear a more meta discussion comparing tactics and "best case" scenarios and the like for what AMD has done previously vs with Zen 2. I have to wonder though, even with the 7nm boost in clock speed, do you REALLY think that they can do 5GHz on 16 cores on desktop? I'm a hopeful person but I have a hard time seeing them getting even 12 cores up to 4.8 or 4.9. 5GHz just seems like too much of an ideal situation to get without cranking up the voltage, which we know from Ryzen 1 and 2 hits a wall really fast with clock speeds (4.1 on Ryzen 1, and like 4.4 on Ryzen 2).
    Anyways, I loved your analysis and thoughts, and I can't wait to upgrade my 1600 to a shiny new 12 core Zen 2 CPU later this year. Cheers!!

  • @The_Nihl
    @The_Nihl Před 5 lety +3

    It's hard to think that Zen 2 will suck. Intel, and many people, basically thought Ryzen would be DOA.
    It was not; quite the opposite, in fact. The block diagram of Zen alone led one to believe it was an unfinished product, an early taste of what they had been cooking since 2012. Designing a good architecture takes many years, and the ideas that were not incorporated into Zen, plus all the tweaks found in the meantime, should be incorporated in Zen 2.
    My wallet is ready.

  • @JoeRichardRules
    @JoeRichardRules Před 5 lety +12

    What a beautiful voice to hear on St. Patrick’s Day

    • @gctdonyre
      @gctdonyre Před 5 lety +3

      🤔 It's a Scottish accent though, not Irish accent.

    • @chadem311
      @chadem311 Před 5 lety

      The Shape Very different, no doubt. But a Scottish accent is a lot closer to an Irish accent than anything else, so the confusion is understandable. Both Gaelic/Celtic in origin.
      - American who is very familiar with the varied accents of 🇬🇧

  • @cybercat1531
    @cybercat1531 Před 5 lety

    35:30 Why the decapped Winbond EPROM?

  • @anonymoususer3561
    @anonymoususer3561 Před 5 lety

    When I came to this channel for the first time, I thought your voice was weird. After listening to the old video, I have to say, keep up the good work, you have improved so fucking much

  • @colesym84
    @colesym84 Před 5 lety +5

    I would still be happy with an 8 core chip at 4.5GHz, if I can clock it to 4.8GHz+ regardless of efficiency, going full Bonnie and Clyde.

  • @fokjohnpainkiller
    @fokjohnpainkiller Před 5 lety +20

    FINE! I'll sleep at 3am! Jeez...

    • @Maeryaenus
      @Maeryaenus Před 5 lety

      South Africa?

    • @fokjohnpainkiller
      @fokjohnpainkiller Před 5 lety

      @@Maeryaenus Greece, probably same timezone. Is SA 2 hours ahead of London time?

    • @Maeryaenus
      @Maeryaenus Před 5 lety

      @@fokjohnpainkiller yep. I ask SA because it's the most likely country with english names in this timezone

  • @turntablescience1
    @turntablescience1 Před 5 lety

    Will there be a Zen 2 desktop APU ?

  • @sergiomadureira9985
    @sergiomadureira9985 Před 5 lety

    Great video as always Jim, just touching on a (positive) point that you didn't mention. I think it was you in another video that presented a TSMC document showing in a graph that 7nm has really good clockspeed capabilities.
    Still, that doesn't explain the low-clocked ES. Maybe they're having trouble binning them, as it is in fact a half-node shrink from the previous process. Idk, guess we'll just have to wait and see.
    Keep up the good work 👍

  • @nichogenius1
    @nichogenius1 Před 5 lety +5

    Was the accent shift intentional? I remember this video, but I never once noticed a change in your accent in all the videos since.

    • @needausernamesoyeah
      @needausernamesoyeah Před 5 lety

      It could be a different microphone. Or maybe Jim has developed his "commentator's voice" a little.

    • @nichogenius1
      @nichogenius1 Před 5 lety +1

      @@needausernamesoyeah I think the latter is more likely ... the change wasn't quite an accent change as the speech patterns are pretty much the same... he just uses a much more assertive, deeper tone now.

  • @MrDaChicken
    @MrDaChicken Před 5 lety +17

    Jim tries to find a way to be negative about Ryzen 3xxx.
    And can't.
    I almost feel bad here.
    Really looking forward to dropping a 12 core/24 thread "X" series (3700X? We think?) into my Crosshair VI after a BIOS flash and being a happy camper. (1600X currently).

    • @sergiomadureira9985
      @sergiomadureira9985 Před 5 lety +2

      MrDaChicken We could really feel his effort to be negative when all his senses were going the opposite direction. But I understand why he made this video; he has been accused of being an AMD shill and getting a lot of hate lately (as he talked about in his last video), and wanted to give us a different perspective.

  • @IRNatman
    @IRNatman Před 5 lety

    Holy cow, your Polaris video was the first video I ever watched from you, I had no idea it was your first video after switching from a let's play channel. Also you do sound a lot different now, I just never noticed because you've changed over time.
    And of course, amazing video as always. :)

  • @AMScotty
    @AMScotty Před 5 lety

    Since the chip is smaller (7nm), does that mean lower clockspeeds can still achieve the same or better performance than 14nm?
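As a rough first-order answer to the question above: single-thread performance scales as IPC × clock, so a big enough IPC gain can offset a lower clock. A sketch with purely illustrative numbers (not real Zen or Zen 2 figures):

```python
# First-order model: single-thread performance ~ IPC x clock.
# The IPC and clock numbers below are illustrative only, not real Zen figures.

def relative_perf(ipc, clock_ghz):
    return ipc * clock_ghz

baseline = relative_perf(ipc=1.00, clock_ghz=4.1)   # old process, old core
same_clk = relative_perf(ipc=1.15, clock_ghz=4.1)   # +15% IPC, same clock
low_clk  = relative_perf(ipc=1.15, clock_ghz=3.5)   # +15% IPC, ~15% lower clock

print(f"+15% IPC at same clock:  {same_clk / baseline:.3f}x")
print(f"+15% IPC at lower clock: {low_clk / baseline:.3f}x")
```

With these toy numbers, a 15% IPC gain at the same clock is a straight 15% win, but the same IPC gain cannot fully cover a ~15% clock deficit, which is why the low engineering-sample clocks discussed in the video matter so much.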

  • @sounakkar
    @sounakkar Před 5 lety +3

    AMD managed to get a 50% IPC gain with Zen, so there's nothing impossible for AMD.

  • @AggressiveHiDef
    @AggressiveHiDef Před 5 lety +19

    They are keeping Zen 2 very tight-lipped. There are aspects of the processor design that will not be revealed until days before the official launch. That's when people will go WOW.

    • @AggressiveHiDef
      @AggressiveHiDef Před 5 lety +1

      @@buzzworddujour LOL

    • @maydaygoingdown5602
      @maydaygoingdown5602 Před 5 lety

      @@buzzworddujour more than likely.

    • @gabriellucena6583
      @gabriellucena6583 Před 5 lety +1

      Hopefully it's tight-lipped like the Zen launch was, which resulted in a good WOW and not a bad WOW.

    • @AggressiveHiDef
      @AggressiveHiDef Před 5 lety +1

      @@gabriellucena6583 What's interesting is that ZEN2 may be a complete design overhaul versus the original ZEN. Really can't wait to see more details on it.

  • @NeoVoodooTech
    @NeoVoodooTech Před 5 lety

    I run quite a few 580s for mining, and it's surprising how low-wattage Ellesmere will go if you underclock the core and drop the voltage.

  • @danstan8552
    @danstan8552 Před 5 lety

    How do you explain your accent change? On a side note... excellent video as always.

  • @TrevorLentz
    @TrevorLentz Před 5 lety +4

    My biggest concern with Zen 2 is motherboard compatibility with older boards.

    • @reiito8727
      @reiito8727 Před 5 lety

      x470 should be fine with a bios update at least

    • @gordongoodman8342
      @gordongoodman8342 Před 5 lety

      If that is your biggest concern in life, you should consider a reevaluation of your life priorities.

    • @gordongoodman8342
      @gordongoodman8342 Před 5 lety

      @@lancewhitchurch512
      In his post.

  • @zadintuvas1
    @zadintuvas1 Před 5 lety +4

    If Intel releases 10 core mainstream CPU then AMD would probably have to launch at least 12 core ones.

    • @kraveN911
      @kraveN911 Před 5 lety

      Well, Lisa did say in techjournalist interview @ CES 19. "If you look at the evolution of Ryzen, we've always had an advantage in core count."

  • @devilslayersbane
    @devilslayersbane Před 5 lety

    Hey, I've always thought you were extra fair on your video's, m8. Good to see you're sticking to that fairness in the best way possible.

  • @kamaljotsingh6675
    @kamaljotsingh6675 Před 5 lety +1

    I'm just stunned by the amount of work you put into your videos: collecting information from various sources, running benchmarks yourself and doing the analysis, to reveal these tech companies' business models and save us from being fooled. Respect.
    Great accent though ;)

  • @dinobot_maximize
    @dinobot_maximize Před 5 lety +4

    lol the AMD rebellion marketing is accurate though. a rebellion from intel and nvidia empires

  • @robc3863
    @robc3863 Před 5 lety +4

    Dubious headline mate, very dubious. AMD have all the info on their competition, so they're taking time while stocks run low... to make sure their new tech kicks ass. :)

    • @ShamanKish
      @ShamanKish Před 5 lety +1

      AMD is definitely not in a hurry.

  • @bigbuckoramma
    @bigbuckoramma Před 5 lety

    Holy shit! I can't believe how much you have changed your voice over time! That's crazy! I never had an issue with your accent, so it has gone totally unnoticed to me.

  • @McKiwi2
    @McKiwi2 Před 5 lety

    It's so strange, hearing your voice from back then. It sounds so different, and yet, I didn't even notice how much it's changed over the years. Talk about a flash from the past, that was certainly quick trip down memory lane. I have a feeling we'll get around to mentioning this specific time and video sometime in the future, maybe.
    Anyways, cheers mate.

  • @Moadeeb_
    @Moadeeb_ Před 5 lety +4

    AMD has great products... and some of the worst marketing known to humanity.

    • @tazboy1934
      @tazboy1934 Před 5 lety

      Sony, Panasonic and Philips too

  • @dontbother7330
    @dontbother7330 Před 5 lety +3

    I waited 6 months after zen1 launched before picking up a R7 1700. Wasn't a high-clocking sample so I built a new system around it for my dad. Jumped early on the 2700X and am much happier now.
    With 7nm heat density, having dual chiplets, and likely a bucketload of binning I can't help but wonder if your average 12 core part will be the better performer for the cooling and for the money. I can't see those halo high clockspeed 16 core chips being $300.
    I'll try and wait until the end of the year to make a decision to upgrade (for fun) or to hold off until the next refresh.

  • @blvk3
    @blvk3 Před 5 lety +1

    One question, Jim:
    who doesn't sandbag?

  • @iLAMV
    @iLAMV Před 5 lety

    Back then when I wasn't subbed to this channel, I watched occasionally one of your videos from time to time,
    but man, your accent did actually change drastically since the Polaris vs Maxwell video.

  • @dongurudebro4579
    @dongurudebro4579 Před 5 lety +12

    The heat shouldn't be the dealbreaker here; even if it's like 20°C hotter than Zen+, it's still only as hot as an Intel CPU... so yeah, we shouldn't have worries there! :)

    • @dongurudebro4579
      @dongurudebro4579 Před 5 lety +6

      Ah, and if it were like 40°C hotter (which it isn't), they would just let only 1-2-4-6 of the 8 cores boost high.

    • @myroslav6873
      @myroslav6873 Před 5 lety +1

      ? It would be a deal breaker for me. I didn't buy new i7 or i9 as those chips are freaking barbeques. I tried Acer Nitro laptop, and after 1 hour of gaming 4 core 8 thread Intel chip in it reached 94 degrees Celsius! I don't want new Ryzen CPUs to turn into that.

    • @TheBilaras97
      @TheBilaras97 Před 5 lety +1

      To be fair, Intel CPUs can run at 100°C for a long time with no issue, and I don't know if the same can be said for Ryzen.

    • @myroslav6873
      @myroslav6873 Před 5 lety

      @@TheBilaras97 perhaps, but I'm still uncomfortable with those temps. Never had parts run hotter than 80 degrees in my systems.

    • @TheBilaras97
      @TheBilaras97 Před 5 lety +1

      @@myroslav6873 For some reason people think 80°C is the max, despite laptops with Intel CPUs having run at 100°C and throttled for years with no problem. I also remember a test someone did (can't remember who) where they ran a desktop Intel CPU at 100°C for a year and had no problem at all. There's a reason Intel puts the throttle point at 100°C and not 80°C; they probably know what they are doing. I think it's mostly in your head, since older chips needed to run colder and people have kept the same mentality.

  • @DanielWW2
    @DanielWW2 5 years ago +6

    Terrible timing for this, Jim. :P
    AMD just unveiled their next step. It was obvious they would take it, but they are also going down the 3D stacking route. They had to, even without Intel announcing its 3D stacking technology.
    You know what is expected of you now. No more sleep. :P

    • @tobiassteindl2308
      @tobiassteindl2308 5 years ago +1

      AMD planning on 3D stacking?
      Source pls

    • @adoredtv
      @adoredtv  5 years ago +3

      Sleep? What's that...

    • @DanielWW2
      @DanielWW2 5 years ago +1

      @@tobiassteindl2308 www.tomshardware.com/news/amd-3d-memory-stacking-dram,38838.html

  • @sturmer3616
    @sturmer3616 5 years ago

    I'm not an expert, that's why I'm asking: how well does HBM2 work as a cache in a processor?
    And off topic, why do AMD GPUs have only 64 CUs? Why not 128?

  • @dannygc1205
    @dannygc1205 5 years ago

    Can I just say... haven't even watched the video yet, but as soon as I saw it in my feed I started smiling. I don't know if it's your soothing voice/accent, or because of your meticulous research [which is a turn-on for me (facts)], but I just love your videos. I watch them like a TV show... don't stop S2

  • @MrTopli
    @MrTopli 5 years ago +14

    Please, why do you upload right when I have to sleep? I have tests tomorrow, you know.

    • @matilija
      @matilija 5 years ago

      Unless he knows you personally, no, he doesn't know, and why should he care? He likely lives in a different timezone and uploads based on what is convenient for him, not you. Now, all seriousness aside, I just have to say: good luck on your tests. :P

    • @user-ch5ij5yd2g
      @user-ch5ij5yd2g 5 years ago +1

      @@matilija yomama! :D

  • @pvalpha
    @pvalpha 5 years ago +4

    I always appreciate your analysis. I think you've been on the right track this entire time. Navi... I think AMD is having problems because of the size of the die. Nothing more. I fully expect that the moment AMD can properly chiplet their GPUs, they'll be on far stronger footing even WITH the limitations of chiplet design on their GPU software. Software can be re-engineered well enough, even if AMD's efforts haven't been the best there. As for Intel, I'm praying that they can produce one heck of a GPU. Because 1) Intel integrated graphics sucks, and people who have to use Intel because there's no decent competition (such as mobile) deserve better, and 2) someone's got to compete with nVidia. And Intel taking nVidia off their game will force AMD to redouble their efforts. Because once nVidia mindshare starts dropping, that's an opening for AMD to retake some of that for themselves.
    As for the audio quality of your earlier recordings... Sounds to me like you've learned to enunciate a bit better and got better audio equipment and acoustics where you do your recording. You'd be shocked how room acoustics and audio equipment can really change the way people sound.

    • @defeqel6537
      @defeqel6537 5 years ago

      Do Intel's integrated GPUs really suck, though? Mainly, how is their performance/watt or performance/area when compared with AMD's?

  • @hunterferrick4816
    @hunterferrick4816 5 years ago

    Thank you for your insight in everything that you do. I love your reviews and the knowledge that you bring to the tech world. I'm glad I got into your Patreon Discord as well!

  • @ThePTFOGaming
    @ThePTFOGaming 5 years ago

    Only YouTube channel where I always watch the video to the end, no matter how long it is. Your perspective on the industry is incredibly interesting, as always.

  • @SleepyRulu
    @SleepyRulu 5 years ago +8

    I am excited for Zen 2 news and information.

  • @yottaXT
    @yottaXT 5 years ago +12

    That awkward moment when you publish the best debunker of your own initial theory xD.

  • @refractionpcsx2
    @refractionpcsx2 5 years ago

    I didn't realise how squeaky you used to be, haha; how times have changed! Great video mate. Considering how hard it is to say whether it's gonna suck right now, you made some valid points :)

  • @muziqaz
    @muziqaz 5 years ago

    That voice change though, are you sure it was you? :D Great video as always :)

  • @esdblog6100
    @esdblog6100 5 years ago +3

    Why is AMD sandbagging? Simple: no hype pre-launch is a good thing. Remember Bulldozer; it was not a bad CPU, but the hype was way too high. It was better to blow away the competition at launch.

    • @esdblog6100
      @esdblog6100 5 years ago +1

      @Chiriac Puiu It went up well against the first-gen i7 and was a little behind the second gen, after which progress basically stopped. I have both an i7-920 and an FX-8350, so I can tell you that in real-world single/dual-threaded engineering applications Bulldozer ran way smoother and a bit faster than an overclocked first-gen i7.

    • @esdblog6100
      @esdblog6100 5 years ago +1

      @@theonetruelenny9883 cpu.userbenchmark.com/Compare/Intel-Core-i7-2700K-vs-AMD-FX-8150/1985vs2006
      Roughly 2/3 the performance at 7/10 the price ($245 vs $332). Is it that bad? It is if all you need is the highest benchmark score; for the rest, it is good value for money.
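As a rough sanity check of the comment's value claim (a sketch using only the quoted numbers, not from the original thread):

```python
# Relative value-for-money from the quoted figures:
# i7-2700K at $332, FX-8150 at $245, with the FX at roughly 2/3 the performance.
i7_price, fx_price = 332, 245
perf_ratio = 2 / 3                       # FX performance relative to the i7
price_ratio = fx_price / i7_price        # ~0.74, i.e. roughly 7/10 the price
value_ratio = perf_ratio / price_ratio   # relative performance per dollar
print(round(price_ratio, 2), round(value_ratio, 2))  # -> 0.74 0.9
```

So by these numbers the FX delivers about 90% of the i7's performance per dollar, which is why the comparison reads as "good value" rather than a clear win.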

  • @Rentta
    @Rentta 5 years ago +3

    Don't get a wheel for Forza Horizon. By far the best experience is with a pad (I have both; it suits a pad way better as an arcade racing game).

    • @nickskizekers1906
      @nickskizekers1906 5 years ago

      I second this; the Xbox One pad is awesome for Forza on PC. The trigger rumble is amazing for feeling what your wheels are doing.

  • @franzb69
    @franzb69 5 years ago

    How come your voice got deeper? A change in mic, or did your real voice just come in over the past couple of years?

    • @adoredtv
      @adoredtv  5 years ago +1

      I was deliberately speaking in a higher pitch before because I thought it was easier to understand. Even today I talk in a higher pitch than my natural voice I think.

    • @franzb69
      @franzb69 5 years ago

      @@adoredtv Maybe try it out. We might like it.

  • @Andrei-ck7lv
    @Andrei-ck7lv 5 years ago

    At @23:32, what do you mean by R9 2700x?
    Later edit: Nvm, I think it was just a typo. For a second I actually believed there was a new processor I didn't know about :D

  • @w04h
    @w04h 5 years ago +5

    8:32 This is what triggers me the most about mainstream benchmarking YT channels that compare CPUs at ultra settings - 60 fps at best and stable 99% GPU utilisation... and then they base the winner on a 2-4 fps difference.

  • @gogo8092201
    @gogo8092201 5 years ago +7

    Hmm. You do a pretty good job of playing devil's advocate to yourself. Might be time to team up with another YouTuber that does analysis. Maybe rapid-fire, debate-like videos launched on each of your channels. I don't know who that would be, but people like conflict, and the adversarial system is very good at helping others arrive at a conclusion. Hell, even if someone agrees with your analysis, they can go all-in devil's advocate like a lawyer.
    I like that you tested the 2+2 and 4+0 core configurations. I was wondering in a previous video how that would affect performance in different workloads, since that could be a method of segmentation. (This chip has 8+0 and is $110, this one has 4+4 and is $100, etc.)

  • @juanalonso4753
    @juanalonso4753 5 years ago

    What if you ran the racing benchmark with the Intel equivalent to what AMD showed in the demo, to see more or less where the 2700X lies against the faster Intel chip, and compared it with the demo that AMD showed?

  • @Austin1990
    @Austin1990 5 years ago

    That was a very fun video! I love the optimist + critic approach.

  • @DinoSabanovicRandomDCRO
    @DinoSabanovicRandomDCRO 5 years ago +4

    I make templates for DaVinci Resolve, and what AMD gave me is an 8-core, 16-thread CPU for $200. I could only dream of that 3 years ago! Also an RX 580 8GB for $120? Yes! Thank you, AMD.

  • @syncmonism
    @syncmonism 5 years ago +8

    I never knew that Jim used to be a hobbit! XD OMG, his voice was so much more high-pitched back then.

  • @przemysawukawski4741
    @przemysawukawski4741 5 years ago

    Hi, many times in Zen vs Core architecture reviews I see mentions of AVX; however, in almost none have I seen mentions of the SHA extensions all Zen CPUs possess... I think this could be quite interesting for you to dig into. In a nutshell: the Intel SHA Extensions are supported by almost all cryptographic libraries, but until now almost no Intel CPU has had those instructions implemented. They improve the performance of SHA-256 by a lot, and as a result they make Zen-based CPUs ideal for web servers with TLS/SSL traffic and for all other encryption-related workloads.
    What is important about SHA-256 is that, in contrast to AVX, the extensions are supported by almost all software that uses the SHA-256 algorithm.
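To illustrate the commenter's point about transparent library support (a sketch, not from the thread): CPython's hashlib typically delegates to OpenSSL, which picks the hardware SHA instruction path at runtime when the CPU reports it, so unmodified code like this simply runs faster on a chip with the SHA extensions.

```python
import hashlib

# Same code everywhere; the library selects the hardware-accelerated
# SHA-256 implementation at runtime if the CPU supports it.
digest = hashlib.sha256(b"hello").hexdigest()
print(digest)  # -> 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```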

  • @TheReal_ist
    @TheReal_ist 5 years ago

    3:40 HE'S SOO YOUNG and adorable sounding. lmao
    Soooo cool to see how much you've matured as a person along with your content as well. Good shit mate :)