Analysing Navi - Speculation and Leaks - Part 1

  • Uploaded 17. 04. 2019
  • What is Navi likely to be?
    ♥ Subscribe To AdoredTV - bit.ly/1J7020P
    ► Support AdoredTV through Patreon / adoredtv ◄
    Bitcoin Address - 1HuL9vN6Sgk4LqAS1AS6GexJoKNgoXFLEX
    Ethereum Address - 0xB3535135b69EeE166fEc5021De725502911D9fd2
    ♥ Buy PC Parts from Amazon below.
    ♥ NEW USA Store! - www.amazon.com/shop/adoredtv
    ♥ Canada - amzn.to/2ppgYsX
    ♥ UK - amzn.to/2fUdvU7
    ♥ Germany - amzn.to/2p1lX6r
    ♥ France - amzn.to/2oUAK2Z
    ♥ Italy - amzn.to/2p37Uui
    ♥ Spain - amzn.to/2p3oIBm
    ♥ Australia - amzn.to/2uRTYb7
    ♥ India - amzn.to/2RgoWmj
    ♥ Want to help with Video Titles and Subtitles?
    czcams.com/users/timedtext_cs_p...
    -- Video Links Below --
    www.pcbuildersclub.com made the Navi cover image.
  • Science & Technology

Comments • 791

  • @kendallpino3524
    @kendallpino3524 5 years ago +453

    "I am gonna be brief" - almost 30 min video 😂😂

    • @Najvalsa
      @Najvalsa 5 years ago +72

      "Part 1" haha

    • @oldtimergaming9514
      @oldtimergaming9514 5 years ago +28

      Yeah, but the time, even at 30+ minutes, seems to go by so fast.

    • @pradeepkumar-qo8lu
      @pradeepkumar-qo8lu 5 years ago +9

      28:34 minutes is brief compared to his other analytical videos

    • @rackneh
      @rackneh 5 years ago +1

      Yeah, he spends half the time saying that he was never wrong to begin with; that's about half of every video 😂😂

    • @peterjansen4826
      @peterjansen4826 5 years ago

      Fortunately there are still some people on YouTube who consider that brief. :)

  • @MarceloTezza
    @MarceloTezza 5 years ago +392

    The moment you keep looking at the timestamp because you don't want the video to end.

  • @antreastoumazou2736
    @antreastoumazou2736 5 years ago +365

    Don't listen to them haters boss! We're here for only three things:
    Your leaks
    Your accent
    And your long analysis videos
    You're doing bits man. Keep em coming xD

    • @DJ_Dopamine
      @DJ_Dopamine 5 years ago +13

      This guy has the biggest leaks since the Titanic 😂

    • @eabelcourt
      @eabelcourt 5 years ago +4

      Jim's accent keeps my daughter enthralled for a whole 30 minutes, which is a goddamn miracle; keeping her attention for any longer than 30 seconds is rare!

    • @_BangDroid_
      @_BangDroid_ 5 years ago

      It's funny you mention accent, I find myself doing Jim impressions throughout the day.

    • @billschauer2240
      @billschauer2240 5 years ago

      What accent?

  • @604RPM
    @604RPM 5 years ago +302

    Oh, come ON, Jim! Bad news BEFORE good news! Everybody knows that!

    • @ypsilondaone
      @ypsilondaone 5 years ago +11

      I was really excited right up to the point where he said that :/

    • @aflyingmodem
      @aflyingmodem 5 years ago +7

      I'm calling it now: either Navi is going to be delayed, Navi will only reach 1070 Ti level with the special edition only reaching 1080 level, Navi will simply be a more refined Vega on 7nm, or some combination of the above.

    • @Justchuck69
      @Justchuck69 5 years ago +8

      @@aflyingmodem The Vega 56 is now on par with the GTX 1070 and the Vega 64 is on par with the GTX 1080, so if they have no performance gains then the power requirements would be way lower than in Jim's charts here! So I'm guessing you would be wrong!

    • @fahim2690-b2r
      @fahim2690-b2r 5 years ago +10

      @@Justchuck69 O rly czcams.com/video/aqxL27pmYpE/video.html, Vega 56 is almost always faster than a 1070 and barely behind a 1080. Anyways I expect the top-end (of the midrange lineup) Navi to be around the 1080/Vega 56 + 10% level.

    • @grizzly6699
      @grizzly6699 5 years ago +2

      @@ypsilondaone I'm not sure I want to watch part 2 with the bad news now :( What are you doing to us, Jim!

  • @imergence9628
    @imergence9628 5 years ago +187

    It's always a good day when Jim uploads

  • @BlueTJLP
    @BlueTJLP 5 years ago +62

    I gotta give credit where it's due, Jim. I really love the way that the different segments of your videos are always so coherent and how they all tell one story by referencing previous segments. That is what makes it so pleasant to sit through 30 minutes of a video every single time.

    • @baronvonlimbourgh1716
      @baronvonlimbourgh1716 5 years ago +9

      That is probably also why we only get one video a week. Writing a good script is a lot of work.
      I bet it takes him days and many, many rewrites and shuffling stuff around.
      But it does pay off imho. No matter how difficult or technical the subject, it is always very clear and easy to understand.
      A lot of channels could learn a lot from this.

    • @adoredtv
      @adoredtv  5 years ago +10

      The research is a day or two, the script a day or two, and recording and editing at least a day and often two. Overall I'm really under pressure to keep delivering one of these every week, now that I take every weekend off (in theory) ;)

    • @baronvonlimbourgh1716
      @baronvonlimbourgh1716 5 years ago +6

      @@adoredtv In theory... I know how you feel lol.

  • @seylaw
    @seylaw 5 years ago +185

    The end sounds gloomy. Is it Vega all over again? Too little, too late? Do they need yet another respin? What a cliffhanger over the Easter holidays...

    • @CaveyMoth
      @CaveyMoth 5 years ago +37

      Holy crap, this is like the ending of Infinity War.

    • @ConorDoesItAll
      @ConorDoesItAll 5 years ago +32

      Probably too little, too late. Nvidia will probably just release the 2070 Ti and start getting the RTX 3XXX series ready on 7nm, which will unfortunately blow AMD out of the water.

    • @freepok
      @freepok 5 years ago +32

      @@ConorDoesItAll A 2080 Ti on 7nm would destroy everything, ouch. Let's hope they can do something; I really want to buy AMD for my next GPU.

    • @seylaw
      @seylaw 5 years ago +6

      And there is more competition coming on the horizon: Intel. So AMD doesn't have much time to get their GPU roadmap execution in order if they want to stay relevant in that market.

    • @oldtimergaming9514
      @oldtimergaming9514 5 years ago +11

      @@ConorDoesItAll Also remember that power savings and frequency boosts are slowing down, so I don't think it will be as big a jump as you think. You also know that they will raise prices by 50 percent while fps goes up by, say, 20 percent. Nvidia is digging a hole, slowly, over time. Wallets can only stretch so far.

  • @Rhythmattica
    @Rhythmattica 5 years ago +5

    I've been involved with computers since 1980, though mainly music and games...
    So I'm quite comfortable with the basics, but your channel, covering possible upcoming tech with such deep analysis, is so well researched and concise that even as a layman everything makes absolute sense, because you show and explain your sources. I learn so much.
    Seriously, this is the best channel in its field, actually across any medium.
    I can't comprehend the time it must take you, but know it's very appreciated.
    Love your work.
    Disclaimer:
    My in-laws are Scottish, but that has no bearing on my opinion.
    Though Scots are awesome!

  • @Taliyon
    @Taliyon 5 years ago +5

    The Navi saga has been such a roller coaster. I can't believe how much I've learned from following the ridiculous development cycle. This has all been such a treat!

  • @DrearierSpider1
    @DrearierSpider1 5 years ago +119

    I don't care what AMD's GPU naming scheme is, so long as it stays consistent. I bet you a bunch of lost sales are because people don't understand the product tiers after HD 7XXX became R# 2XX/3XX/Fury became RX 4XX/5XX became Vega XX became Radeon VII. FFS STICK WITH ONE NAMING SYSTEM AMD!!!

    • @SpectrumTwist
      @SpectrumTwist 5 years ago +5

      To be fair, both Nvidia and AMD/ATI, among a few other easily forgotten manufacturers, and hell, even Intel, have naming conventions that have constantly been a wild ride of absurd or arbitrary values. We had the TNT series before moving to GeForce... then the numerous models and numbers within each release series, initially 1 up to 3, before the 4 series started throwing extra numbers into the mix for additional models, then the 5 series as FX, which basically dropped the single-digit series number in favour of just going "5xxx"... followed by something more followable from the GeForce 6xxx all the way up to the 9xxx, before Nvidia spun its neck around and rapidly launched a short-lived 1xx series that was mostly a rebrand of the 9xxx, which was itself somewhat a rebrand of the 8xxx. From there we saw the 200-700 series, then a bit of a leapfrog over the 800 (much akin to why AMD leapfrogged the HD 8000 series), then the 900, before venturing into the 1000s. ATI/AMD have done much the same thing; it has just seen a lot of changes again in the last few years. Intel is certainly guilty of the confusion too.

    • @Tudorgeable
      @Tudorgeable 5 years ago

      The guys buying Vega and Radeon VII are probably just a handful of people. I agree, the switch to R7/R9 was dumb.

    • @DrearierSpider1
      @DrearierSpider1 5 years ago +14

      @@SpectrumTwist I disagree. Nvidia and Intel have definitely changed product names, but not nearly to the degree of AMD GPUs. Going from GTX 7XX to GTX 9XX isn't that confusing, and it's clear which one is newer. Intel Core iX has also been the same for almost a decade. All the changes I listed happened in the last ~7 years, and I even forgot Fury.

    • @Chuckiele
      @Chuckiele 5 years ago +2

      @@DrearierSpider1 There wasn't really anywhere to go after the Radeon HD 7XXX. They would eventually run out of 4-digit numbers, and 5 digits are dumb, so they went back down to 3. Intel is facing the same problem soon: there isn't much to come after a 9900K, and a 10900K would just sound insanely stupid. Same problem Nvidia had after the GTX 1000 series: going with 2100, 2200, etc. would seem like a lack of progress, so they went with 2000 and probably 3000. Just the 1660 doesn't fit in there at all, and the next generation will probably be even weirder. You can't keep a single naming scheme for 20 years; it will eventually run out of numbers.

    • @TheCountess666
      @TheCountess666 5 years ago +1

      It's RX 400/500/Vega, so exactly the same as the last one.
      Radeon VII is the only odd one out, and probably not originally planned.

  • @sergeys4617
    @sergeys4617 5 years ago +14

    Man, you're killing us

  • @Bobcat665
    @Bobcat665 5 years ago +9

    The next architecture IS NOT going to be called "Arcturus"? Are you SIRIUS???

  • @kianmoiny7860
    @kianmoiny7860 5 years ago +69

    “This video was the last of the good news on NAVI” 🎤⬇️
    Love the new Outro btw

  • @kojack57
    @kojack57 5 years ago +35

    I've said it before and I'll say it again: ATV's "AMD's Master Plan" video has it all. Even if you aren't in the slightest bit interested in this area, it is still a must-watch. I should know; half the views are mine. Not really, but quite a few are. An amazing piece of work.

    • @-GameHacKeR-
      @-GameHacKeR- 5 years ago +1

      The other half is probably mine. If Jim could only see how many times I've watched his videos, he'd be shocked XD

    • @billschauer2240
      @billschauer2240 5 years ago +1

      @@-GameHacKeR- Nope, mine.

  • @guidosalducci2553
    @guidosalducci2553 5 years ago +31

    Jim, your uploads are fantastic. I love the information and the speculation that you put out there. It's better than drama sometimes.

  • @Leyvin
    @Leyvin 5 years ago +8

    18:56
    The maximum CU values are: Navi 10 / 60 CU, Navi 12 / 40 CU, Navi 16 / 20 CU.
    As a note, this is because the GCN 2.x architecture uses 5 cores per cluster instead of the 4 cores per cluster of the GCN 1.x architecture.
    As there are 4 compute pipelines on 4-6 cluster sets, the maximum the architecture itself can support is 80 CU (4x4x5) for GCN 2.x and 64 CU (4x4x4) for GCN 1.x per monolithic SoC (see the sketch after this comment).
    As a keynote, this is why Radeon VII / Instinct are both 50/60 CU, *NOT* 56/64 CU.
    In fact, for all intents and purposes Radeon VII is Navi 10, the difference being that it's a monolithic design (the I/O control being 7nm, which isn't optimal, flanked by the 4 compute cluster cores)... whereas for Navi these are not part of the SoC "chiplet" and will likely be 14nm, as that is cheaper to produce and compatible with Zen(2).
    This doesn't mean it will be "smaller", however, as AMD RTG have let slip that they've reduced the latency on Infinity Fabric v2.0 to where it can be used for high-performance applications (hence why the Zen 2 chiplets don't cross-talk)... so it will likely be an L3-style (HBM) cache.
    As a result, this would provide 2 benefits: one, they could produce the compute clusters as chiplets rather than a monolithic SoC (i.e. 1-3 compute chiplets), scaling by adding or removing these as required... and two, with the additional space and the larger (cheaper) 14nm process, they can include something similar to the Terascale Ultra-Threading Processor within the control I/O; something they've essentially not had the room or complexity budget for with the monolithic designs.
    After all, even an "all-in-one" control I/O is going to be a fraction of the size for a GPU compared to a CPU.
    (Based on the size of the HBM2/GDDR5 one in Vega / Polaris, we're talking about half the size of the current Ryzen 3rd Gen control I/O... that's a lot of spare space. And remember this is because a lower number of chiplets means fewer overall memory interfaces are required, plus the Ryzen control I/O needs the whole northbridge in there as well.)
    Having a thread management engine within the control I/O could mean better utilisation of the available hardware threads without strictly any architecture changes and without highly skilled low-level graphics programming, especially if it includes machine intelligence like Zen's that optimises branch prediction over time, as well as better L3/L2 utilisation.
    That matters, given memory calls are the biggest CPU/GPU pipeline bottleneck due to latency.
    Remember, part of the point of 5 cores per cluster is that typically unused threads can be used, without affecting the standard 4 graphics pipelines, for say INT8 or FP16, resulting in potentially better performance when optimised; and a thread management engine that automatically optimises in-line, say converting colour calls to FP16 / INT8 when typically they're not, would free up not just an FP32 pipeline but allow 2-4 to be rapid-packed, further increasing the throughput.
    A bit like having SMT / Hyper-Threading for the GPU, which is essentially what NVIDIA's GigaThread engine already does.
    Navi is of course the 3rd-generation GCN 2.0 architecture; this means 'Shiva' does end up being a "new architecture", although the claim that it's the "end of GCN" is hyperbolic.
    I think what is likely meant by this is that, unlike GCN 2.0, it's not going to be compatible with GCN 1.0... and will instead focus entirely on expanding the GCN 2.0 concept.
    This will almost certainly be expanded with the addition of fixed-function pipelines for things such as ray tracing or machine learning.
    In essence, a modernised Terascale stripped of all its graphics pipeline elements down to pure mathematical co-processing alongside the compute cores (and I almost guarantee it will follow the VLIW/5 approach, i.e. 4 general-purpose + 1 fixed-purpose pipeline).
    I mean, this makes the most sense, as GCN compute units are just infinitely scalable; what is needed going forward are elements that either have to be designed as a secondary core (a la Turing), which has issues with data sharing, or done like Zen's 2 additional "half pipelines" for SMT. And which will make the most sense for AMD to follow here?
    What they might do is push this out to the workstation / developer market first within this generation of Navi, knowing that when they release the consumer version in 2020, developers will already have a handle on it and be ready to deliver software capable of taking advantage of it (again, learning from NVIDIA's mistake).
    Essentially, NVIDIA "jumping the gun", as it were, will ultimately work out to AMD releasing a (full) product range that supports it just as it starts being utilised.
    < • >
    Honestly, I expect the architecture and hardware will be basically in line with where they should be in terms of generational improvements and innovation... I also expect it to be seen as "underwhelming"... but frankly this has much more to do with the fact that AMD doesn't have an issue with their hardware, but with their marketing and branding.
    *THAT* is what they really need to work on in order to change public perception of them and not simply be seen as the "cheap alternative".
    I've seen quite a few claim "oh, well, they need to release something with either outstanding performance or an outstanding price", but has that ever worked in the past?
    No, of course not. Even if it had the potential to, which I doubt, just look at NVIDIA's behaviour over the past 4 years, ever since Polaris spooked them.
    They fell back onto their tried and true playbook: "don't control the hardware, control the ecosystem"... which forces people into NVIDIA's ecosystem.
    It wasn't just Polaris that spooked them, but also the low-level APIs (Vulkan and Direct3D 12), which they simply couldn't control. So what did they do? They heavily incentivised developers to remain on DirectX 11, pandering to the fact that most developers / publishers don't exactly want to change their development toolchains / ecosystems.
    AMD, with their GPUOpen initiative, _could've_ corrected this, but they've arguably all but abandoned it, just as they've done with so many good ideas over the years.
    It's like they keep being dealt royal flushes, yet still fold to NVIDIA's bluff. And this is behaviour they HAVE to stop, because they're not really competing with NVIDIA but with themselves.
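
    A minimal sketch of the CU-ceiling arithmetic above, for reference (the pipeline/cluster figures are this commenter's speculation, not confirmed AMD specs):

    ```python
    # Claimed ceiling: compute pipelines x cluster sets x CUs per cluster.
    def max_cus(pipelines: int, cluster_sets: int, cus_per_cluster: int) -> int:
        return pipelines * cluster_sets * cus_per_cluster

    print(max_cus(4, 4, 4))  # 64 CU ceiling claimed for "GCN 1.x" (4 CUs/cluster)
    print(max_cus(4, 4, 5))  # 80 CU ceiling claimed for "GCN 2.x" (5 CUs/cluster)
    ```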

  • @neerithedragon298
    @neerithedragon298 5 years ago +11

    Enjoy your break in the Swedish wilderness!

  • @damienjeremytrotman94
    @damienjeremytrotman94 5 years ago +5

    When I see an AdoredTV Update... I get excited like a kid when his dad comes home from work.

  • @maynardcrow6447
    @maynardcrow6447 5 years ago +8

    I said months ago that they should market Navi and Ryzen together as a "perfect package." Let Navi hop on the success train that Ryzen is cruising on. "Rydeon"

  • @FuriosoBC
    @FuriosoBC 5 years ago +3

    Jim, well done - looking forward to part 2. Wishing you a restful and enjoyable break!

  • @MooresLawIsDead
    @MooresLawIsDead 5 years ago +9

    Another great video, Jim! It sounds like I/O chiplets are indeed very likely for Navi. I really think there is a good chance that the RX 3080 is a cut-down Navi 10 die with a 256-bit I/O, the PS5 is a slightly less cut Navi 10 die paired with a 320-bit I/O, and that "special edition" card is the RX 3090 with the full Navi 10 die and a 384-bit I/O.
    I gotta say that I won't be entirely surprised if Navi stops at 64 CUs too, but I will find it a bit odd and disappointing. I definitely thought it would be more likely than not that Radeon would want to go at least a bit above 64, to say 80 or so CUs. That way they could at least cut down the card to something a tad above 4096 for gaming, and sell the full dies to other customers until yields improve.

  • @gavdunnin
    @gavdunnin 5 years ago +3

    Hope all is well; still looking forward to the upload and appreciate all the work you must put into these vids.

  • @The_Nihl
    @The_Nihl 5 years ago +7

    hmmm...waiting for part 2!

  • @RealDukeOfEarl
    @RealDukeOfEarl 5 years ago +10

    Always utterly fascinating, even stuff I'm not interested in is interesting when this channel covers it.

  • @Najvalsa
    @Najvalsa 5 years ago +13

    If those Navi specs are true, then they seem reasonable to me. Albeit the ROI at those MSRPs isn't looking too hot, although that might not matter if their die is "scalable" (whatever that ends up meaning).
    I say this because of what we know about SS/GF 14nmLPP to TSMC 7nmHPC:
    Area reduction = 0.68x (32%). The maths, for anyone wondering, is (((((9.5 / 16.6 * 100) + 100) / 2) + (9.5 / 16.6 * 100)) / 2) / 100. For Maxwell to Pascal, for example, change 9.5 and 16.6 to 18.3 and 27.5. These numbers are the actual sizes of the nodes, not the marketed sizes.
    Power reduction = 0.5x (50%).
    Performance increase = 1.25x (25%).
    As far as TDP:
    V64 with a tightened v/f curve has a GPU-only power draw of ~170W at ~0.95V average; call it 180W.
    A 64 CU P10 with a tightened v/f curve would have a GPU-only power draw of ~180W too.
    I have several formulas for figuring out TDP, but the one that seems to be the most accurate across different uarchs ends up with a TDP of ~160W. The maths of it is:
    (180W / 64 14nm CUs * # of 7nm CUs required to reach equal performance to 64 14nm CUs) / 1.25x performance increase + 50W (RAM and misc components) = TDP.
    The parity CUs at 7nm compared to 14nm CUs are found by taking the effective CU performance at 14nm and dividing it by the performance increase from node to node (1.25x).
    For V64, the effective CU performance is more like 60 CUs, due to a ROP (and maybe something else) limitation. So instead of a V64 having a clock-for-clock performance advantage of ~14% over a V56, it's actually more like half that, at ~7%. So if the hardware specs of a V56 were scaled up linearly to match the per-CU performance of the V64, it'd end up at 60 (56 * 1.07) CUs.
    Therefore it'd take 48 (60 / 1.25) CUs at 7nm to have the same performance as a V64.
    Add a 10-15% IPC increase, which is similar to what we saw from Fiji to Vega, and the rumoured performance is achieved.
    The complete calculation then ends up being:
    (180W / 64 CUs * 48 CUs) / 1.25x + 50W = 160W TDP.
    There is another, maybe more comfortable, formula that goes:
    180W * 0.5x * 1.25x + 50W = 160W TDP also.
    Averaging out the results with the four other formulas ends up at 160W TDP.
    Either I'm overshooting, or the rumoured TDP is too low. But either way, it's close enough.
    As far as die size, there are two possibilities, because a P10 64 CU die would actually be ~100mm2 smaller than a 64 CU V10 die.
    A 64 CU N10 die would then be:
    V10 = 486mm2 * 0.68 = 331mm2 (same as V20, unsurprisingly)
    P10 = 386mm2 * 0.68 = 263mm2
    For a 48 CU N10/3080 die to be at equal performance with a V10 or P10 die, it'd be:
    V10: 331mm2 / 1.25x = 265mm2
    P10: 263mm2 / 1.25x = 210mm2
    Based on V10, the 3080's effective die cost comes in at ~$90, and based on P10 it comes in at ~$60. Comparatively, the 580's die costs ~$25, and the V64's ~$60.
    Adding up the component costs, I think the pricing would be closer to $300 for the 3080, but there are so many variables that it's close enough to $250 for it to be possible.
    They could also go the GP100-to-GP102 route and remove the ~20% of FUs (function units) unnecessary for gaming, which would drop the die costs by ~$20-30, but who knows if that's possible.
    On the APU side, without the I/O, they could do an ~80mm2 chiplet with potentially up to 20 CUs, which would pull ~45W at similar clocks to the 3080.
    Going the GP100-to-GP102 route would increase the CU count by four, but also the TDP by ~10W.
    EDIT: I missed the obvious.
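
    The napkin maths above as a minimal sketch in code (all inputs are this commenter's estimates and rumoured figures, not AMD data; area_factor reproduces their node-scaling average, tdp_estimate their TDP formula):

    ```python
    # Node-scaling average: linear ratio of "actual" node sizes, averaged
    # twice against 100% (9.5 and 16.6 being the commenter's figures for
    # TSMC 7nm and SS/GF 14nmLPP respectively).
    def area_factor(new_node: float, old_node: float) -> float:
        ratio = new_node / old_node * 100
        return (((ratio + 100) / 2) + ratio) / 2 / 100

    # TDP formula: power per 14nm CU, times the 7nm CUs needed for parity,
    # divided by the 1.25x performance gain, plus RAM/misc overhead.
    def tdp_estimate(power_14nm_w: float, cus_14nm: int, cus_7nm: int,
                     perf_gain: float = 1.25, misc_w: float = 50.0) -> float:
        return power_14nm_w / cus_14nm * cus_7nm / perf_gain + misc_w

    print(round(area_factor(9.5, 16.6), 2))  # 0.68 -> ~0.68x area at 7nm
    print(round(tdp_estimate(180, 64, 48)))  # 158  -> ~160W TDP
    ```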

  • @CharcharoExplorer
    @CharcharoExplorer 5 years ago +68

    I want 8 geometry engines, 4096 cores, 2GHz clock speed, and 128 ROPs.
    Radeon VII and Vega are limited in geometry and ROPs (less so, to be fair). They have BAD CU utilization in games. Even Wolfenstein 2 doesn't use all of Vega's or Radeon VII's capability; it has a lot of idle cores and idle time.
    All these wants are more than easy on 7nm.

    • @adi6293
      @adi6293 5 years ago +7

      Definitely 128 ROPs. I moved from Vega 64 to Radeon VII and it was a good move, but I still need more power for 4K 60 FPS Ultra 😅

    • @CharcharoExplorer
      @CharcharoExplorer 5 years ago +4

      @@adi6293 The geometry engines will help more :P but yeah.

    • @CharcharoExplorer
      @CharcharoExplorer 5 years ago +5

      @@MaggotCZ
      Big Navi obviously.
      Polaris is in consoles (x1X) and Polaris-Vega is in PS4 Pro yet RX 590 and RX Vega 64 LC exist. So yeah...

    • @SpectrumTwist
      @SpectrumTwist 5 years ago +4

      This may seem completely off the wall, but there was a mild discussion recently about AMD ramping up the sensors in their CPUs and now GPUs (as seen on the R VII); supposedly, better utilization information for individual CUs is potentially down the pipe. Would this be in Navi? I doubt it, but it's not outside the realm of possibility given the change from Vega 56/64 to the Vega used in the R VII. If AMD is able to provide a breakdown of individual CU utilization (since it would be impractical to get utilization of all the SPs themselves, especially as they increase in number), this could help alleviate the utilization issue by showing exactly what is going on to a far greater level of accuracy. It makes sense that they could do this; why haven't we had this as an option already, seeing as CPUs, even with SMT, have utilization information available for every thread individually? If anything, they should be able to report utilization based on just the 4x ACE. This is made more plausible by the fact that even Microsoft has only recently included a GPU utilization graph in Task Manager with the newer WDDM; there's no real reason why we couldn't have a more detailed breakdown with multiple graphs on the GPU utilization page, akin to showing more cores/threads on the CPU performance tab.
      I can't see Navi pushing ROPs beyond 64, not unless we were to see a bump in CU count beyond 64... but again, there is nothing to suggest we'd see more than 64 CUs in anything other than the top-end Navi anyway.

    • @heniekgoab8746
      @heniekgoab8746 5 years ago

      64 ROPs are more than enough, even for HDR 4K

  • @pappeoeki
    @pappeoeki 5 years ago +39

    Another interesting and much appreciated video! Thank you for the hard work!

  • @Spikeypup
    @Spikeypup 5 years ago +5

    Thank god you posted this, I was hungry for more Navi info as we get closer, and I missed the wonderful sound of your voice...so soothing...I needed it! Thanks buddy!

    • @Spikeypup
      @Spikeypup 5 years ago +2

      Jim, I dunno about you, but I'm finally excited with REAL tinglies in my tummy about what Navi may hold for us... if the prices and performance are what we are expecting, we're looking at one hell of a battle come 2020 with Intel Xe on the table... Great work as always my friend.

  • @loganwolv3393
    @loganwolv3393 5 years ago +42

    A $430 GPU that is only 15% slower than a $1200 GPU? Not bad, I hope it's true.

    • @Oscar4u69
      @Oscar4u69 5 years ago

      me too :/

    • @pedrosoares7273
      @pedrosoares7273 5 years ago +2

      Think you got that wrong; the RTX 2080 is roughly $750, the Ti version is $1200.

    • @LiLBitsDK
      @LiLBitsDK 5 years ago +11

      @@pedrosoares7273 Didn't it say RTX 2080 + 15%? That means it is approx 15% slower than the 2080 Ti, no? A $1200 card?

    • @coolbeans6148
      @coolbeans6148 5 years ago +4

      More like 10% slower.
      He said Navi 20 is RTX 2080 plus 20%, and the RTX 2080 Ti is 30% faster than the 2080... (see the arithmetic below)

    • @loganwolv3393
      @loganwolv3393 5 years ago +1

      @@coolbeans6148 I thought the RTX 2080 Ti was 35% faster than the RTX 2080, but whatever, it's a small margin anyway.
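
    A quick check of this thread's arithmetic (the +20% and +30% figures are the commenters' readings of the rumours, not confirmed benchmarks):

    ```python
    # If Navi 20 = 2080 x 1.20 and the 2080 Ti = 2080 x 1.30, then
    # relative to the Ti, Navi 20 would be:
    navi20 = 1.20  # performance relative to an RTX 2080
    ti = 1.30      # performance relative to an RTX 2080
    print(f"{(1 - navi20 / ti) * 100:.1f}% slower")  # 7.7% slower, i.e. ~8%
    ```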

  • @wizpig64
    @wizpig64 5 years ago +11

    *looks up from the mueller report*

    • @TheTopMostDog
      @TheTopMostDog 5 years ago

      Does it really fucking matter? Does public uproar even really do anything, one way or another? Things will continue of their own volition whether or not you read into other people's garbage drama. The sooner you rid yourself of it, the better off you'll be.

    • @jtlinux7174
      @jtlinux7174 5 years ago +2

      Pauses Mueller report, reads AdoredTV.

    • @WinterCharmVT
      @WinterCharmVT 5 years ago +1

      Nope. There is lots of evidence of collusion, just not enough for an indictment.

    • @TheTopMostDog
      @TheTopMostDog 5 years ago

      Fucking plebs

    • @oldtimergaming9514
      @oldtimergaming9514 5 years ago +2

      @@WinterCharmVT Turn off CNN, your brain is fried.

  • @tonkatoytruck
    @tonkatoytruck 5 years ago +20

    I have been waiting for your update on Navi with baited breath. And you would think I would be disappointed that we only got half the story, but I am not. Now I have something else to look forward to, since AMD's product release has been delayed. I will freely admit I have rooted for AMD since the early days and have continued to give them my patronage out of pure principle. Now we will just have to sit back and see if even the most die-hard Intel fanboys see the light, because they seem to blindly spend exorbitant amounts of money on Intel products to date. Thanks for all your hard work. I enjoy every edited minute of it, and I know that is a lot of work in itself, not to mention the research and due diligence required.

    • @momo-the-unknown1589
      @momo-the-unknown1589 5 years ago +1

      Baited breath is breath that smells like fish? Or did you mean bated?

  • @M00_be-r
    @M00_be-r 5 years ago +13

    Cheers Jim, going to watch just before bedtime. Have a good Easter/weekend.

  • @superhk19
    @superhk19 5 years ago +10

    When is part 2?

  • @Loosmoose942
    @Loosmoose942 5 years ago +14

    When is Part 2 coming? He said this week XD

    • @tockar
      @tockar 5 years ago +7

      The video is delayed, just like Navi: the first bad news. :D

  • @wb3123
    @wb3123 5 years ago +17

    I hope the performance of Navi in these rumors is true. It would shake up the PC market. Zen2+Navi would become a match made in heaven.

    • @imo098765
      @imo098765 5 years ago +2

      That has been done before: we got GTX 980 level performance in the GTX 1060. If we look at the GTX 1080 vs this $250 Navi card, it's the same situation, if it's true.

    • @wb3123
      @wb3123 5 years ago

      @@imo098765 Yeah, you are right; it is about time we had another good upgrade like the RX 480/GTX 1060.

    • @ericliu8434
      @ericliu8434 5 years ago

      @@imo098765 That's because 28 to 16nm was an incredible node shrink. 14 to 7nm is kind of a mediocre node shrink, at least for AMD. E.g. the 7nm Radeon VII only matches the 16nm 1080 Ti, for crying out loud.

    • @imo098765
      @imo098765 5 years ago

      @@ericliu8434 If I'm not mistaken, going from 14nm to 7nm is just as big as 28 to 16, because it is half the size. So it is relatively a bigger decrease.

    • @ericliu8434
      @ericliu8434 5 years ago

      @@imo098765 The math works out that way. But as the saying goes, "it's not about what transistor density you have, it's about how you use it", or something like that. AMD themselves projected 25% more performance at the same power at 7nm, well below what Maxwell to Pascal was. And Radeon VII is the proof of that. Nvidia's R&D team is vastly larger and better funded; the massive amount of architecture-level optimization that goes into modern Nvidia cards is why they get so much more out of node shrinks. AMD isn't going to solve this problem until they start making billions more dollars.

  • @rjeftw
    @rjeftw 5 years ago +1

    Looking forward to the next video! Thanks for always trying to get us these juicy details!

  • @TheVeratrix
    @TheVeratrix 5 years ago +13

    We know you're a pro Jim. Don't let a few fools absorb too much of your time and energy. Ignore them if you can, defend yourself if you must.

  • @rishirajrajkhowa1648
    @rishirajrajkhowa1648 5 years ago +4

    Man, you are killing me, it's taking so long!!!

  • @Ray-dx2pf
    @Ray-dx2pf 5 years ago +3

    Oh no, bad Navi news!? You've got me on the edge of my seat, Jim.

  • @GeoffCoope
    @GeoffCoope 5 years ago +21

    WE ARE LEAK STARVED. Liked video 2 seconds in.

  • @Kneedragon1962
    @Kneedragon1962 5 years ago

    Very interesting. Thank you Jim, awaiting your part 2 after Easter with real anticipation...

  • @DeadNoob451
    @DeadNoob451 5 years ago +1

    Thanks for your coverage once again. And don't worry too much about the press doubting you or your info; I'm sure most of us don't care what they think. I, for one, come here first for my tech stuff precisely because the info is so fresh that it may still change before release, and for the high-quality analysis/history videos.

  • @mendez256
    @mendez256 5 years ago

    I just spent a week or so binging everything from the Polaris build-up until now on my daily commute. It is amazing to see how far you have come and the kind of contacts you have gained. I never would have thought back in 2016 that you'd be fed such amazing info by inside sources.
    It was very interesting going back to rewatch the older stuff with hindsight. Many laughs were had at some of the earlier speculation and analysis leading up to launches, but those all faded. I wasn't checking as I went through, but it sure feels like your speculation on future events has become a lot more accurate as time has gone on.
    But after all of that content, all I can say is: more, please.

  • @HuntaKiller91
    @HuntaKiller91 5 years ago

    Thanks for accompanying me along the road with all this info... I'm on the way from my home to my parents' home for literally 30 minutes, and this vid really made me excited for these Navis... especially since our household PC and laptops are currently all AMD hardware... that Navi 20 CU APU will be well suited to replace my 2200G 4K HTPC.

  • @jiwookim348
    @jiwookim348 5 years ago +8

    I've been here far too often, I want part 2 :(

    • @adoredtv
      @adoredtv  5 years ago +3

      Working on it, I had a week off last week instead.

  • @hakis86
    @hakis86 5 years ago +9

    Part 2 next week, you said... it is now Sunday... I need it! :D :O

    • @adoredtv
      @adoredtv  5 years ago +3

      Working on it, I had a week off last week instead.

    • @hakis86
      @hakis86 5 years ago

      @@adoredtv Well deserved! I'll stock up on popcorn meanwhile :)

    • @tigerd7528
      @tigerd7528 5 years ago

      @@adoredtv Do we have an ETA now? It's killing me.

    • @adoredtv
      @adoredtv  5 years ago +1

      @@tigerd7528 Planned for tomorrow.

    • @benhur2806
      @benhur2806 5 years ago

      @@adoredtv That will go perfectly with my Pizza then... :D

  • @Julle399
    @Julle399 5 years ago +6

    18:40 I want to believe this chart

  • @eliotrulez
    @eliotrulez 5 years ago

    Thx for your comprehensive information. Have a nice break!

  • @rndompersn3426
    @rndompersn3426 5 years ago +2

    The amount of effort you put in is incredible.

  • @Dragonkindred
    @Dragonkindred 5 years ago +3

    Another fantastic video Jim. Talk about fuelling the hype train! I can't wait for the next one. ☺

  • @pvalpha
    @pvalpha 5 years ago +1

    Thank you for these videos. Have a very, very good long weekend! And I'm looking forward to the next part! Good news or bad news for tech, I always appreciate the thoughts and ideas you bring to the table for each one. Personally I like to daydream about what might be, but because we have to live in the world that is, I always will appreciate someone who can bring the reality of the situation into the light. :) Even if it is just because you have good sources. :)

  • @wandererdragon
    @wandererdragon 5 years ago

    Thanks for the video! Happy Easter!

  • @thehumangerm
    @thehumangerm 5 years ago +3

    Going with a chiplet design also makes AMD's delays and research funding reductions on the GPU line make a lot of sense alongside the company restructure, beyond just saving money in a cash-strapped company. It seemed a bit reckless to gut the entire GPU division like that. If they planned on going chiplet, it would have been a waste of funding to develop a monolithic chip design; instead, much of the R&D risk got resolved while developing Ryzen, leaving mainly a chiplet to design on an already tested 7nm architecture. I would not be shocked if they sourced some of those engineers to work on Infinity Fabric and die reduction in the interim. The Radeon VII is just where monolithic chip design left off, and they released it with little marketing spend to recoup that R&D cost rather than to really make a profit, acting as a holdover until Navi for developers. If that all holds true, it really was a brilliant strategy and explains why Ryzen's chief designer was poached to play catch-up.

  • @rediornot811
    @rediornot811 5 years ago

    thank you for taking the time to do this

  • @LucaFiltroMan
    @LucaFiltroMan 5 years ago +7

    Thanks for giving us blue balls, Jim .
    lol

  • @laszlo3547
    @laszlo3547 5 years ago +3

    Some video ideas:
    •Is the x86 architecture becoming obsolete? Is there a compelling enough reason to replace it in the near future?
    •What's happening with Intel's 10nm?

    • @laszlo3547
      @laszlo3547 5 years ago

      One more: Possibility of Nvidia switching to Samsung from TSMC.

  • @churchseraphim1380
    @churchseraphim1380 5 years ago

    Funny, I just checked your channel not even 10 minutes ago to see if you had uploaded, in case YouTube forgot to notify me. Can't wait to watch.

  • @yarox3632
    @yarox3632 5 years ago

    Awesome, love the new outro!

  • @prettysheddy
    @prettysheddy 5 years ago

    Excellent! Just made my day. An Adored video to start my long weekend. 😁

  • @coolbeans6148
    @coolbeans6148 5 years ago +1

    Mind blown!
    You even cleared up my confusion about the Phoronix guy's comment that I read a few days ago.
    I'll be getting Navi 20.

  • @antideric2315
    @antideric2315 5 years ago +6

    Nice outro! I'm really looking forward to the launch of Navi and 3rd Gen Ryzen :D

  • @drkRoss89
    @drkRoss89 5 years ago

    @AdoredTV Even if the research isn't 100%, it's still a deeply interesting rabbit hole to tumble down nonetheless, and I'm more interested in the thought process than the actual result.
    Or in other words, the journey is more satisfying than the final destination.
    As for the detractors, the best thing you can do is let the research and the work you put in speak for themselves. I still can't believe I'm still following after a good few years.

  • @harrybelele
    @harrybelele 5 years ago +5

    Your channel logo's visuals/audio have changed for the better. Nice idea, now that you are about to hit the 100K milestone.

    • @kojack57
      @kojack57 5 years ago

      It beggars belief that the channel isn't at the 1,000,000+ milestone. Alas, such is the world of YT and the world at large I suppose.

    • @snetmotnosrorb3946
      @snetmotnosrorb3946 5 years ago

      Frankly I think the logo has gone in the opposite direction...

  • @snozzmcberry2366
    @snozzmcberry2366 5 years ago +1

    If you could've heard the noises that came out of my throat when I got the notification for this.. Cheers Jim!

  • @JuxZeil
    @JuxZeil 5 years ago

    Great bit of info, bud!! It's looking REALLY interesting for the chiplet design.
    "That'll be for another video..." You big tease... Have a great holiday bud. ^__-

  • @Moodieblue
    @Moodieblue 5 years ago +1

    Always get all giddy when I see you've posted another video, man.

  • @sacamentobob
    @sacamentobob 5 years ago

    Perfect timing. Done with dinner. Now on to Adored!!

  • @hugobalbino2041
    @hugobalbino2041 5 years ago

    Hello Jim, thank you for your time and hard work. As always, great job. I do believe in you, Jim. People don't realize how hard it is to confirm these leaks and get information.

  • @mrhappy5789
    @mrhappy5789 5 years ago +9

    ff means fast finish, so yes, conceding that you've lost

    • @whatszat5518
      @whatszat5518 5 years ago

      ForFeit

    • @Kenmanhl
      @Kenmanhl 5 years ago +1

      When "ff" showed up on the screen, I thought it meant 255 in hex.

  • @baronvonlimbourgh1716
    @baronvonlimbourgh1716 5 years ago +27

    Don't defend yourself so much. You've proved the trolls and haters wrong already over the past years.
    They will hate no matter what. Normal people understand context and look at the bigger picture instead of focusing on unimportant details.

    • @Chuckiele
      @Chuckiele 5 years ago +3

      Yeah, he predicted the chiplet design such a long time ago; as soon as Lisa showed Epyc at Next Horizon, all doubts should have been gone for good.

    • @baronvonlimbourgh1716
      @baronvonlimbourgh1716 5 years ago +1

      @@Chuckiele true

    • @drkRoss89
      @drkRoss89 5 years ago +3

      @Adored TV Even if the research isn't 100%, it's still a deeply interesting rabbit hole nonetheless, and I'm more interested in the thought process than the actual result.
      Or in other words, the journey is more satisfying than the final destination.
      As for the detractors, the best thing Jim can do is let the research and the work he puts in speak for themselves. I still can't believe I'm still following after a good few years.

    • @Chuckiele
      @Chuckiele 5 years ago +1

      @@drkRoss89 Exactly. His speculations and all the thought he puts behind them are the exciting part; if they turn out to be true, it's just a nice bonus. :D

  • @SilkenLuna
    @SilkenLuna 5 years ago +6

    I believe silicon in this case can be called a stone too... :)

  • @Dj0rel
    @Dj0rel 5 years ago +15

    What puzzles me is how AMD is gonna feed that 20 CU iGPU the necessary bandwidth.

    • @nathangamble125
      @nathangamble125 5 years ago +1

      Through the I/O die.

    • @fishclaspers361
      @fishclaspers361 5 years ago

      Stacked dedicated DDR4 DRAM? 64-bit @ 2666 MHz minimum is probably much better than having it split between system RAM and graphics bandwidth, plus Pascal-level memory compression. (Rough numbers after this thread.)

    • @Dj0rel
      @Dj0rel 5 years ago

      @@fishclaspers361 It's gonna take a lot more than that. The video card with that kind of memory bandwidth (the DDR4 GT 1030) is actually slower than even current Ryzen APUs.

    • @fishclaspers361
      @fishclaspers361 5 years ago

      @@Dj0rel Well, we can always increase the bit width and the memory clock. Or use HBM2 if yields allow it.
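
    Rough bandwidth arithmetic for this thread (a sketch; these configurations are the commenters' hypotheticals, not a known AMD design):

    ```python
    # Peak bandwidth = (bus width in bytes) x (transfer rate).
    def bandwidth_gbs(bus_bits: int, transfer_rate_mts: float) -> float:
        return bus_bits / 8 * transfer_rate_mts / 1000  # GB/s

    print(bandwidth_gbs(64, 2666))    # ~21 GB/s: the proposed 64-bit DDR4-2666
    print(bandwidth_gbs(128, 3200))   # ~51 GB/s: wider and faster DDR4
    print(bandwidth_gbs(2048, 2000))  # ~512 GB/s: two stacks of HBM2
    ```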

  • @mateuszkwietowicz2470
    @mateuszkwietowicz2470 5 years ago

    As always, it's great to watch your videos. Sorry you have to deal with the skeptics so much.

  • @bobbybobman3073
    @bobbybobman3073 5 years ago

    I get the notification, I see AdoredTV, I follow the link immediately. Simple.

  • @cahdoge
    @cahdoge 5 years ago +7

    The prices seem pretty reasonable. I hope they stay that way.

    • @freepok
      @freepok 5 years ago

      I would say TOO reasonable, seeing the Radeon VII at like €700. Something way better, close to the 2080 Ti, for 430 bucks? Naaaah, I don't believe it.

    • @cahdoge
      @cahdoge 5 years ago

      @@freepok Nope, about 60-70% more performance for the same price bracket. And if they use the chiplet design extensively, they can offer GPUs at an extremely competitive price and still have higher margins than before.

    • @NANOTECHYT
      @NANOTECHYT 5 years ago

      In before NVIDIA lowers the price of Turing cards.

    • @MarcABrown-tt1fp
      @MarcABrown-tt1fp 5 years ago

      @@NANOTECHYT Oh, that would make Nvidia Turing dGPU owners cry lol.

  • @tacticaltaco7481
    @tacticaltaco7481 5 years ago +2

    I'm used to "ff" being used to tell the other person to forfeit, not for saying they forfeit..

  • @ajc-th5ei
    @ajc-th5ei 5 years ago +1

    Great work.
    I wanted to draw your attention to a couple of other quotes from the "monolithic" article:
    "The challenge is that unless we make it invisible to the ISVs [independent software vendors] you’re going to see the same sort of reluctance."
    "But the GPU has unique constraints with this type of NUMA [non-uniform memory access] architecture, and how you combine features."
    "So, is it possible to make an MCM design invisible to a game developer so they can address it as a single GPU without expensive recoding?
    'Anything’s possible…' says Wang."
    “Yeah, I can definitely see that,” says Wang, “because of one reason we just talked about, one workload is a lot more scalable, and has different sensitivity on multi-GPU or multi-die communication. Versus the other workload or applications that are much less scalable on that standpoint. So yes, I can definitely see the possibility that architectures will start diverging.” (referring to the use of multi-die and multi-GPU for specific workloads rather than gaming).
    www.pcgamesn.com/amd-navi-monolithic-gpu-design
    The article was published nearly a year ago, about 6 months before the use of an I/O die was publicly confirmed. With that, they confirmed the system will see the memory as UMA instead of NUMA. In the follow-up with Mark Papermaster by Ian Cutress, there are questions about whether everything is routed through the I/O die:
    "IC: With all the memory controllers on the IO die we now have a unified memory design such that the latency from all cores to memory is more consistent?
    MP: That’s a nice design - I commented on improved latency and bandwidth. Our chiplet architecture is a key enablement of those improvements.
    IC: When you say improved latency, do you mean average latency or peak/best-case latency?
    MP: We haven’t provided the specifications yet, but the architecture is aimed at providing a generational improvement in overall latency to memory. The architecture with the central IO chip provides a more uniform latency and it is more predictable."
    "IC: The IO die as showed in the presentation looked very symmetrical, almost modular in itself. Does that mean it can be cut into smaller versions?
    MP: No details at this time.
    IC: Do the chiplets communicate with each other directly, or is all communication through the IO die?
    MP: What we have is an IF link from each CPU chiplet to the IO die.
    IC: When one core wants to access the cache of another core, it could have two latencies: when both cores are on the same chiplet, and when the cores are on different chiplets. How is that managed with a potentially bifurcated latency?
    MP: I think you’re trying to reconstruct the detailed diagrams that we’ll show you at the product announcement!
    IC: Under the situation where we now have a uniform main memory architecture, for on-chip compared to chip-to-chip there is still a near and a far latency…
    MP: I know exactly where you’re going and as always with AnandTech it’s the right question! I can honestly say that we’ll share this info with the full product announcement."
    www.anandtech.com/show/13578/naples-rome-milan-zen-4-an-interview-with-amd-cto-mark-papermaster
    So with an I/O die on the graphics card, there is a potential that it will provide an UMA situation, masking the split die setup. That would address the making it invisible and anything is possible statement.
    Instead, the sensitivity of communications would seem to be the primary issue, whether it be cache, keeping the latency stable to prevent stale data, or having one die run away on one type of calculation while the other die falls behind. Something like the HBCC or an I/O controller for the cache may help, but it would then be going off-chiplet to pull the data in. Now, how to set up the IF for time-sensitive frame calculations would take some engineering, along with getting the cache right, but it seems they could have a solution for the NUMA issue. Standardizing IF2 lengths for either memory or cache calls may also help, but I'm better at CPU than GPU analysis, so I want to put that out there.
    Meanwhile, looking forward to part 2!

  • @cavegoblin101
    @cavegoblin101 5 years ago

    I am doing great Jim!
    Love your videos!

  • @Gruntsworth
    @Gruntsworth 5 years ago +2

    I was just thinking last night about when we were gonna get a new AdoredTV vid.

  • @SuperCapuka
    @SuperCapuka 5 years ago

    I get more excited when Jim posts a new video than for any episode of Game of Thrones.

  • @playcloudpluspc
    @playcloudpluspc 5 years ago +2

    The special edition Navi 20 is exactly what I am looking for.

  • @Zipzeolocke
    @Zipzeolocke 5 years ago +2

    This news about Navi was so expected… could have easily predicted these results years ago, because it feels like history repeats itself with every new generation. For example, when Nvidia releases the RTX 2080, it won't be long before AMD provides the same or slightly better performance at a much more affordable price. And then people complain, because they already bought the RTX 2080, because it came first. It feels like AMD is always a year or so behind, but at a much more affordable price. History repeats itself.

  • @MarceloTezza
    @MarceloTezza 5 years ago

    This week has been very exciting

  • @ExodeusIS
    @ExodeusIS 5 years ago +6

    Gimme that sweet sweet Navi 20! Can't wait to ditch my GTX1070 for team red.

  • @WinterCharmVT
    @WinterCharmVT 5 years ago +1

    I have been waiting for this :D

  • @01RIE01
    @01RIE01 5 years ago

    New upload by Adored? I'm pouring a drink!

  • @Nightwing787
    @Nightwing787 5 years ago

    Yessssss! Been waiting for a video!

  • @dmytrotkachov6859
    @dmytrotkachov6859 5 years ago +3

    This is a proper Nvidia-trolling naming scheme right there. RIP RTX 3080/3070/3060

    • @eltyo340
      @eltyo340 5 years ago

      lol can you imagine
      RTX 4000 series
      RX 5000 series
      RTX 6000 series
      ....and beyond xD

  • @xabiergranja
    @xabiergranja 5 years ago +4

    C'mon AdoredTV, it's been nearly 2 weeks... we need part 2! :) JK take all the time you need... but not too long!

    • @adoredtv
      @adoredtv  5 years ago +5

      Tomorrow ;)

    • @Altirix_
      @Altirix_ 5 years ago

      @@adoredtv Excited for it! It's technically tomorrow now :p where's it at? jk

    • @tigerd7528
      @tigerd7528 5 years ago

      @@adoredtv 22 hours now. About time?

    • @adoredtv
      @adoredtv  5 years ago +6

      @@tigerd7528 Running late, it's a long video so won't be done by tonight. Tomorrow now.

    • @wwubwtndyoutube6027
      @wwubwtndyoutube6027 5 years ago +3

      AdoredTV thanks for telling us

  • @theperk2007
    @theperk2007 5 years ago

    This is gonna be a good day with Jim's video.

  • @barrelfish8106
    @barrelfish8106 5 years ago

    Keep up the good work. We appreciate it!

  • @AlfieMakes
    @AlfieMakes 5 years ago

    Yup, this is the video I was waiting for.

  • @DJ_Dopamine
    @DJ_Dopamine 5 years ago +2

    Best tech YouTuber on YouTube.

  • @DeyvsonMoutinhoCaliman
    @DeyvsonMoutinhoCaliman 5 years ago +1

    They helped me a lot by naming their products similarly to the mainstream trademarks.

  • @louiscloete3307
    @louiscloete3307 5 years ago

    Great video as always! Thanks!
    I have a question: I heard/read somewhere that the 3000 series APUs will be 12 nm, which made a lot of sense to me at the time, since the 2000 series APUs were 14 nm. Did you hear anything about that? Do you have any thoughts?

  • @BRUXXUS
    @BRUXXUS 5 years ago

    Love your videos! Great logic, mixed with whatever information is known or speculated, to come to conclusions.
    Of course, all this stuff needs to be viewed as speculation, and having flexibility in what's concluded is common sense... for intelligent viewers... ;)

  • @alikia9934
    @alikia9934 5 years ago

    Considering that a modern and capable GPU like the RTX 2060 or Vega 56 needs more than 300 GB/s of memory bandwidth, is it wise to separate the GPU core and memory controller onto two separate dies?

  • @AliveInRap2k12
    @AliveInRap2k12 4 years ago

    Could you ask them if they want to improve reflection handling and peak lighting... average and dynamic?

  • @F1997S
    @F1997S 5 years ago +1

    Hey Jim! Great video as always, keep on going buddy 😆. Did the recent supposed "Navi" leak that Buildzoid was talking about confirm the bad stuff that you have heard, or did it conflict with your findings? Do you think it was leaked on purpose to kinda counteract the warning you gave us last week that Navi ain't what we think it is gonna be? Sorry for bothering, I'm just curious. Have a nice day.

  • @samborton6613
    @samborton6613 5 years ago

    Loving that outro Jim!! Crack job