Analysing Navi - Speculation and Leaks - Part 1
- Published 17 Apr 2019
- What is Navi likely to be?
♥ Subscribe To AdoredTV - bit.ly/1J7020P
► Support AdoredTV through Patreon / adoredtv ◄
Bitcoin Address - 1HuL9vN6Sgk4LqAS1AS6GexJoKNgoXFLEX
Ethereum Address - 0xB3535135b69EeE166fEc5021De725502911D9fd2
♥ Buy PC Parts from Amazon below.
♥ NEW USA Store! - www.amazon.com/shop/adoredtv
♥ Canada - amzn.to/2ppgYsX
♥ UK - amzn.to/2fUdvU7
♥ Germany - amzn.to/2p1lX6r
♥ France - amzn.to/2oUAK2Z
♥ Italy - amzn.to/2p37Uui
♥ Spain - amzn.to/2p3oIBm
♥ Australia - amzn.to/2uRTYb7
♥ India - amzn.to/2RgoWmj
♥ Want to help with Video Titles and Subtitles?
czcams.com/users/timedtext_cs_p...
-- Video Links Below --
www.pcbuildersclub.com made the Navi cover image. - Science & Technology
"I am gonna be brief" almost 30 min video 😂😂
"Part 1" haha
Yeah, but that time, even at 30+ minutes seems to go by so fast.
28:34 minutes is brief compared to his other analytical videos
Yeah he spends half the time saying that he was never wrong to begin with, that's kind of half his videos always 😂😂
Fortunately there still are some people on CZcams who consider that brief. :)
The moment you keep looking at the timestamp because you don't want the video to end.
No lie, I checked 5 times.
i got bored after around 8 min
@@ottley32 same here kkkkk
fr
Really, I checked several times as the topic is interesting 😏
Don't listen to them haters boss! We're here for only three things:
Your leaks
Your accent
And your long analysis videos
You're doing bits man. Keep em coming xD
This guy has the biggest leaks since the Titanic 😂
Jim's accent keeps my daughter enthralled for a whole 30 minutes, which is a goddamn miracle given that keeping her attention for any longer than 30 seconds is usually impossible!
It's funny you mention accent, I find myself doing Jim impressions throughout the day.
What accent?
Oh, come ON, Jim! Bad news BEFORE good news! Everybody knows that!
I was really excited right to that point where he said that :/
I'm calling it now: either Navi is going to be delayed, Navi will only reach 1070 Ti level with the special edition only reaching 1080 level, Navi will simply be a more refined Vega on 7nm, or some combination of the above.
@@aflyingmodem Vega 56 is now on par with the GTX 1070 and the Vega 64 is on par with the GTX 1080, so if they have no performance gains then the power requirements would be way lower than in Jim's charts here! So I'm guessing you would be wrong!
@@Justchuck69 O rly? czcams.com/video/aqxL27pmYpE/video.html, Vega 56 is almost always faster than a 1070 and barely behind a 1080. Anyway, I expect the top end (of the midrange lineup) Navi to be around the 1080/Vega 56 + 10% level.
@@ypsilondaone I'm not sure I want to watch part 2 with the bad news now:( What are you doing to us Jim!
It's always a good day when Jim uploads
I'll upload your soul.
His name is Jim?
Yep, Jim Parker. We've known for a long time
@@imergence9628 Spider-Jim?
Ya damn right :)
I gotta give credit where it's due, Jim. I really love the way that the different segments of your videos are always so coherent and how they all tell one story by referencing previous segments. That is what makes it so pleasant to sit through 30 minutes of a video every single time.
That is probably also why we only get one video a week. Writing a good script is a lot of work.
I bet it takes him days and many, many rewrites and shuffling stuff around.
But it does pay off imho. No matter how difficult or technical the subject, it is always very clear and easy to understand.
A lot of channels could learn a lot from this.
The research is a day or two, the script a day or two and recording and editing at least a day and often 2 days. Overall I'm really under pressure to continue delivering one of these every week, now that I take every weekend off (in theory) ;)
@@adoredtv in theory.. i know how you feel lol.
The end sounds gloomy. Is it Vega all over again? Too little, too late? Do they need yet another respin? What a cliffhanger over the easter holidays...
Holy crap, this is like the ending of Infinity War.
Probably too little too late. Nvidia will probably just release the 2070ti and start getting the RTX 3XXX series ready on 7nm which will unfortunately blow AMD out of the water.
@@ConorDoesItAll a 2080ti on 7nm would destroy everything, ouch, let's hope they can do something, I really want to buy amd for my next gpu
And there is more competition coming on the horizon, Intel. So it is not that AMD has much time to bring their GPU roadmap execution in order if they want to stay relevant in that market.
@@ConorDoesItAll Also remember that power savings and freq boost are slowing down so I don't think it will be as big a jump as you think. You also know that they will raise prices by 50 percent while fps goes up by say 20 percent. Nvidia is digging a hole, slowly, over time. Wallets can only stretch so far.
I've been involved with computers since 1980, though mainly music and games.....
So I'm quite comfortable with the basics, but your channel, focusing on possible upcoming tech with such deep analysis, is so researched and concise that even as a layman, with the resources shown and explained, everything makes absolute sense. I learn so much.
Seriously, this is the best channel in its field, actually across any medium.
I can't comprehend the time it must take you, but know it's very appreciated.
Love your work.
Disclaimer:
My in-laws are Scottish but it has no bearing on my opinion.
Though Scots are awesome!
The Navi saga has been such a roller coaster. I can't believe how much I've learned from following the ridiculous development cycle. This has all been such a treat!
I don't care what AMD's GPU naming scheme is, so long as it stays consistent. I bet you a bunch of lost sales are because people don't understand the product tiers after HD 7XXX became R# 2XX/3XX/Fury became RX 4XX/5XX became Vega XX became Radeon VII. FFS STICK WITH ONE NAMING SYSTEM AMD!!!
To be fair, Nvidia and AMD/ATI (among a few other easily forgotten manufacturers), and even Intel, have all had naming conventions that were a wild ride of absurd or arbitrary values. We had the TNT series before moving to GeForce, then the numerous models within each release series: initially 1 up to 3, before the 4 started throwing extra digits into the mix, then the 5 series as "FX" which basically dropped the single-digit series number in favour of just "5xxx". That was followed by something more followable, the GeForce 6xxx all the way up to 9xxx, before Nvidia spun its neck around and rapidly launched a short-lived 1xx series which was mostly a rebrand of the 9xxx, itself somewhat a rebrand of the 8xxx. From there we saw the 200 to 700 series, then a leapfrog over the 800 (much akin to why AMD leapfrogged the HD 8000 series) to the 900, before venturing into the 1000s. ATI/AMD have done much the same thing, it's just seen a lot of changes again in the last few years. Intel is certainly guilty of the confusion too.
The people buying Vega and Radeon VII are probably just a handful. I agree, the switch to R7/R9 was dumb.
@@SpectrumTwist I disagree. Nvidia and Intel have definitely changed product names, but not nearly to the degree of AMD GPUs. Going from GTX 7XX to GTX 9XX isn't that confusing, and it's clear which one is newer. Intel Core iX has also been the same for almost a decade. All the changes I listed happened in the last ~7 years, and I even forgot Fury.
@@DrearierSpider1 There wasn't really anywhere to go after the Radeon HD 7XXX. They would eventually run out of 4-digit numbers and 5 digits are dumb, so they went back down to 3. Intel is facing the same problem soon. There isn't much to come after a 9900K; a 10900K would just sound insanely stupid. Same problem Nvidia had after the GTX 1000 series. Going with 2100, 2200, etc. would seem like lacking progress, so they went with 2000 and probably 3000. Just the 1660 doesn't fit in there at all, and next generation will probably be even weirder. You can't keep a single naming scheme for 20 years; it will eventually run out of numbers.
It's RX 400/500/Vega, so exactly the same as the last one.
Radeon VII is the only odd one out, and probably not originally planned.
man you killing us
Smalls?
The next architecture IS NOT going to be called "Arcturus"? Are you SIRIUS???
“This video was the last of the good news on NAVI” 🎤⬇️
Love the new Outro btw
ominous, I don't like how it sounds
I'm scared 😟
It's like "Game Of Navi" ending with an oooooh cliffhanger!
I've said it before and I'll say it again: ATV's "AMD's Master Plan" video has it all. Even if you aren't in the slightest bit interested in this area, it is still a must-watch. I should know, half the views are mine. Not really, but quite a few are. Amazing piece of work.
The other half is probably mine. If Jim could only see how many times I've watched his videos he'd be shocked XD
@@-GameHacKeR- Nope, mine.
Jim, your uploads are fantastic. I love the information and the speculation that you put out there. It's better than drama sometimes.
18:56 •
The Maximum CU Values are: Navi 10 / 60CU, Navi 12 / 40CU, Navi 16 / 20CU.
This, as a note, is because the GCN 2.x Architecture uses a 5 Core / Cluster instead of the 4 Core / Cluster of the GCN 1.x Architecture.
There are 4 Compute Pipelines on 4-6 Cluster Sets.
This means that the Maximum the Architecture itself can support is 80CU (4x4x5) for GCN 2.x and 64CU (4x4x4) for GCN 1.x per Monolithic SoC.
As a keynote, this is why Radeon VII / Instinct are both 50/60CU *NOT* 56/64CU.
In fact, for all intents and purposes Radeon VII is Navi 10, the difference being it's a Monolithic Design (the I/O Control being 7nm, which isn't optimal, flanked by the 4 Compute Cluster Cores) … whereas for Navi these are not part of the SoC "Chiplet" and will likely be 14nm, as it's Cheaper to Produce and Compatible with Zen(2).
This doesn't mean it will be "Smaller" however, as AMD RTG have let slip that they've been able to reduce the Latency on Infinity Fabric v2.0 to where it could be used for High-Performance Applications (hence why the Zen 2 Chiplets don't Cross-Talk) … so it will likely be an L3-Style (HBM) Cache.
As a result, this would provide 2 Benefits: one being that they could produce the Compute Clusters as Chiplets instead of a Monolithic SoC (i.e. 1-3 Compute Chiplets), able to Scale by adding or removing these "As Required" … and the second being that with the additional space and the larger (cheaper) 14nm Process, they can then include something similar to the Terascale Ultra-Threading Processor within the Control I/O; something they've essentially not had the room or complexity budget for with the Monolithic Designs.
After all, even an "All-In-One" Control I/O is going to be a fraction of the size for a GPU compared to a CPU.
(Based on the size of the HBM2/GDDR5 one in Vega / Polaris, we're talking about half the size of the current Ryzen 3rd Gen Control I/O... that's a lot of Spare Space. And remember this is the case because there's a Lower Number of Chiplets, so fewer overall Memory Interfaces are required; plus the Ryzen Control I/O needs the whole Northbridge in there as well.)
Having a Thread Management Engine within the Control I/O could mean BETTER utilisation of the Available Hardware Threads without strictly any Architecture Changes and without High-Skilled Low-Level Graphics Programming, especially if it includes Machine Intelligence like Zen's that Optimises the Branch Prediction Over Time, as well as better L3/L2 Utilisation.
This matters given that Memory Calls are the biggest CPU/GPU pipeline bottleneck due to latency.
Remember, part of the point of 5 Cores Per Cluster is that typically unused Threads can be used, without affecting the Standard 4 Graphics Pipelines, for say INT8 or FP16, resulting in potentially better performance when Optimised. Having a Thread Management Engine that automatically in-line optimises, say converting Colour Calls to FP16 / INT8 when typically they're not, would free up not just an FP32 Pipeline but allow 2-4 to be Rapid Packed, further increasing the Throughput.
A bit like having SMT / Hyper-Threading for the GPU, which is essentially what NVIDIA's GigaThread Engine already does.
Navi is of course the 3rd Generation GCN 2.0 Architecture; this means 'Shiva' does end up being a "New Architecture", although the claim that it's the "End of GCN" is hyperbolic.
I think what is likely meant by this is that unlike GCN 2.0, it won't be compatible with GCN 1.0 … and will instead entirely focus on expanding the GCN 2.0 Concept.
This will almost certainly be expanded with the addition of Fixed-Function Pipelines for things such-as Ray-Tracing or Machine-Learning.
In essence a Modernised Terascale, stripped of all its Graphics Pipeline Elements into just Pure Mathematical Co-Processing alongside the Compute Cores (and I almost guarantee it will follow the VLIW/5 Approach... i.e. 4 General Purpose + 1 Fixed Purpose Pipeline).
I mean this makes the most sense, as GCN Compute Units are just infinitely scalable; what is needed going forward are elements that either have to be designed as a Secondary Core (a la Turing), which has issues with Data Sharing, or done like Zen with its 2 Additional "Half Pipelines" for SMT. And which will make the most sense for AMD to follow here?
What they might do is push this out to the Workstation / Developer Market first within this Generation of Navi, knowing that when they release the Consumer version in 2020, Developers will already have a handle on it and be ready to Deliver Software capable of taking advantage of it. (Again, learning from NVIDIA's Mistake.)
Essentially NVIDIA "Jumping the Gun", as it were, will ultimately work out to AMD releasing a (Full) Product Range that supports it just as it starts becoming utilised.
< • >
Honestly I expect the Architecture and Hardware will be basically directly in line with where they should be for Generational Improvements and Innovation... I also expect it to be seen as "Underwhelming"... but frankly this has much more to do with the fact that AMD doesn't have an issue with their Hardware, but with their Marketing and Branding.
*THAT* is what they really need to work on in order to change Public Perception of them and not simply be seen as the "Cheap Alternative".
I've seen quite a few claim "Oh, well they need to release something with either outstanding performance or outstanding price", but has that ever worked in the past?
No, of course not. Even if it had the potential to, which I doubt, just look at NVIDIA's behaviour over the past 4 years, ever since Polaris spooked them.
They fell back on their tried and true "Don't Control the Hardware, Control the Ecosystem" approach … one that forces people into NVIDIA's Ecosystem.
It wasn't just Polaris that spooked them, but the Low-Level APIs (Vulkan and Direct3D 12), which they simply couldn't control. So what did they do? They heavily incentivised Developers to remain on DirectX 11, pandering to the fact that most Developers / Publishers don't exactly want to change their Development Toolchains / Ecosystems.
AMD with their GPUOpen initiative _could've_ corrected this, but they've arguably all but abandoned it, just as they've done with so many good ideas over the years.
It's like they keep being dealt Royal Flushes, but still fold to NVIDIA's Bluff. And this is behaviour they HAVE to stop, because they're not really competing with NVIDIA but with themselves.
Now this should be made into a video.
Enjoy your break in the Swedish wilderness!
When I see an AdoredTV Update... I get excited like a kid when his dad comes home from work.
I said months ago that they should market Navi and Ryzen together as a "perfect package." Let Navi hop on the success train that Ryzen is cruising on. "Rydeon"
Jim, well done - looking forward to part 2. Wishing you a restful and enjoyable break!
Another great video Jim! It sounds like I/O chiplets are indeed very likely for Navi. I really think there is a good chance that the RX 3080 is a cut down Navi 10 die with a 256-bit I/O, the PS5 is a slightly less cut Navi 10 die paired with a 320-bit I/O, and that "special edition" card is the RX 3090 with the full Navi 10 die and a 384-bit I/O.
I gotta say that I won't be entirely surprised if Navi stops at 64 CUs too, but I will find it a bit odd and disappointing. I definitely thought it would be more likely than not that Radeon would want to go at least a bit above 64, to say 80 or so CUs. That way they could cut down the card to something a tad above 4096 cores for gaming, and sell the full dies to other customers until yields improve.
Hope all is well, still looking forward to the upload and appreciate all the work you must put in to these vids
hmmm...waiting for part 2!
Always utterly fascinating, even stuff I'm not interested in is interesting when this channel covers it.
If those Navi specs are true, then they seem reasonable to me. Albeit, the ROI at those MSRP's isn't looking too hot, although that might not matter if their die is "scalable" (whatever that ends up meaning).
I say this because of what we know about SS/GF 14nmLPP to TSMC 7nmHPC:
Area Reduction = 0.68x (32%). The maths for anyone wondering is, (((((9.5 / 16.6 * 100) + 100) / 2) + (9.5 / 16.6 * 100)) / 2) / 100. For Maxwell to Pascal for example, change 9.5 and 16.6, to 18.3 and 27.5. These numbers being the actual sizes of the nodes, and not the marketed sizes.
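For anyone wanting to sanity-check the blended scaling formula above, here's a quick Python sketch; note the 9.5/16.6 and 18.3/27.5 "actual node size" figures are the commenter's own assumptions, not official foundry numbers:

```python
def area_scaling(new_nm, old_nm):
    """Blended area-scaling estimate between two nodes, reproducing the
    formula quoted above: average the raw linear ratio with its
    midpoint toward 100%."""
    ratio = new_nm / old_nm * 100             # raw linear scaling, in percent
    blended = (((ratio + 100) / 2) + ratio) / 2
    return blended / 100

# SS/GF 14nm LPP -> TSMC 7nm HPC (assumed effective sizes 16.6nm -> 9.5nm)
print(round(area_scaling(9.5, 16.6), 2))      # -> 0.68
# 28nm -> 16nm (Maxwell -> Pascal), assumed 27.5nm -> 18.3nm
print(round(area_scaling(18.3, 27.5), 2))     # -> 0.75
```

So the 0.68x figure above checks out, and the same formula gives 0.75x for the Maxwell-to-Pascal shrink.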
Power Reduction = 0.5x (50%)
Performance Increase = 1.25x (25%).
As far as TDP:
V64 with a tightened v/f has a GPU only power draw of ~170W at ~0.95V average, call it 180W.
A 64 CU P10, with a tightened v/f would have a GPU only power draw of ~180W too.
I have several formulas for figuring out TDP, but the one that seems to be the most accurate across different uarchs ends up with a TDP of ~160W. The maths of it is:
(180W / 64 14nm CUs * # of 7nm CUs required to reach equal performance to 64 14nm CUs) / 1.25x performance increase + 50W (RAM and misc components) = TDP.
The parity CUs at 7nm compared to 14nm CU's are found by taking the effective CU performance at 14nm and dividing it by the performance increase from node to node (1.25x).
For V64, the effective CU performance is more like 60 CUs, due to a ROP (and maybe something else) limitation. So instead of a V64 having a clock for clock performance advantage of ~14% over a V56, it's actually more like half that at ~7%. So if the hardware specs of V56 were to be scaled up linearly to match the per CU performance of the V64, it'd end up at 60 (56 * 1.07) CUs.
Therefore it'd take 48 (60 / 1.25) CUs at 7nm to have the same performance as a V64.
Add a 10 - 15% IPC increase, which is similar to what we saw from Fiji to Vega, and the rumoured performance is achieved.
The complete calculation then ends up being:
(180W / 64CUs * 48CUs) / 1.25x + 50W = 160W TDP.
There is another, maybe more comfortable formula, that goes:
180W * 0.5x * 1.25x + 50W = 160W TDP also.
Averaging out the results with the four other formulas, ends up at 160W TDP.
Either I'm overshooting, or the rumoured TDP is too low. But either way, it's close enough.
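Both TDP estimates above can be reproduced in a few lines of Python; every input here (180W GPU-only draw, 60 effective CUs, the 1.25x node gain, 50W for RAM and misc) is the commenter's assumption, not measured data:

```python
GPU_POWER_W = 180     # assumed V64 GPU-only draw with a tightened v/f curve
TOTAL_CUS = 64
EFFECTIVE_CUS = 60    # V64 assumed to behave like ~60 CUs (ROP-limited)
NODE_GAIN = 1.25      # assumed 14nm -> 7nm performance increase
MISC_W = 50           # RAM and misc board components

# 7nm CUs needed to match a V64: 60 / 1.25 = 48
parity_cus = EFFECTIVE_CUS / NODE_GAIN

# Formula 1: scale per-CU power to the parity CU count, apply the node gain
tdp1 = (GPU_POWER_W / TOTAL_CUS * parity_cus) / NODE_GAIN + MISC_W
# Formula 2: halve the power (node shrink), pay back the 1.25x performance
tdp2 = GPU_POWER_W * 0.5 * 1.25 + MISC_W

print(tdp1, tdp2)   # -> 158.0 162.5, i.e. both land near the ~160W figure
```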
As far as die size, there are two possibilities because a P10 64 CU die would actually be ~100mm2 smaller than a 64 CU V10 die.
A 64 CU N10 die would then be:
V10 = 486mm2 * 0.68 = 331mm2 (same as V20, unsurprisingly)
P10 = 386mm2 * 0.68 = 263mm2
For a 48 CU N10/3080 die to be at equal performance with a V10 or P10 die, it'd be:
V10: 331mm2 / 1.25x = 265mm2
P10: 263mm2 / 1.25x = 210mm2
Based on V10 the 3080's effective die cost comes in at ~$90, and based on P10 it comes in at ~$60. Comparatively, the 580's die costs ~$25, and the V64's ~$60.
Adding up the component costs and I think the pricing would be closer to $300 for the 3080, but there are so many variables that it's close enough to $250 for it to be possible.
They could also go the GP100 to GP102 route and remove the ~20% of FUs (functional units) unnecessary for gaming, which would drop the die cost by ~$20 - 30, but who knows if that's possible.
On the APU side, without the I/O, they could do an ~80mm2 chiplet with potentially up to 20 CUs which would pull ~45W at similar clocks to the 3080.
Going the GP100 to GP102 route, would increase the CU amount by four, but also the TDP by ~10W.
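The die-size and parity numbers in this comment follow mechanically from the 0.68x area scaling and 1.25x performance figures; a short sketch (all inputs are the commenter's assumptions, including treating both dies as 64 CU designs):

```python
AREA_SCALE = 0.68   # assumed 14nm -> 7nm area scaling
PERF_GAIN = 1.25    # assumed 14nm -> 7nm performance increase

# Assumed 14nm die sizes for a 64 CU design, per the comment
dies_mm2 = {"V10": 486, "P10": 386}

for name, mm2 in dies_mm2.items():
    full_7nm = mm2 * AREA_SCALE     # full 64 CU die shrunk to 7nm
    parity = full_7nm / PERF_GAIN   # 48 CU die at V64-equivalent performance
    print(f"{name}: {full_7nm:.0f}mm2 full, {parity:.0f}mm2 at V64 parity")
```

This lands within a square millimetre of the ~331/265mm² (V10-based) and ~263/210mm² (P10-based) figures quoted above; the small differences are just rounding.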
EDIT: I missed the obvious.
*JESUS* *MOHAMMED*
I want 8 Geometry Engines, 4096 cores, 2GHz clock speed, and 128 ROPs.
Radeon VII and Vega are limited in Geometry and ROPs (less so to be fair). They have BAD CU utilization in games. Even Wolfenstein 2 doesn't use all of Vega's or Radeon VII's capability, it has a lot of idle cores and idle time.
All these wants are more than easy on 7nm.
Definitely 128 ROPs. I moved from Vega 64 to Radeon VII and it was a good move, but I still need more power for 4K 60FPS Ultra 😅
@@adi6293 The geometry engines will help more :P but yeah.
@@MaggotCZ
Big Navi obviously.
Polaris is in consoles (x1X) and Polaris-Vega is in PS4 Pro yet RX 590 and RX Vega 64 LC exist. So yeah...
This may seem completely off the wall, but there was a mild discussion recently about AMD ramping up the sensors in their CPUs and now GPUs (as seen on the R VII); supposedly, better utilization information for individual CUs may be down the pipe. Would this be in Navi? I doubt it, but it's not outside the realm of possibility given the change from Vega 56/64 to the Vega used in the R VII.
If AMD is able to provide a breakdown of individual CU utilization (since it would be rather impractical to get utilization of all the SPs themselves, especially as they increase in number), this could help alleviate the utilization issue by letting us see exactly what is going on to a far greater level of accuracy. It makes sense that they could do this; why haven't we had this as an option already, seeing as CPUs even with SMT have utilization information available for every thread individually? If anything they should be able to report utilization based on just the 4x ACEs.
This may be further validated by the fact that even Microsoft has only recently included a GPU utilization graph in Task Manager, and with the newer WDDM there's no real reason why we couldn't have a more detailed breakdown with multiple graphs shown on the GPU utilization page, akin to showing more cores/threads on the CPU performance tab.
I can't see Navi pushing ROPs beyond 64, not unless we were to see a bump in CU count beyond 64... but again there is nothing to suggest we'd see more than 64 CUs in anything other than the top-end Navi anyway.
64 ROP is more than enough even for HDR 4K
Another interesting and much appreciated video! Thank you for the hard work!
Thank god you posted this, I was hungry for more Navi info as we get closer, and I missed the wonderful sound of your voice...so soothing...I needed it! Thanks buddy!
Jim, I dunno about you, but I'm finally excited with REAL tinglies in my tummy about what Navi may hold for us... if the prices and performance are what we are expecting, we're looking at one hell of a battle come 2020 with Intel Xe on the table... Great work as always my friend.
A $430 GPU that is only 15% slower than a $1200 GPU? Not bad, I hope it's true.
me too :/
Think you got that wrong, the RTX 2080 is roughly $750; the Ti version is $1200.
@@pedrosoares7273 Didn't it say RTX 2080 + 15%? That means it is approx 15% slower than the 2080 Ti, no? A $1200 card?
10% slower.
He said Navi 20 is plus 20% over an RTX 2080; the RTX 2080 Ti is 30% faster than the 2080...
@@coolbeans6148 I thought the RTX 2080 Ti was 35% faster than the RTX 2080, but whatever, it's a small margin anyway.
*looks up from the mueller report*
Does it really fucking matter? Does public uproar even really do anything, one way or another? Things will continue of their own volition whether or not you read into other people's garbage drama. The sooner you rid yourself of it, the better you'll be for it.
Pauses Mueller report, reads adored TV.
Nope. There is lots of evidence of collusion, just not enough for an indictment.
Fucking plebs
@@WinterCharmVT Turn off CNN, your brain is fried.
I have been waiting for your update on Navi with baited breath. And you would think I would be disappointed that we only got half the story, but I am not. Now I have something else to look forward to since AMD's product release has been delayed. I will freely admit I have rooted for AMD since the early days and have continued to give them my patronage out of pure principle. Now we will just have to sit back and see if even the most die-hard Intel fanboys see the light, because they seem to blindly spend exorbitant amounts of money on Intel products to date. Thanks for all your hard work. I enjoy every edited minute of it, and I know that is a lot of work in itself, not to mention the research and due diligence required.
Baited breath is breath that smells like fish? Or did you mean bated?
Cheers Jim going to watch just before bedtime, have a good easter/weekend.
When is part 2
When is Part 2 coming? He said this week XD
Video is delayed just like Navi: 1st bad news. :D
I hope the performance of Navi in these rumors is true. It would shake up the PC market. Zen2+Navi would become a match made in heaven.
That has been done before; we got GTX 980 level from the GTX 1060. If we look at the GTX 1080 vs this $250 Navi card, it's the same situation, if it's true.
@@imo098765 Yeah you are right it is about time we had another good upgrade like RX 480/GTX 1060.
@@imo098765 That's because 28 to 16nm was an incredible node shrink. 14 to 7nm is kind of a mediocre node shrink, at least for AMD. E.g. the 7nm Radeon VII only matching the 16nm 1080 Ti, for crying out loud.
@@ericliu8434 If I'm not mistaken, going from 14nm to 7nm is just as big as 28 to 16 because it is half the size. So it is relatively a bigger decrease.
@@imo098765 The math works out that way. But as the saying goes, "it's not about what transistor density you have, it's about how you use it" or something like that. AMD themselves projected 25% more performance at the same power at 7nm, well below what Maxwell to Pascal was, and the Radeon VII is the proof of that. Nvidia's R&D team is vastly larger and better funded; the massive amount of architecture-level optimization that goes into modern Nvidia cards is why they get so much more out of node shrinks. AMD isn't going to solve this problem until they start making billions more dollars.
Looking forward to the next video! Thanks for always trying to get us these juicy details!
We know you're a pro Jim. Don't let a few fools absorb too much of your time and energy. Ignore them if you can, defend yourself if you must.
Man you are killing me, its taking so long!!!
Oh no bad Navi news!? You got me on the edge of my seat Jim.
WE ARE LEAK STARVED. Liked video 2 seconds in.
Very interesting. Thank you Jim, awaiting your part 2 after Easter with real anticipation...
Thanks for your coverage once again. And don't worry too much about the press doubting you or your info; I'm sure most of us don't care what they think. I for one come here first for my tech stuff, exactly because the info is so fresh that it may still change before release, and for the high-quality analysis/history videos.
I just spent a week or so binging everything from the Polaris build up until now while on my daily commute. It is amazing to see how far you have come and the kind of contacts you have gained. Never would have thought back in 2016 that you'd be being fed such amazing info from inside sources.
Was very interesting going back to rewatch the older stuff with hindsight. Many laughs were had at some of the earlier speculation and analysis leading up to launches but those all faded. I wasn't checking as I went through but it sure feels like as time has gone on your speculation on future events has become a lot more accurate.
But after all of that content all I can say is. More, please.
Thanks for accompanying me along the road with all this info. I'm on the way from my home to my parents' home for literally 30 mins, and this vid really made me excited for these Navis, especially since our household PCs and laptops are currently all-AMD hardware. That Navi 20CU APU will suit well to replace my 2200G 4K HTPC.
ive been here far too often i want part 2 :(
Working on it, I had a week off last week instead.
Part 2 next week, you said... it is now sunday.. I need it! :D :O
Working on it, I had a week off last week instead.
@@adoredtv Well deserved! I'll stock up on popcorn meanwhile :)
@@adoredtv Do we have an ETA now? It's killing me.
@@tigerd7528 planned for tomorrow.
@@adoredtv That will go perfectly with my Pizza then... :D
18:40 I want to believe this chart
Thx for your comprehensive information. Have a nice break!
The amount of effort you put in is incredible.
Another fantastic video Jim. Talk about fuelling the hype train! I can't wait for the next one. ☺
Thank you for these videos. Have a very, very good long weekend! And I'm looking forward to the next part! Good news or bad news for tech, I always appreciate the thoughts and ideas you bring to the table for each one. Personally I like to daydream about what might be, but because we have to live in the world that is, I always will appreciate someone who can bring the reality of the situation into the light. :) Even if it is just because you have good sources. :)
Thanks for video! Happy Easter!
Going with a chiplet design also makes AMD's delays and research funding reductions on the GPU line make a lot of sense, beyond just saving money in a cash-strapped company during the restructure. It seemed a bit reckless to throw away the entire GPU division like that. If they planned on going to chiplets, it would seem like a waste of funding to develop a monolithic chip design; instead, many of the R&D issues got resolved while developing Ryzen, leaving mainly a chiplet to design on a tested 7nm architecture. I would not be shocked if they sourced some of those engineers to work on Infinity Fabric and die reduction in the interim. The Radeon VII is just where monolithic chip design left off, and they released it with little marketing spending to recoup that R&D cost, not really to make a profit, and to act as a holdover until Navi for developers. If that all holds true, it really was a brilliant strategy and explains why Ryzen's chief designer was poached to play catch-up.
thank you for taking the time to do this
Thanks for giving us blue balls, Jim.
lol
Some video ideas:
•Is the x86 architecture becoming obsolete? Is there a compelling enough reason to replace it in the near future?
•What's happening with Intel's 10 nm?
One more: Possibility of Nvidia switching to Samsung from TSMC.
Funny, I just checked your channel not even 10 minutes ago to see if you'd uploaded, in case CZcams forgot to notify me. Can't wait to watch.
Awesome, love the new outro!
Excellent! Just made my day. An Adored video to start my long weekend. 😁
Mind blown!
You even cleared my confusion on the phoronix guy comment that I read a few days ago.
ill be getting Navi 20
Nice Outro! I'm really looking forward to the launch of NAVI and 3rd Gen Ryzen :D
@AdoredTV Even if the research isn't 100%, it's still a deeply interesting rabbit hole to tumble down nonetheless, and I'm more interested in the thought process as opposed to the actual result.
Or in other words, the journey is more satisfying than the final destination.
As for the detractors, the best thing you can do is let the research and the work you put in speak for itself. I still can't believe I'm still following after a good few years.
Your channel logo changed visuals/audio for the better. Nice idea now you are about to hit the 100K milestone soon.
It beggars belief that the channel isn't at the 1,000,000+ milestone. Alas, such is the world of YT and the world at large I suppose.
Frankly I think the logo has gone in the opposite direction...
If you could've heard the noises that came out of my throat when I got the notification for this.. Cheers Jim!
Great bit of info bud!! It's looking REALLY interesting for the chiplet design.
'That'll be for another video....' You big tease........ Have a great holiday bud. ^__-
always get all giddy when i see you've posted another video man
perfect timing. Done with dinner. Now on for adored!!
Hello Jim, thank you for your time and hard work. As always, great job. I do believe in you, Jim. People don't realize how hard it is to confirm these leaks and get information.
ff means "fast finish", so yes, it's conceding that you've lost
ForFeit
When "ff" showed up on the screen, I thought it meant 255 in hex.
Don't defend yourself so much. You've proved the trolls and haters wrong already over the past years.
They will hate no matter what. Normal people understand context and look at the bigger picture instead of focussing on unimportant details.
Yeah, he predicted the chiplet design such a long time ago. As soon as Lisa showed Epyc at Next Horizon, all doubts should have been gone for good.
@@Chuckiele true
@Adored TV Even if the research isn't 100%, it's still a deeply interesting rabbit hole nonetheless, and I'm more interested in the thought process as opposed to the actual result.
Or in other words, the journey is more satisfying than the final destination.
As for the detractors, the best thing Jim can do is let the research and the work he puts in speak for itself. I still can't believe I'm still following after a good few years.
@@drkRoss89 Exactly. His speculations and all the thought he puts behind them are the exciting part; if they turn out to be true, it's just a nice bonus. :D
I believe silicon in this case can be called a stone too.. :)
In the end it's all metals anyway.
What puzzles me is how AMD is gonna feed that 20 CU iGPU the necessary bandwidth.
Through the I/O die.
Stacked dedicated DDR4 DRAM? 64-bit @ 2666 MT/s minimum is probably much better than having it split between system RAM and graphics bandwidth, plus Pascal-level memory compression.
@@fishclaspers361 It's gonna take a lot more than that. The video card with that kind of memory bandwidth (GTX 1030 DDR4) is actually slower than even current Ryzen APUs.
@@Dj0rel Well, we can always widen the bus and increase the memory clock. Or use HBM2 if yields allow it.
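To put rough numbers on the bandwidth debate above, here is a back-of-envelope sketch. The bus widths and transfer rates are illustrative assumptions (single/dual-channel DDR4-2666 and a single HBM2 stack), not leaked Navi specs:

```python
# Peak theoretical memory bandwidth: bytes/s = (bus width in bits / 8) * transfers per second
def bandwidth_gbs(bus_width_bits, mega_transfers_per_s):
    """Peak bandwidth in GB/s for a given bus width and transfer rate (MT/s)."""
    return bus_width_bits / 8 * mega_transfers_per_s * 1e6 / 1e9

ddr4_x64  = bandwidth_gbs(64, 2666)    # single-channel DDR4-2666
ddr4_x128 = bandwidth_gbs(128, 2666)   # dual-channel DDR4-2666
hbm2      = bandwidth_gbs(1024, 2000)  # one HBM2 stack, 1024-bit @ 2.0 GT/s

print(f"DDR4-2666 x64:  {ddr4_x64:.1f} GB/s")   # ~21.3 GB/s
print(f"DDR4-2666 x128: {ddr4_x128:.1f} GB/s")  # ~42.7 GB/s
print(f"HBM2 stack:     {hbm2:.1f} GB/s")       # ~256.0 GB/s
```

Even dual-channel DDR4 is an order of magnitude short of the 300+ GB/s that cards like Vega 56 or the RTX 2060 use, which is why the thread ends up at HBM2 or heavy memory compression.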
As always, it's great to watch your videos. Sorry you have to deal with the skeptics so much.
I get the notification, I see AdoredTV, I follow the link immediately, simple.
The prices seem pretty reasonable. I hope they stay that way.
I would say TOO reasonable, seeing the Radeon VII at like 700€. Something way better, close to the 2080 Ti, for 430 bucks? Naaaaah, I don't believe it.
@@freepok Nope, about 60-70% more performance for the same price bracket. And if they use the chiplet design extensively, they can offer dGPUs at an extremely competitive price and still have higher margins than before.
In before NVIDIA lowers the price of Turing cards.
@@NANOTECHYT Oh, that would make Nvidia Turing dGPU owners cry lol.
I'm used to "ff" being used to tell the other person to forfeit, not for saying they forfeit..
Great work.
I wanted to draw your attention to a couple other quotes from the "monolithic" article:
"The challenge is that unless we make it invisible to the ISVs [independent software vendors] you’re going to see the same sort of reluctance."
"But the GPU has unique constraints with this type of NUMA [non-uniform memory access] architecture, and how you combine features."
"So, is it possible to make an MCM design invisible to a game developer so they can address it as a single GPU without expensive recoding?
'Anything’s possible…' says Wang."
“Yeah, I can definitely see that,” says Wang, “because of one reason we just talked about, one workload is a lot more scalable, and has different sensitivity on multi-GPU or multi-die communication. Versus the other workload or applications that are much less scalable on that standpoint. So yes, I can definitely see the possibility that architectures will start diverging.” (referring to the use of multi-die and multi-GPU for specific workloads rather than gaming).
www.pcgamesn.com/amd-navi-monolithic-gpu-design
The article was published nearly a year ago, about six months before the use of an I/O die was publicly confirmed. With that, they confirmed the system will see the memory as UMA instead of NUMA. In Ian Cutress's follow-up with Mark Papermaster, there are questions about whether everything is routed through the I/O die:
"IC: With all the memory controllers on the IO die we now have a unified memory design such that the latency from all cores to memory is more consistent?
MP: That’s a nice design - I commented on improved latency and bandwidth. Our chiplet architecture is a key enablement of those improvements.
IC: When you say improved latency, do you mean average latency or peak/best-case latency?
MP: We haven’t provided the specifications yet, but the architecture is aimed at providing a generational improvement in overall latency to memory. The architecture with the central IO chip provides a more uniform latency and it is more predictable."
"IC: The IO die as showed in the presentation looked very symmetrical, almost modular in itself. Does that mean it can be cut into smaller versions?
MP: No details at this time.
IC: Do the chiplets communicate with each other directly, or is all communication through the IO die?
MP: What we have is an IF link from each CPU chiplet to the IO die.
IC: When one core wants to access the cache of another core, it could have two latencies: when both cores are on the same chiplet, and when the cores are on different chiplets. How is that managed with a potentially bifurcated latency?
MP: I think you’re trying to reconstruct the detailed diagrams that we’ll show you at the product announcement!
IC: Under the situation where we now have a uniform main memory architecture, for on-chip compared to chip-to-chip there is still a near and a far latency…
MP: I know exactly where you’re going and as always with AnandTech it’s the right question! I can honestly say that we’ll share this info with the full product announcement."
www.anandtech.com/show/13578/naples-rome-milan-zen-4-an-interview-with-amd-cto-mark-papermaster
So with an I/O die on the graphics card, there is the potential that it will provide a UMA situation, masking the split-die setup. That would address the "making it invisible" and "anything is possible" statements.
Instead, the sensitivity of the communications would seem to be the primary issue, whether that means cache coherency, keeping latency stable to prevent stale data, or one die running away on one type of calculation while the other falls behind. Something like the HBCC or an I/O controller for the cache may help, but it would then be going off-chiplet to pull the data in. Setting up the IF for time-sensitive frame calculations would take some engineering, along with getting the cache right, but it seems they could have a solution for the NUMA issue. Standardizing IF2 lengths for memory or cache calls may also help, but I'm better at CPU than GPU analysis, so I want to put that out there.
Meanwhile, looking forward to part 2!
I am doing great Jim!
Love your videos!
I was just thinking last night when we were gonna get a new AdoredTV vid.
I get more excited when Jim posts a new video than for any episode of Game of Thrones.
The special edition Navi 20 is exactly what I am looking for.
This news about Navi was so expected… you could have easily predicted these results years ago, because it feels like history repeats itself with every new generation. For example, when Nvidia releases an RTX 2080, it won't be long before AMD provides the same or slightly better performance at a much more affordable price. And then people complain because they already bought the RTX 2080, because it came first. It feels like AMD is always a year or so behind, but at a much more affordable price. History repeats itself.
This week has been very exciting
Gimme that sweet sweet Navi 20! Can't wait to ditch my GTX1070 for team red.
I have been waiting for this :D
New upload by Adored? I'm pouring a drink!
Yessssss! Been waiting for a video!
This is a proper Nvidia-trolling naming scheme right there. RIP RTX 3080/3070/3060
lol can you imagine
RTX 4000 series
RX 5000 series
RTX 6000 series
....and beyond xD
C'mon AdoredTV, it's been nearly 2 weeks... we need part 2! :) JK take all the time you need... but not too long!
Tomorrow ;)
@@adoredtv Excited for it! It's technically tomorrow now :p Where's it at? jk
@@adoredtv 22 hours now. About time?
@@tigerd7528 Running late, it's a long video so won't be done by tonight. Tomorrow now.
AdoredTV thanks for telling us
This is gonna be good day with Jim's video
keep up the good work. we appreciate it!
Yup, this is the video I was waiting for.
Best tech YouTuber on YouTube.
Not just YouTube.
They helped me a lot by naming their products similarly to the mainstream trademarks.
Great video as always! Thanks!
I have a question: I heard/read somewhere that the 3000 series APUs will be 12 nm, which made a lot of sense to me at the time, since the 2000 series APUs were 14 nm. Did you hear anything about that? Do you have any thoughts?
Love your videos! Great logic, mixed with whatever information is known or speculated, to come to conclusions.
Of course, all this stuff needs to be viewed as speculation, and having flexibility in what's concluded is common sense... for intelligent viewers... ;)
Considering that a modern and capable GPU like the RTX 2060 or Vega 56 needs more than 300 GB/s of memory bandwidth, is it wise to separate the GPU core and memory controller onto two separate dies?
Could you ask them if they want to improve reflection handling and peak lighting.. average and dynamic?
Hey Jim! Great video as always, keep on going buddy 😆. Did the recent supposed "Navi" leak that Buildzoid was talking about confirm the bad stuff that you've heard, or did it conflict with your findings? Do you think it was leaked on purpose to counteract the warning you gave us last week that Navi ain't what we think it's gonna be? Sorry for bothering you, I'm just curious. Have a nice day.
Loving that outro Jim!! Cracking job