Meteor Lake looks to be the next big leap for Intel, where they attempt to do EVERYTHING at once! 0:00 - Future of Intel CPUs 0:16 - Tiles 1:36 - LP Cores 2:58 - New node
AFAIK AMD's CPUs didn't technically use chiplets until Zen 2. This is because Zen EPYC and Zen/Zen+ Threadripper CPUs were made out of chips which were effectively standalone CPUs, while with Zen 2 AMD split the cores from the I/O, and since then their server, desktop and HEDT CPUs are all built using two different types of chips. Regarding LP cores, it's going to be interesting if we see any performance regressions as a result of work going to the LP and E cores before it reaches the P cores. I think gaming might be especially affected.
@@talibong9518 It didn't. Zen/Zen+ Ryzen CPUs did separate their cores into two Core Complexes (commonly referred to as CCXes) however it was still all one chip. If we were to count that as a "chiplet design" then Intel CPUs with P and E cores would also count as a "chiplet design" since in that case you also have separate groups of cores within the same chip. Edit: to be clear Zen EPYC and Zen/Zen+ Threadripper CPUs used the same 8 core dies as desktop Ryzen CPUs which means that they also had 2 CCXes within each die.
20A could refer to 20 angstrom (Å), which means 2nm. Maybe it sounds cooler that way, I don't know. Naming schemes aside, at that scale, if what they claim is actually true and it is actually a 2nm architecture, it will be a very impressive technological achievement indeed, because this means that they found a reliable way to overcome the quantum effect limitations that are rather prominent at this scale. To put it in perspective, that means that the gate electrode of the FET is only 20 ATOMS WIDE(!) (give or take). I can't possibly give an explanation as to how that could happen (as it must be a corporate secret), but very cool nonetheless.
Meteor Lake is a revolutionary change for Intel, the biggest in a decade. Tiles (chiplets), a new more efficient Intel 4 node, brand new CPU and iGPU architectures (AV1 encoding!), 133% the GPU cores of last gen, a focus on power efficiency, an NPU (AI cores). This is basically the best of Apple and AMD combined, with even more.
@@MrVladko0 Nope, latest leaks just suggested that it was around 30% IPC and clocks about the same. Prior leaks indicated that it was around 20% IPC and a 5-10% clock increase.
Hey, is it possible to make the thing where it gives the task to the low-performance core and then moves it up if needed into a toggle? Can it be turned on and off, for gaming for example, or is this a hardware design thingie? I'm not very savvy, sorry for any mistakes
So while AMD lowers costs by designing one chiplet for both PC and server with different I/O dies, Intel is more flexible by designing many modular tiles that can be mixed and matched?
Sort of, though it means they will have to create multiple monolithic "SoC" dies for all sorts of applications. I guess they evaluated it and decided not to go with separate chiplets for compute, but some sort of chiplet design is necessary, as monolithic CPU and GPU dies are getting to a point where you wouldn't get many working ones from a wafer. Nvidia also needs to switch to chiplets someday. Or not, I guess, since they don't mind selling their GPUs for insane markups. If the 5090 is really 50% larger, the price of that card will be in the thousands, as long as the size isn't many chiplets combined of course.
AMD still had to design separate dies for laptops. I believe Ryzen 7000 laptops have like 4-5 different dies now, with monolithic 4-core Zen 2, 8-core Zen 3, Zen 3+, Zen 4, and the chiplet-based 16-core Zen 4 (which is the desktop die), all under the Ryzen 7000 naming.
I went with an AMD CPU just because of efficiency. I think it's nice to see Intel actually try instead of overclocking the shit out of their processors. Though that forced LP E-core on the SoC might cause latency and a lot lower single-core performance. I'd say if these new processors can beat Apple's ARM chips, it would be great
3nm to 20A means they're going from 3 nanometers (30 angstroms) to 20 angstroms. 1nm = 10Å. As you pointed out, these sizes don't actually mean much of anything anymore, as they definitely aren't measuring the physical dimensions of their transistors now. Intel reasons the latency induced by the SoC scheduler will be no worse than AMD's latency from their massive IO die to the chiplets over their Infinity Fabric. They're making up for their uncompetitive power consumption issues on a larger node by using these E cores for most smaller processes on a system (OS management and simple application computations).
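[Editor's note] The unit arithmetic in the comment above can be sanity-checked with a tiny, purely illustrative snippet (the helper names are made up for this sketch):

```python
# 1 nm = 10 angstroms, so "3nm" = 30 A and Intel's "20A" = 2 nm.

def nm_to_angstrom(nm: float) -> float:
    """Convert nanometers to angstroms."""
    return nm * 10.0

def angstrom_to_nm(angstrom: float) -> float:
    """Convert angstroms to nanometers."""
    return angstrom / 10.0

print(nm_to_angstrom(3))    # 30.0  -> "3nm" is 30 angstroms
print(angstrom_to_nm(20))   # 2.0   -> "20A" is 2 nm
```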
I'm a chief of information technology from Finland and have seen computers evolve my whole life. The scale of modern computational capacity rivals the vastness of outer space. It's getting bigger at an increasing rate. Mind boggling, and we've barely scratched the creative surface of what to do with it all.
@unsubtract I agree that 20A probably means angstrom, but you're right that it's confusing, since A usually means amperes. They should have used Å as it is the standard symbol for angstrom.
Very cool tech, but there are problems they really need to solve. Weird latency issues with today's p cores and e cores, even though the p cores get the work first.
Definitely larger gains for the laptop market which Intel have needed for years, their uncore power consumption has been pretty bad compared to AMD and they might finally have a competitive laptop iGPU too. Not often you see the launch of a foundational architecture that'll be built upon and improved for many generations to come, I'm sure there'll be a lot of low hanging fruit and we should see big gains over the next 4 or 5 gens from Intel with the overall package
I hope that they will also be compatible with Z690 motherboards like the 13th gen of Intel CPUs is. Then it would be an easy upgrade for me again for the next generation
@@soundspark Well I was using a 9th gen i7 until a couple of months ago, when I upgraded to a 13th gen i9. I'm sure I can use that CPU for many years until I need another upgrade.
Intel's naming isn't a mistake, it's a correction of the previous naming scheme. Intel's 14nm node has always been similar in actual transistor size to TSMC's 10nm node, so they basically corrected the naming scheme to match TSMC's marketing. Plus, TSMC isn't really innocent themselves for naming one of their 5nm nodes "N4".
It would be nice if we could force some applications to just use the P cores, like with games or what have you. Like setting the priority in the Task Manager, just setting which cores are assigned to which programme. Naturally I don't care about power consumption all that much, other than it dropping the temps so we can push voltages and clock speeds up further; real-world performance is all I really care about if I'm spending £400+ on a new CPU. People aren't upgrading their CPU to save a few £'s a year with a more efficient CPU, they upgrade because they want more performance. Mobile stuff is different obviously, but on desktop, for most people, performance is the most important thing.
On process node naming, the "nm" hasn't reflected any physical feature for a very long time and today is in essence a pure marketing number. Intel's problem is the scale they chose was out of step with TSMC and Samsung, which is what they're correcting since Intel 7 and will be using going ahead. While Intel 7 might be previously known as Intel 10nm, it is comparable to TSMC N7.
Which is funny because the actual distance apart isn't even 7nm, it's 32nm. If it was actually named after the physical distance, we'd still be on 32nm. Which is nuts
When you say tiles can be made by different foundries, does that mean, different tiles are glued together? Or the wafer is passed on to the different foundries??
It's different wafers from different foundries; they can use any node or foundry. Intel uses an active interposer (silicon) as the baseboard that the tiles sit on and transfer data through. There is no glue lol. It's basically smart chips on top of a 'dumb' chip that connects the smart chips, but it's far more technically complicated.
I think MCM is just a stop gap before a fully integrated single die solution. MCM chips rely on high-speed interconnects to communicate between the dies and the substrate. These interconnects may be susceptible to mechanical stress, thermal expansion, electromigration, and other factors that can cause them to break or degrade over time. This may affect the functionality and performance of the MCM chip. I will not be rushing to buy them.
AMD don't really use multiple core chiplets anymore; they only rely on them for the bigger CPUs like the 12 and 16 core models, the 8 cores just have one chiplet with 8 cores
I'm not the smartest pants, but wouldn't having a workload go through the low-power island, then the E cores, to finally arrive at the P cores introduce significant latency?
20A = 20Å (Ångström) = 20 × 10⁻¹⁰ m = 2 × 10⁻⁹ m = 2nm...? I mean yeah smart play to change units but I'd've expected _after_ slipping below one nanometre (if this is even possible with our current physics/engineering knowledge)
@@lharsay sounds about right, I mean Philip mentioned in the video that 20A had an abstract meaning because they can't predict what they'll be manufacturing
Improving energy efficiency by adding another CPU inside the CPU. Unless the mini CPU is ARM-based, I don't know how an extra processor is going to improve performance. Maybe it won't be used at all because it's so weak. I wonder if there's going to be some way of testing only the mini CPU.
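[Editor's note] In principle you could test an individual core by pinning a benchmark to it. A minimal sketch, assuming the LP cores show up as ordinary logical CPUs (their IDs are machine-specific, check lscpu) and a Linux system (`os.sched_setaffinity` is Linux-only); no Meteor Lake hardware is assumed here:

```python
import os
import time

def busy_loop(n: int = 500_000) -> float:
    """Return the time taken by a simple integer-sum loop."""
    start = time.perf_counter()
    total = 0
    for i in range(n):
        total += i
    return time.perf_counter() - start

# Pin the process to each of the first two allowed logical CPUs in
# turn and time the loop; on a hybrid chip, a slower core would show
# a longer elapsed time for the same work.
for cpu in sorted(os.sched_getaffinity(0))[:2]:
    os.sched_setaffinity(0, {cpu})
    print(f"cpu {cpu}: {busy_loop():.4f}s")
```

This only measures relative speed, of course; whether the scheduler would ever route your real workload to the LP cores is a separate question.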
At the end of the day, only TWO things matter: Power & Efficiency. If you win the crown at the expense of HEAT...NO GOOD. If you don't win the crown...oh well. That's NOT GOOD. So U need to be FAST...and COOL!
AMD should probably do something like this, as the Infinity Fabric is already quite a bottleneck; they could combine tiling and vertical stacking for super dense processors
And also add 1 or 2 extra small CPU cores in the I/O die so the main compute die can be completely powered off under light load (browsing, watching videos, doing simple office tasks), further lowering the idle power consumption
It depends what you mean. If you mean that the iGPUs get the capability to run games with raytracing and frame generation, then maybe, but it will take some years to get the performance to where additional tensor cores really make sense on these small GPUs, even though they try to integrate them earlier. If you mean that AI is integrated in the CPU with ASICs and software making use of it, then they already do that, but as a user you won't notice, because it is more likely happening under the hood.
CPUs have gotten many gamechanging features over the years. You just don't really notice them, because you do not see them. For example DDR5, more cores, PCIe 5, USB 4, 3D V-Cache. The cores also constantly improve with new instruction sets, security and power efficiency. I know this is not really exciting if you are only playing games, but when you need the performance and connectivity then these things are gamechanging.
@@killersberg1 I mean you are right, but these features are not new. 3d V-Cache also only allows more Cache memory. Sure these features increase the performance of the CPU, but they do not enable new features.
@@Tri-Technology What is RT? It is a "new" feature, but in actuality it is something that has been done for many years and is now just done in realtime. Before, we would just approximate lighting, which was good enough 90% of the time, and RT doesn't really look impressively amazing when compared to good approximated lighting. I do not consider RT a gamechanging feature; the hype mostly stems from marketing imo. I think CPUs improved more relative to GPUs in the last five years, although they had been behind before due to Intel, so maybe that cancels out.
@@2kliksphilip 15th gen is going to be Arrow Lake on the 20A process node with backside power delivery, but we don't know details from an architecture standpoint. 14th gen is Raptor Lake Refresh on desktop and Meteor Lake on mobile, which means that on desktop from 14th to 15th gen there will be a 2 (3?) node jump. Edit: Oh well, apparently they just said that Meteor Lake would come to desktop in 2024.
I can't wait to see how badly Windows' thread scheduler is going to be tripped up by more types of processors it has to schedule for. You know, more than it already does with the current E/P core divide. (ok, to be fair, the Linux kernel had it even worse for quite a while, because Intel didn't have Thread Director ready for Linux yet)
2kliksphilip talking about pc hardware are my favorite 2kliksphilip videos
Facts
Saaaaame
2kliksphilip talking are my favorite 2kliksphilip videos
For me, it’s 2kliksphillip talking about arma
I like all his videos. His music is really good too. It reminds me of Bjorn Lynne's music, the guy that made the music used by Digital Foundry.
This video contains out-of-date information about how AMD CPUs are put together these days. They have an IO die that has the interconnects between the CPU dies, and the IO die is also how the CPU dies reach the rest of the system.
This needs to be at the top; if they made a video with this much detail they shouldn't use outdated info
I think Philip made this decision consciously; this video is meant to be a short introduction, not a deep dive
Either Philip didn’t know or he purposefully did that to make this video easier to explain.
@@SonicMaster519 its clear he did it to streamline the video, philip has made previous deep dives into zen2/3, he knows his stuff
Not just that, but the IO die is using an older, larger lithographic process as it doesn't really gain any benefit from moving smaller, which is one of the reasons he gave for Intel's move to a tile-based design.
Philip's videos brighten my day, even if it's night
same
tf bro 😂😂😂😂😂😂😂😂😂
You probably should get that checked out...
after you have watched the video of course
Yeah, so much it becomes day again
frfr
The 20A is likely 20 Angstroms, which is 2 nanometers. I assume that this is because they need granularity for future nodes - they can call the next next next generation 15 A and so on.
And it opens the possibility of naming a node 20B or something if it's an improved process
All of these names have been marketing monikers for over a decade. None of it has anything to do with optical shrinks anymore.
@@martinum4 They supposedly have 18A for that.
@@martinum4imagine reading the original comment and coming to that conclusion 😂😂😂
@@BlueBillionPoundBottleJobs Intel originally planned 2 Processes per Wavelength, thus i came to that conclusion :)
i can't wait for the reviews showing if their efficiency first approach is able to also increase performance in some way. I feel like ryzen has been dominant for some while now and maybe this will shake things up a bit.
amd is expensive now too
@@socialist_elmo nahhhh, the prices have been adjusted to compete w intel
@@2kliksphilip I would argue that the 5600 was the most ground-breaking CPU since Zen 1
@@om0206 I respect your opinion but the most impressive one was the 5800X3D. But hey, at least we're not stuck with 4c/4t anymore.
@@BottomOfTheDumpsterFire that came afterwards though.
If this is done properly imagine the Laptop battery gains!
Just use an arm based laptop
@@TEENYcharma Just lose half the apps you use
@@TEENYcharma Intel might get close to ARM
@@TEENYcharma From the Chinese slave trader company with a fruit logo?
@@shadmansudipto7287 Banana corp? I dislike those guys. A lot.
Cool video. Real HW news without taking up 30 minutes.
Intel 7 is 10nm, but its transistor density is that of TSMC's 7nm. Same for Intel 4: it's 7nm, but its transistor density is that of TSMC's 5nm
Foveros in greek means "someone who is fearful" - hope that's not indicative of anything!
I dunno, if they can push the power savings into high clock speeds, and the tiles make them cheap to make, we may have a pretty good jump in performance to price again
3:25 This is a bit misleading. The actual lengths given haven't meant anything in particular for a good while now (it used to be the lithographic limit, the feature size; now it is just named so scaling roughly behaves as it did back when we actually directly scaled via feature size alone).
Intel's renaming was for a good reason: in terms of transistor density, their Intel 7 process is somewhere around TSMC N7 and N7+. So it is actually directly comparable; their 10nm process just used to be way better than anything else's, so they changed their marketing to be in line with TSMC again.
I sure hope the scheduling works properly in different applications. Or I'll have to pin processes to cores with some sort of utility or taskmanager or something
It already does. The different cores only had scheduling issues for like the launch month of 12th gen back in 2021. 2 years later and it's completely flawless. Having the e-cores enabled benefits most games anyways, even though they aren't the high performance cores.
@@__aceofspades Besides Cyberpunk 2077 and BF2042, no other game benefits from e-cores. The scheduling works fine now, but we're talking about 14th gen here, where a third set of cores is a thing and the weakest cores are prioritized. Hopefully that prioritization isn't a thing with games.
Hey buddy, it's not 2019 anymore champ
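[Editor's note] The core-pinning mentioned at the top of this thread can be sketched in a few lines. This is a minimal, illustrative example, assuming a Linux system (`os.sched_setaffinity` is Linux-only; on Windows, Task Manager's "Set affinity" dialog or `start /affinity` plays the same role), and which logical CPU IDs map to P-, E- or LP-cores is machine-specific:

```python
import os

# CPUs this process is currently allowed to run on.
allowed = os.sched_getaffinity(0)   # pid 0 = the calling process

# Demo: pin ourselves to a single core. On a real hybrid CPU you
# would pass the set of P-core IDs for your machine instead.
target = {min(allowed)}
os.sched_setaffinity(0, target)

print(sorted(os.sched_getaffinity(0)))
```

Utilities like `taskset -c 0-7 ./game` do the same thing from the shell without touching code.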
This is very interesting! BUT can you please tell me what music you used in the background - it's so nice!!
i didnt even know the person uploading game vids and the person uploading pc vids are the same person lol, love the vids
Always a GREAT day when Philip uploads❤
I love your summaries of PC hardware news
Slight correction on the first point of the video. Foveros (a 3D stacking technology) and the tiled approach (processor disaggregation) are quite different. Foveros refers to how the tiles are connected together, while disaggregation is the separation of the manufacturing of the processor to reduce complexity. 14th gen is the first client product to use the tile-based approach, while server products like Sapphire Rapids have been using it previously. Foveros, on the other hand, is a new packaging technology where the tiles are stacked on top of each other, thus getting better power consumption and latency than AMD's approach, as mentioned in the video
Exciting stuff up ahead for Intel! And I'm glad the Ryzen series stepped up and brought some real competition to the CPU space. Great video as always!
Apple are the ones truly pushing it forward right now, unfortunately.
I'm afraid you're a bit late to the party... Zen came out 6 years ago!
I like your dispatch pfp :)
@@kingeling Yeah Zen came out. And was dogshit until it hit the 4th gen 5000 series.
@@onebigfatguy Lol no, it was almost on-par with Intel for much cheaper up until Zen 2, do your research.
Cool to see it's a mix of different types of lithography processes for the dies. It's kinda like Zen 3 with the I/O die being on 12nm while the CPU die is on a 7nm node. AMD does this as well for the RX 7000 series, where the cache is on an older node because SRAM transistor scaling has basically flatlined. Interesting to note is that Intel wanted to do GPU tiles for Arc, but went with a monolithic design in the end. That would have been way harder to do than separating the cache and graphics dies (MCD, GCD) like AMD has done. There are massive advantages and I'm very excited to see the chiplet/multi-die future!
Maybe I don't know enough about tiling, but it seems exactly the same as chiplets (i.e. multiple dies connected together) and suffers from the same disadvantages?
@@strawbaria The hope here is that because each chiplet does different things (i.e. graphics, processing, IO etc) there wouldn't be as much crosstalk and latency. Only time will tell though.
@@strawbaria intel's version should have less latency but increased complexity and cost. but AMD has had several generations to improve the infinity fabric and they are also introducing the "Zen4C/5C" chiplets (their version of E cores). Zen 5 seems to be heavily reworking the cache system to reduce dependency on the infinity fabric and massively increase IPC (20 to 30% IPC increase).
thanks for highlighting the chip size for intel
Who made the thumbnail? looks pretty sick.
surprised you didn’t mention apple’s m chips. The structure seems very similar visually and might even be what intel considers their direct competitor
What do you mean the structure is similar? Are you comparing the stylized "die shots" (they are not very accurate) that Intel and Apple show? Or do you mean the P- and E-cores? (Intel started doing this with Alder Lake which came out in 2021 and most ARM based chips did this already)
The most important distinction is that Intel creates multiple dies and combines the functionality of each die to create an SoC (System on a chip) (using their packaging technology Foveros).
Apple creates a monolithic die (fabbed at TSMC). Apple packages the M1 and M2 such that the RAM and the SoC are on the same package, whereas Intel does not do that (although they did show a test chip with on-package memory (it's not exactly a unique concept); however, I don't think that is gonna be a SKU for their Meteor Lake lineup).
There wasn't really a reason to mention Apple when the primary commentary was about chiplets / tiles.
Apple isn't even in the same ballpark as amd.
@@HaasTheFirst Wdym by that?
@@HaasTheFirst?
The apple M chips isn’t a direct competitor because it’s only available in Apple hardware.
it is so gonna be a hassle to get the schedulers to do their work correctly.. I'm intrigued!
I am not an expert, but it was my understanding that it is not possible to compare transistor sizes between Samsung, TSMC, Intel etc. as they each use different ways of measuring, so Intel wasn't necessarily behind, but they have been stuck on their 10nm for a long time (by process node time scales)
You can't compare transistor gate size (which is what the NM number used to come from), and there are other factors that matter, but you can compare transistor density. Doing so puts Intel 10nm (now called Intel 7) slightly behind TSMC 7nm, which debuted in 2019. Even if Intel hurries and finishes Intel 4 soon, it will probably be on par with TSMC 5nm, which will still put them a full node behind.
@@capsulate8642 given that it is interesting to see how close in performance intel is able to get compared to amd. I assume that it shows the importance of the aspects of chip design aside from transistor density. My I9 does get very hot though even with a 420mm rad, probably something to do with the sustained 310W
what's even funnier is that they give some transistors names like "3nm" and such, but they are actually 10nm in size...
@@MrTeddy12397 Anastasi in Tech has a video explaining that the "nm" is the theoretical transistor size of an equivalent planar transistor. Since current transistors use different techniques such as FinFET and gate-all-around, they cannot be compared in any meaningful way
@@MrTeddy12397 I believe that is because of some 3d stuff they are doing that makes them equivalent to 2nm which is what makes it hard to compare cause they all have different ways of working that out
i wonder if going more efficient may increase performance just by reducing heat, so you can work them more
Yes intel beast
@@lightward9487 lol i wonder :D right now im rocking a 5600x so I'll stick with AMD until i can't upgrade on am4 then ill have to find who has the best price to performance
It doesn't work like that unfortunately - apparently Intel's existing P-cores have worse efficiency
@@strawbaria Because performance cores don't care about energy efficiency; there are efficiency cores for that
It's somewhat amazing to see that there's a whole world of ways a company can implement the x86 architecture.
Wonder if it will be that proposed x86S architecture?
excited to read the Userbenchmark reviews.
"Gamers need to look no further than the i3 8350K"
I love Philip talking about stuff that I'll never be able to purchase...
But it makes me dream tho!
Have you done any videos on DLDSR?
That music at the end was nice what was it?
The A in 20A stands for angstrom, and 10 angstroms = 1 nanometer.
Also, British English. Always refreshing when so much YT is not.
The Intel sect is still huge in the eastern EU. People still stick with the blue camp despite the insane power draw and the latency issues caused by the tiny cores.
I disagree.
Especially in the mobile segment, not enough manufacturers offered AMD CPUs, and there they were really way better when it comes to energy consumption. On tower PCs it doesn't matter too much, because most games are bottlenecked by the GPU, or you already get 144+ fps.
pretty exciting stuff for chip competition. Hopefully intel pulls it off, love to see innovation and competition again.
Aaaand these innovations come out after I just bought my CPU.
Hah, be happy that it's not the 1990s! Back then, you'd buy a new PC, and 2 months later it would already have been superseded by a newer, better, much faster generation of computers.
yea but all those are shit in comparison to modern CPUs& GPUs, like thousands of times slower so like get your boomer stuff outta here bro@@antred11
I don't see any innovation besides a new way of confusing the consumer in here.
@@antred11 You're right! My father always tells me that back in his day, building a "futureproof" PC was (and still is) a bad idea. Just build a mid-range PC and upgrade it over time; no need to build high-end PCs for usual activities.
Meteor Lake is mobile; desktop gets Raptor Lake+ this year (10% improvement). Later next year is when all these improvements come to desktop, so you're fine.
2:19 What does it mean that the work can't be done on the most efficient part of the processor? Does this work contain instructions that aren't implemented in that part of the processor or what?
AFAIK AMD's CPUs didn't technically use chiplets until Zen 2. This is because Zen EPYC and Zen/Zen+ Threadripper CPUs were made out of chips which were effectively standalone CPUs, while with Zen 2 AMD split the cores from the I/O, and since then their server, desktop and HEDT CPUs are all built using two different types of chips.
Regarding LP cores, It's going to be interesting if we see any performance regressions as a result of work going to the LP and E cores before it reaches the P cores. I think gaming might be especially affected.
I'm pretty sure Zen 1 used dual core chiplets, at least with Ryzen
@@talibong9518 It didn't. Zen/Zen+ Ryzen CPUs did separate their cores into two Core Complexes (commonly referred to as CCXes) however it was still all one chip. If we were to count that as a "chiplet design" then Intel CPUs with P and E cores would also count as a "chiplet design" since in that case you also have separate groups of cores within the same chip.
Edit: to be clear Zen EPYC and Zen/Zen+ Threadripper CPUs used the same 8 core dies as desktop Ryzen CPUs which means that they also had 2 CCXes within each die.
20A could refer to being 20 angstrom (Å) which means 2nm. Maybe it sounds cooler that way, I don't know.
Naming schemes aside, at that scale, if what they claim is actually true and it is actually a 2nm architecture, it will be a very impressive technological achievement indeed, because this means that they found a reliable way to overcome the quantum effect limitations that are rather prominent at this scale. To put it in perspective, that means that the gate electrode of the FET is only 20 ATOMS WIDE(!) (give or take).
I can't possibly give an explanation as to how that could happen (as it must be a corporate secret), but very cool nonetheless.
Meteor Lake is a revolutionary change for Intel, the biggest in a decade. Tiles (chiplets), a new more efficient Intel 4 node, brand new CPU and iGPU architectures (AV1 encoding!), 133% the GPU cores of last gen, a focus on power efficiency, an NPU (AI cores). This is basically the best of Apple and AMD combined, with even more.
Only the E-cores are getting a new microarchitecture. The P-cores stay on a variant of Golden Cove.
AMD is dropping ryzen 8000 next year with 30% IPC gains, so it'll probably be obsolete before it even hits shelves.
@@PineyJustice The latest leaks suggest 14-19% IPC instead
there is nothing revolutionary about it.
@@MrVladko0 Nope, latest leaks just suggested that it was around 30% IPC and clocks about the same. Prior leaks indicated that it was around 20% IPC and a 5-10% clock increase.
Finally more competition
Hey, is it possible to make the thing where it gives the task to the low-power one, then moves it up if needed, a toggle?
Can it be turned on and off, for gaming for example, or is this a hardware design thingie?
I'm not very savvy, sorry for any mistakes
Did you use AI upscaling on those Intel marketing graphics? Some of the text looks very suspiciously artifacted.
I think Intel should name their next generation Sodium Lake.
As they will mimic sodium being thrown in a lake when installed in your PC.
So while AMD lowers cost by designing one chiplet for both PC and server, with different I/O dies, Intel is more flexible by designing many modular tiles that can be mixed and matched?
Sort of, though it means they will have to create multiple monolithic "SoC" dies for all sorts of applications. I guess they evaluated it and decided to not go with separate chiplets for compute, but some sort of chiplet design is necessary as the monolithic CPU and GPU dies are getting to a point where you wouldn't get many of them working from a wafer.
nVidia also needs to switch to chiplets someday. Or not I guess, since they don't mind selling their GPUs for insane markups. If 5090 is really 50% larger, the price of that card will be in the thousandS, as long as the size isn't many chiplets combined of course.
AMD still had to design separate dies for laptop. I believe Ryzen 7000 laptops have like 4-5 different dies now, with monolithic 4-core Zen 2, 8-core Zen 3, Zen 3+, Zen 4, and chiplet-based 16-core Zen 4 (which is the desktop die), all under the Ryzen 7000 naming.
I went with an AMD CPU just because of efficiency. It's nice to see Intel actually try instead of overclocking the shit out of their processors. Though that forced LP E-core on the SoC tile might cause latency and much lower single-core performance. I'd say if these new processors can beat Apple's ARM chips, it would be great
We need a DLSS 3.0 (and .5) video.
Your original one was great
guess I'll wait for this to release if I'm making a new PC!
3nm to 20A means they're going from 3 nanometers (30 angstroms) to 20 angstroms. 1nm = 10A. As you pointed out, these sizes don't actually mean much of anything anymore, as they definitely aren't measuring the physical dimensions of their transistors now. Intel reasons the latency induced by the SoC scheduler will be no worse than AMD's latency from their massive IO die to the chiplets over Infinity Fabric. They're making up for their uncompetitive power consumption on a larger node by using these E cores for most smaller processes on a system (OS management and simple application computations).
What a time to be alive.
I'm a chief of information technology from Finland and have seen computers evolve my whole life. The scale of modern computational capacity rivals the vastness of outer space. It's getting bigger at an increasing rate. Mind boggling, and we've barely scratched the creative surface of what to do with it all.
my next gen might never exit my body.
20A probably means 20 angstrom, an angstrom is 0.1 nanometers.
So if it does go down to 2nm or 20A, that's pretty impressive.
It almost definitely will not. But they won't let that stop them from naming it so!
@unsubtract 20 is also bigger than 10 7 5 3 and 1 so my brain hurt
Probably? It literally says angstrom in the graphic shown in the video.
@unsubtract I agree that 20A probably means angstrom, but you are right it is confusing that it usually means Amperes. They should have used Å as it is the standard symbol for angstrom.
They're just joining all the other companies in having node names that don't match what the nodes actually are @@UnimportantAcc
integrate ARM for some special optimisation alongside the x86 or even better - RISC-V!
Seems like "islands" are in fashion
Very cool tech, but there are problems they really need to solve. Weird latency issues with today's p cores and e cores, even though the p cores get the work first.
would be cool if intel made a chip
with a tile of small cores
and another tile with big cores
Definitely larger gains for the laptop market which Intel have needed for years, their uncore power consumption has been pretty bad compared to AMD and they might finally have a competitive laptop iGPU too. Not often you see the launch of a foundational architecture that'll be built upon and improved for many generations to come, I'm sure there'll be a lot of low hanging fruit and we should see big gains over the next 4 or 5 gens from Intel with the overall package
Userbenchmark is going to have a field day with this one😂
I hope they will also be compatible with Z690 motherboards, like the 13th-gen Intel CPUs are. Then it would be an easy upgrade for me again for the next generation
It appears 14th Gen is the end of the line for Z690 and Z790 motherboards.
@@soundspark Well, I was using a 9th-gen i7 until a couple of months ago, when I upgraded to a 13th-gen i9. I'm sure I can use that CPU for many years until I need another upgrade.
@@MrTefe If you had only recently bought a 13th Gen then it would have been a good idea to have got a Z790 board. Z690 came with the 12th Gen.
@@soundspark I couldn't find any good-looking white Z790 mobo. I like the Z690 Formula from Asus
@MrTefe In fairness I have a Z690 too but I don't have any PCIe devices to take advantage of Z790.
I'm just going to smile and nod, not really understanding the most of this, but MAN does it intrigue me.
"nm" is just branding, not a standard. The +++ steps weren't really a lag behind per se; it's all done on the same equipment
I was looking for Intel CPU arch, and this video pops up.
I thought this guy was a computer engineer who explains architectures lol.
Hopefully we will soon see 3D stacked neural net cpus. It's time for skynet.
1:40 ADD MOAR CORES
Intel’s naming isn’t a mistake, it’s a correction of the previous naming scheme. Intel’s 14nm node has always been similar in actual transistor size to TSMC’s 10nm node, so they basically corrected the naming scheme to match TSMC’s marketing. Plus, TSMC isn’t really innocent themselves, having named one of their 5nm nodes “N4”.
Intel is making kernel's scheduler even more complex lol
It would be nice if we could force some applications to use just the P cores, like games or what have you. Similar to setting the priority in Task Manager, you'd just set which cores go to which programme. Naturally I don't care about power consumption all that much, other than it dropping the temps so we can push voltages and clock speeds up further; real-world performance is all I really care about if I'm spending £400+ on a new CPU. People aren't upgrading their CPU to save a few £'s a year with a more efficient chip, they upgrade because they want more performance. Mobile stuff is different obviously, but on desktop, for most people, performance is the most important thing.
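For what it's worth, you can already do this by hand today via CPU affinity. A minimal sketch, assuming Linux (`os.sched_setaffinity` is Linux-only) and hypothetically assuming the P cores are logical CPUs 0-7; the numbering varies per machine, so check yours first:

```python
import os

# Hypothetical P-core IDs; on a real system, check which logical
# CPUs map to P cores before pinning anything.
P_CORES = set(range(8))

def pin_to_cores(pid: int, cores: set) -> set:
    """Clamp the requested core set to what this machine actually has,
    apply it if the platform supports it (Linux), and return the set used."""
    if hasattr(os, "sched_getaffinity"):
        available = os.sched_getaffinity(pid)
    else:
        available = cores  # non-Linux: no-op fallback
    chosen = (cores & available) or available
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(pid, chosen)
    return chosen

if __name__ == "__main__":
    # Pin the current process (pid 0 means "self" for these calls).
    print(sorted(pin_to_cores(0, P_CORES)))
```

On Windows the rough equivalent is Task Manager's "Set affinity" dialog or `start /affinity` from cmd; the OS scheduler then only places the process on those cores.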
Saw that 32 sec after upload
I saw it 19
On process node naming, the "nm" hasn't reflected any physical feature for a very long time and today is in essence a pure marketing number. Intel's problem is the scale they chose was out of step with TSMC and Samsung, which is what they're correcting since Intel 7 and will be using going ahead. While Intel 7 might be previously known as Intel 10nm, it is comparable to TSMC N7.
so should i wait for 14th gen yes or no
currently i have 9900KS
Their nm is misleading the same way every other chip manufacturer's is misleading. Intel 7 is on par with TSMC's 7 "nm"
Which is funny cause the distance apart isn’t even 7nm, it’s 32 nm
If it was actually named after the physical distance, we’d still be on 32 nm
Which is nuts
i hope we will get a version without integrated graphics
does anyone know what the songs in the background are?
Does Meteor Lake refer to how hot they are going to run? ;)
When you say tiles can be made by different foundries, does that mean, different tiles are glued together? Or the wafer is passed on to the different foundries??
It's different wafers from different foundries; they can use any node or foundry. Intel uses an active interposer (silicon) as the baseboard that the tiles sit on and transfer data through. There is no glue lol. It's basically smart chips on top of a 'dumb' chip that connects the smart chips, but it's far more technically complicated.
@@__aceofspades we need a video explaining this process 😁
This lineup of processors would kill the prospect of overclocking.
What about 3d cache?
I think MCM is just a stop gap before a fully integrated single die solution.
MCM chips rely on high-speed interconnects to communicate between the dies and the substrate. These interconnects may be susceptible to mechanical stress, thermal expansion, electromigration, and other factors that can cause them to break or degrade over time. This may affect the functionality and performance of the MCM chip. I will not be rushing to buy them.
AMD don't really use multiple compute chiplets for everything; they only rely on them for the bigger CPUs like the 12 and 16 core models. The 8-core parts just have one chiplet with 8 cores
0:05 95% of cpu launches summarized
I'm not the smartiest pants, but wouldn't having a workload go through the low-power island, then the E cores, to finally arrive at the P cores introduce significant latency?
do you really care about a process taking longer to start?
Even if it takes significant time, it’s not going to take more than a second.
is there ddr4 support?
20A = 20Å (Ångström) = 20 × 10⁻¹⁰ m = 2 × 10⁻⁹ m = 2nm...?
I mean yeah smart play to change units but I'd've expected _after_ slipping below one nanometre (if this is even possible with our current physics/engineering knowledge)
These names mean nothing by now; they only refer to transistor density increases over the old nodes, not a physical parameter of a given transistor.
@@lharsay sounds about right, I mean Philip mentioned in the video that 20A had an abstract meaning because they can't predict what they'll be manufacturing
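The arithmetic in this thread is easy to sanity-check with a throwaway snippet (the node names themselves are just marketing labels, as the replies above note; only the unit conversion is exact):

```python
# 1 nm = 10 angstroms, so "20A" corresponds to 2 nm on paper.
ANGSTROMS_PER_NM = 10

def angstrom_to_nm(a: float) -> float:
    """Convert a length in angstroms to nanometers."""
    return a / ANGSTROMS_PER_NM

assert angstrom_to_nm(20) == 2.0   # Intel "20A" -> 2 nm
assert angstrom_to_nm(18) == 1.8   # Intel "18A" -> 1.8 nm
```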
I know nothing about any of this, but I find it funny when "breakthrough innovations in 2024" is part of their roadmap.
That thumbnail of a Minecraft cobblestone generator
Just bought a 13700K and 4x16GB 6000MHz CL30 DDR5; ready to upgrade to an i9 14900K in December, I guess...
Seems like Intel is finally getting their shit together.
Dlss 3.5 vid?
Improving energy efficiency by adding another CPU inside the CPU.
Unless the mini CPU is ARM-based, I don't know how an extra processor is going to improve performance. Maybe it won't be used at all because it's so weak. I wonder if there's going to be some way of testing only the mini CPU.
Me about to head to sleep but i see philip so i click and watch till the end
At the end of the day, only TWO things matter: Power & Efficiency.
If you win the crown at the expense of HEAT...NO GOOD.
If you don't win the crown...oh well. That's NOT GOOD.
So U need to be FAST...and COOL!
Someone tell Intel that you can't just glue cpus together!
This is how most smartphone processors work
@@joshuafountain Google the reference ;)
Hey 2kliksphilip, can you tell your brother 3kliksphilip to make a new CS2 video where he inspects every weapons' rare animation? Nice, ty.
So what you're saying is we should wait for the next next gen? Got it.
love listening to 2kliksphilip hardware/tech videos in the morning. easily digestible (sometimes) information that isnt bloated. good job!
yep
Amd should probably do something like this, as the infinity fabric is already quite a bottleneck, they could combine tiling and vertical stacking for super dense processors
And also add 1 or 2 extra small CPU cores in the I/O die, so the main compute die can be completely powered off under light load (browsing, watching videos, simple office tasks), further lowering idle power consumption
Infinity cache is the reason why RDNA 2 is even mildly decent at ray tracing tho
I wonder if processors will ever get their exciting new technology, such as RTX GPU raytracing or frame generation
It depends what you mean. If you mean that iGPUs get the capability to run games with raytracing and frame generation, then maybe, but it will take some years to get the performance to where additional tensor cores really make sense on these small GPUs, even though they try to integrate them earlier. If you mean AI integrated into the CPU with ASICs and software making use of it, then they already do that, but as a user you won't notice, because it's more likely happening under the hood.
CPUs have gotten many gamechanging features over the years. You just don't really notice them, because you do not see them. For example ddr5, more cores, pcie 5, usb 4, 3d Vcache. The cores also constantly improve with new instruction sets, security and power efficiency. I know this is not really exciting if you are only playing games but when you need the performance and connectivity then these thing are gamechanging.
maybe on the igpu once they are more standard
@@killersberg1 I mean you are right, but these features are not new. 3d V-Cache also only allows more Cache memory. Sure these features increase the performance of the CPU, but they do not enable new features.
@@Tri-Technology What is RT? It is a "new" feature, but in actuality it's something that has been done for many years, just now in realtime. Before, we would just approximate lighting, which was good enough 90% of the time, and RT doesn't really look impressively amazing compared to good approximated lighting. I don't consider RT a gamechanging feature; the hype mostly stems from marketing imo. I think CPUs improved more relative to GPUs in the last five years, although they had fallen behind before due to Intel, so maybe that cancels out.
planning the upgrade from the 10th gen i5 to the 14th gen i7 when it releases. I've found myself playing CPU heavy games recently.
Will probably be better off getting a discounted 13th gen part
@@2kliksphilip 15th gen is going to be Arrow Lake on the 20A process node with backside power delivery, but we don't know details from an architecture standpoint. 14th gen is Raptor Lake Refresh on desktop and Meteor Lake on mobile, which means that on desktop, from 14th to 15th gen, there will be a 2 (3?) node jump.
Edit: Oh well, apparently they just said that Meteor Lake would come to desktop in 2024.
I hope they will support AVX512.
I can't wait to see how badly Windows' thread scheduler is going to be tripped up by more types of processors it has to schedule for. You know, more than it already does with the current E/P core divide.
(ok, to be fair, the Linux kernel had it even worse for quite a while, because Intel didn't have Thread Director ready for Linux yet)