M4 Deep Dive - Apple's First Armv9 Processor

  • Added 19 May 2024
  • Here is my look at the new Apple M4 processor. Apple's next gen chip has been redesigned to utilize Armv9 and extensions like Scalable Matrix Extension 2 (SME2). This will be the foundation for the next several years of processors from Cupertino.
    ---
    Let Me Explain T-shirt: teespring.com/gary-explains-l...
    Twitter: / garyexplains
    Instagram: / garyexplains
    #garyexplains
  • Science & Technology

Comments • 427

  • @john_hind
    @john_hind Před 26 dny +252

    Love that we live in a world where '28 billion transistors' is a throw-away line! Let's take a moment to admire that: three transistors for each person on the planet in an area smaller than a postage stamp. Mind, boggled!

    • @TheEVEInspiration
      @TheEVEInspiration Před 26 dny +46

      And then get 8GB of RAM.

    • @Belaziraf
      @Belaziraf Před 26 dny +5

      @@TheEVEInspiration Understand that the Earth is only a 22nm chip with 8 billion transistors and 8 continents (RAM, yes, I counted South and North America independently to fit the theme 😁).

    • @ichbintoll7128
      @ichbintoll7128 Před 26 dny +3

      @@Belaziraf North and South America are counted separately anyway; where did you get the 8th continent from?

    • @lennyvalentin6485
      @lennyvalentin6485 Před 26 dny +2

      @@ichbintoll7128 Seventh too, btw... :) Europe, Asia, N Murica, S America, Africa, Antarctica. Unless we've discovered Atlantis sometime recently and I missed all about it, that counts to only 6 continents... :) (Should be just 5 tbh because Europe being separate is really rather BS...)

    • @AbeDillon
      @AbeDillon Před 26 dny +2

      @@ichbintoll7128 Africa, Antarctica, North America, South America, Asia, Australia, Europe, and Britain. Obviously. Haven't you heard of Brexit?!

  • @BrockGunterSmith
    @BrockGunterSmith Před 26 dny +101

    Well done video. I’ve owned almost every model of iPad and have a household of M-series laptop/desktop machines. The M4 iPad that I picked up 5 days ago has wildly changed my workflow. Doing very high resolution wildlife photography (and some videography), speed is 100% the most important variable in whatever equipment I use, besides the display. I’m now spending more time working on my M4 iPad than I am on my Mac Studio or 16” MacBook Pro. Its performance running Lightroom, Photoshop, DaVinci Resolve, Affinity Photo 2 and Final Cut Pro is absolutely amazing. Now, I shoot, I hook up my Thunderbolt CFExpress card reader, dump hundreds of 50MP Sony .ARW images, and get down to processing my photos (and some 8K/30 and 4K/120 video) all without touching my desktop machines. My Mac Studio M1 Ultra 128GB is faster when doing huge batches of very intensive image processing, but that really just comes down to having more than double the CPU and GPU cores in addition to still very strong NPU performance even though that machine is several generations behind.
    Anyhow, it’s amazing. I care very little about benchmarks because the only thing I care about is the results I can accomplish and how effectively I can get that work done. There are still VERY strong use cases for each of my machines in different scenarios. I won’t be one of those people ignorantly stating that it can do everything. It can’t. Neither can my MacBook Pro or Mac Studio; they all work together, excelling in their own ways.
    Also, being a gamer, I love playing more and more games on my iPad including FINALLY being able to load up my retro games via Delta Emulator.
    If all you care about is AAA gaming, go buy a Windows PC or Steam Deck. If you troll YouTube, Twitch, and barely use any real capabilities of any of your machines, buy whatever you want and still pretend to be an expert online to make yourself feel good. What it really comes down to is: buy the technology that lets you be creative, productive, or have fun in the manner that is best suited to your budget and personal preferences! 😊 I use all major operating systems daily (yes that includes Linux), and all manner of hardware (yes, that even includes an Android device and Windows on ARM). I have stuff I love and hate personally…but at the end of the day it is ALL pretty darn amazing technology and it’s a good time to be alive if you enjoy this sort of thing. 👍

    • @mikldude9376
      @mikldude9376 Před 26 dny +6

      Yes, it's amazing mate. I remember back in the days before the internet (vaguely, yes I'm old-ish). I think we had a 1200 baud modem, or was it 300? Anyway, back then there were "bulletin boards" that you logged into over the phone lines, I think. Our first computer I bought for my little step-bro; it had 18 KB of memory, it was named an Aquarius with silicon rubber keys, and it came with a couple of little booklets allowing you to do some hi-tech programming :) , for example being able to put a colored dot of your choice in the centre of the screen, or make a running man with dots, etc., etc. You would have to type in the lines of code to get the desired effect. It also came with some gaming cartridges that plugged into the computer for some very, very basic games. A good bit of kit to see the basics of computing.
      A period of years later we were playing multiplayer Doom and Quake over the phone lines on another of many computers, too long back for me to recall. That experience was just utterly hooooorrible :) sometimes about one frame per second. Now people play heavy-duty multiplayer games on their phones with ray tracing at 60 FPS :).
      Amazing how times change.
      Cheers.

    • @BrockGunterSmith
      @BrockGunterSmith Před 26 dny +2

      @@mikldude9376 I'm right there with you! :-) Our first family computer was a Sinclair ZX Spectrum. Audio cassettes for storage. I THINK it was almost identical to the Aquarius. We appear to be cut from the same cloth. I love that I was born at a time where I could watch from the very beginnings this mind boggling technology curve. 👍

    • @gaiustacitus4242
      @gaiustacitus4242 Před 26 dny +2

      @@mikldude9376 The first single-board computer I programmed featured 8 toggle switches (one for each bit) and a pushbutton to submit code one byte of machine language at a time. One mistake and you had to reset the RAM and start over. Output was limited to display via LEDs.

    • @zh9732
      @zh9732 Před 26 dny +7

      Refreshing to see a pro user with actual needs. I was just being lectured by a guy w/ a 2008 macbook on how "it can do anything you need right now". I don't understand people that don't value their time enough to improve their systems

    • @TheWallReports
      @TheWallReports Před 26 dny

      @@mikldude9376 Me as well. My 1st computer was the Commodore 64 which I still have to this day; That was back in 1985. I remember my high school had just gotten a computer lab a year or so before; It was equipped w/Radio Shacks TRS-80s. Anyways it was 2 years later when I bought my 1st C=64 modem: A 300/1200 baud modem I used for dialing into BBS and online service providers like QuantumLink, CompuServe, The Source, GEnie. Those were definitely the days.

  • @BenjaminDirgo
    @BenjaminDirgo Před 26 dny +28

    Excellent video, great idea to not just repeat the press kit and come with some additional information 😊 I liked the idea to divide by the GHz to understand the change better

  • @octagonPerfectionist
    @octagonPerfectionist Před 26 dny +50

    i was wondering when they’d switch to armv9. i always thought it was a bit unfortunate that the early M chips were stuck on the older ISA with the less robust vector support. glad to see they’re still trying to compete and not just sitting back.

    • @--waffle-
      @--waffle- Před 26 dny +3

      Same. I've been waiting since the M1. I thought it would come with M2

    • @mrsrhardy
      @mrsrhardy Před 26 dny +2

      It's been a LONG time coming, as v8 is well and truly over a decade-old architecture.

    • @klauszinser
      @klauszinser Před 26 dny +2

      @@--waffle- I am quite happy with the M1 and was waiting for the M4 (maybe even a little longer). But the M4 is the 1st generation that started to be developed with the ChatGPT 3 knowledge (to be incorporated in the hardware).

    • @mochachaiguy
      @mochachaiguy Před 25 dny +6

      There is definitely more competition in the ARM space nowadays. They can’t afford to rest, and that’s a good thing for everyone 👍🏽

    • @sendi_sen
      @sendi_sen Před 23 dny +11

      @@klauszinser *cackles* No. Other ML-trained NNs have been involved in arranging transistors and blocks on ICs for well over a decade, but no LLMs are involved in the development of any IC. And no LLMs will ever be involved in the development of any IC, as they’re the wrong type of ML system for IC design, and that’s before you take into account that LLMs can’t be used for things that require precision, as they fundamentally can’t understand how anything works.

  • @Kw1161
    @Kw1161 Před 25 dny +5

    Thanks Gary for this information and clarification video on the new M4.
    Have a great day!

  • @therealmarv
    @therealmarv Před 25 dny +6

    This is why I subscribe to this channel and even turned on the bell. Gary goes deeper than the usual YouTube tech reviewer and I really want to know these details in computer tech!

  • @hishnash
    @hishnash Před 26 dny +9

    Dynamic Caching has nothing at all to do with system memory.
    What it is talking about is the on-die local memory within the GPU (think of it like L1/L2 cache, but for the GPU) and how this is divided into cache, threadgroup memory and registers.
    On (all other) GPUs, when you run a task the GPU will look at the maximum amount of local memory and registers that the task could need throughout its runtime (this includes optional branches that it might never take but could... you can't know before you run it, after all). It will then look at how many registers and how much local memory each core has and, from that, figure out how many copies of that shader it can run at once. However, most real-world shaders have very non-uniform memory/register usage, where 95% of the shader's runtime only uses a tiny fraction but the other 5% uses a huge amount. What this means in practice is that the GPU is still limited in how many copies it can run, yet 95% of the time there is spare capacity it could use but can't do anything about, since the registers and/or local memory are reserved for that high-demand point (which might not even happen in every instance of the shader, as it is likely behind some optional branch).
    Dynamic Caching makes 2 key changes to this:
    1) At runtime the GPU can dynamically repartition the local per-core (L1) memory to reallocate how much is used for registers, threadgroup memory or cache, rather than the GPU vendor needing to fix this ratio in advance as on other GPUs. This is a big deal, as different tasks have different demands on the ratio, so now they can make better use of it across more use cases.
    2) Because it can dynamically allocate more registers or local memory during runtime, the GPU can run more instances of a task at once, since if it hits that high memory/register-demanding point it can get some more registers (or threadgroup memory) by kicking something out of cache...
    These 2 things combined have a HUGE impact on performance for branching code pathways, where you can't predict before running it which paths the code will take, so on other GPUs the hardware must anticipate the worst-case scenario, leaving lots of the GPU under-used because of optional pathways that maybe none of the threads end up hitting. The biggest culprit here is RT-like operations, where you send a load of rays out to intersect objects in the scene and then need to do shading computation for each intersection, but there are lots of different types of objects out there, leading to very few threads being run at once just in case all the rays end up hitting the most costly (in threadgroup memory or register terms) material function.
    But the key takeaway is that it has nothing at all to do with your system memory; it is all about the tiny amount of local memory (registers, threadgroup memory and cache) within each GPU core.
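    As a rough, back-of-the-envelope illustration of the occupancy trade-off described above (all register counts are hypothetical, a sketch rather than Apple's actual allocator):

        #include <stdio.h>

        /* Hypothetical numbers: compare sizing for a shader's worst-case register
           demand versus sizing for its typical demand and borrowing from cache
           for the rare peak. */
        int main(void) {
            const int regs_per_core   = 4096; /* register file entries per GPU core   */
            const int worst_case_regs = 128;  /* peak registers a shader *could* need */
            const int typical_regs    = 32;   /* what 95% of the shader actually uses */

            int static_instances  = regs_per_core / worst_case_regs; /* 32  */
            int dynamic_instances = regs_per_core / typical_regs;    /* 128 */

            printf("worst-case (static) occupancy : %d instances\n", static_instances);
            printf("typical (dynamic) occupancy   : %d instances\n", dynamic_instances);
            return 0;
        }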

  • @brechtxt8096
    @brechtxt8096 Před 22 dny +1

    Thanks for this Gary!

  • @ikjadoon
    @ikjadoon Před 26 dny

    Thank you for your benchmarks and analysis. Out of curiosity, were all benchmarks run with the new Geekbench 6.3? There were some methodology changes in 6.1 and then SME support in 6.3.

  • @EyesOfByes
    @EyesOfByes Před 26 dny +8

    It is v9?! Hell yeah! So my new device isn't a total waste of money ;) Joking aside, I'm mostly curious about the power draw in pure watts.

  • @softwaremaniacpsm7075
    @softwaremaniacpsm7075 Před 24 dny +2

    This is an incredible synopsis of M4. I will spread the URL to your video.

  • @akarimsiddiqui7572
    @akarimsiddiqui7572 Před 26 dny +5

    How much do you think Qualcomm is leaving on the table in terms of performance for not being able to implement the Armv9 instruction set in their Elite processors? I am guessing that must've been part of the bump in Apple's numbers, along with GHz, the improved process node and all that you mentioned.

  • @MrKeedaX
    @MrKeedaX Před 26 dny +3

    you received a new sub by the 4 min mark of the video. thanks.

  • @danwaterloo3549
    @danwaterloo3549 Před 20 dny

    Nice Overview. Thanks!

  • @EverythingCameFromNothing

    Great video!! Very interesting information

  • @rickkarrer8370
    @rickkarrer8370 Před 24 dny

    Thanks for this.

  • @soraaoixxthebluesky
    @soraaoixxthebluesky Před 26 dny +2

    Love the fact that Apple is now focusing on native ports of AAA console games to their machines.

  • @acasualviewer5861
    @acasualviewer5861 Před 26 dny

    Good analysis

  • @mranalog241
    @mranalog241 Před 22 dny +2

    People keep getting distracted by the iPad when the big story here is the impact the M4 / Pro / Max / Ultra will have on Macs.

  • @smokeduv
    @smokeduv Před 26 dny +7

    It's impressive that, yes, on paper the Qualcomm Elite X should be faster than the M4, but there is not a single device with it, even half a year after it was announced, yet the M4 is already here just a few days after being announced, ON AN IPAD. If they take more time, there will be a desktop M4 with a lot more performance cores and the Elite X won't be as disruptive as they thought it was going to be, only in the PC area, not as competition for Apple silicon.

  • @noinghenah2764
    @noinghenah2764 Před 24 dny +1

    we need to wait for the power usage and temperature of the qualcomm chips

  • @skyak4493
    @skyak4493 Před 26 dny +1

    I have been wondering when Apple would move to v9. Now I need a refresher video on what v9 brings. My only recollection is that it should need less code optimization for high performance.

  • @thebeattrustee
    @thebeattrustee Před 24 dny

    Do any of these changes impact the security concerns that were identified in the M1-M3 chips?

  • @PlanetFrosty
    @PlanetFrosty Před 26 dny

    Interesting take.

  • @AmericaWhatsup
    @AmericaWhatsup Před 23 dny

    The upgrade to ARM 9.4 instructions is encouraging.

  • @simon4512
    @simon4512 Před 25 dny +2

    Great video, super excited for when we know more about the Snapdragon chips! Hoping for great Linux support for them

    • @GaryExplains
      @GaryExplains  Před 25 dny +2

      It looks like Linux support will be ok as Qualcomm has written a blog post about Linux on the X series. 👍

  • @TheEVEInspiration
    @TheEVEInspiration Před 26 dny +1

    Higher clocks and adapting the micro-architecture for that is nice, but I read that the M2 and M3 throttled easily.
    Got to see how the M4 does under load; if it throttles too, it's not really much progress for many uses.

    • @chidorirasenganz
      @chidorirasenganz Před 26 dny +2

      M2 and M3 didn’t throttle any more than M1. Those chips only throttled when run at full load for longer than 10 minutes in a passively cooled chassis.

  • @AlanTheBeast100
    @AlanTheBeast100 Před 23 dny

    The OS runs on the efficiency cores almost all the time. The performance cores are used when needed. So the difference of 3 performance to 4 performance cores will only matter to people who are running it full tilt a lot.

  • @abidibrahimsafwan3974
    @abidibrahimsafwan3974 Před 26 dny +3

    Is M4 the first Armv9 SoC from Apple?
    I thought Apple switched to Armv9 with the A17 Pro and M3 series.

  • @l2etranger
    @l2etranger Před 26 dny

    Great video, once again with the best technical nuances that would escape drooling buyers with cash in hand.
    Indeed, choosing its mobile platform to introduce the latest M chip is an indicator of its marketing strategy.
    Windows is closing the gap with its creator and artist suites, we're back to hardware vs software competition.

  • @El.Duder-ino
    @El.Duder-ino Před 14 dny

    8:34 and further shows how increased clock speed rocks all the boats in the port (not just the CPU cores)... Apple, the same as others who design their own processors, already understands how hard it is to truly improve processor speed just from the microarchitecture point of view and how much easier it is to just increase clocks. Processor architecture improvements are hard and expensive, and sooner or later limits are reached, resulting in single-digit % increases. That's why memory systems/subsystems and interfaces with interconnects will play another significant role in the future besides everything else. Memory, in comparison to compute, has been left way too far behind for decades and now it has to catch up...
    Excellent quick analysis vid, thx Gary!

  • @johnkost2514
    @johnkost2514 Před 26 dny +1

    I wonder if the M4 has fixed the GoFetch vulnerability?

  • @andre-le-bone-aparte
    @andre-le-bone-aparte Před 25 dny +2

    @07.50 : For those wondering... here is what Geekbench reports for iPad16,6 * ARM 4408 MHz (10 cores) - Single-Core Score: 4004 - Multi-Core Score: 14,943

    • @sprockkets
      @sprockkets Před 24 dny

      That's weird, ars only reported 3600 for single thread....

    • @GaryExplains
      @GaryExplains  Před 24 dny +3

      I used a stacked graph, which seems to be confusing some people. The data labels on the graph at @10:18 are the actual numbers. I won't use stacked graphs in the future.

    • @Winnetou17
      @Winnetou17 Před 23 dny

      I was searching for this. I thought briefly that maybe they are stacked, but it doesn't have a purpose or usefulness there. Glad that there's no errors in the benchmarks.
      Btw @GaryExplains for that kind of graph, showing the numbers would also be nice if you would, please. Maybe it's not important, but I for one can't help myself and spend some time trying to see / compute more precisely the numbers.

  • @MorbidGod391
    @MorbidGod391 Před 24 dny

    At some point can you do a deep dive into the new Snapdragon vs M4?

  • @Garythefireman66
    @Garythefireman66 Před 23 dny

    Apple in an ARMs race (pun intended) with itself 😂 Thanks professor!

  • @edahmed7
    @edahmed7 Před 26 dny +12

    I didn’t know it was a v9 arch chip. Nice thanks Gary… awesome stuff

    • @klauszinser
      @klauszinser Před 26 dny

      v9 and the comparison with Snapdragon (did he say if Qualcomm is on v8 or v9?) were most important. Going to 3nm I knew about, but that's a big jump.
      This video explained a lot. I think Apple has to make a big jump for the notebooks. Starting with the Pro processors?
      For around a year there has already been a Windows Arm server platform available (Ampere Arm Altra, Altra Max, One; e.g. Hetzner, but Hetzner doesn't allow you to install Windows; all on Arm v8.2/8.6). The CEO of Ampere had left Intel years ago.

    • @Freshbott2
      @Freshbott2 Před 23 dny

      @@klauszinser Is that the same CEO who ran Intel’s strategy into the ground?

    • @klauszinser
      @klauszinser Před 23 dny

      @@Freshbott2 Renée James ?
      'James joined Intel in 1988 as product manager for the 386 family of motherboards and systems. In the early 90s, she was responsible for product marketing of various software programs..James was appointed president of Intel Corporation on May 16, 2013.
      In February 2016, James left Intel'

    • @Freshbott2
      @Freshbott2 Před 23 dny

      @@klauszinser I had a brain fart. I was thinking Intel CEO -> Ampere CEO.

    • @klauszinser
      @klauszinser Před 23 dny

      @@Freshbott2 I dont understand. She was president at Intel. Anything bad with her at Intel?

  • @vernearase3044
    @vernearase3044 Před 26 dny +5

    As I always say, for consumers the single core performance is the best metric for a computer's snappiness - most consumers who have a ton of cores just spend their time with most cores idle.
    If you're a creative or graphics designer or do a lot of transcoding, you probably use software which is multi-threaded in which case you can make use of that multi-core performance.
    Really though for most folks, something like Speedometer 3.0 is probably the best benchmark you can use to compare computer performance.

    • @lekejoshua4402
      @lekejoshua4402 Před 26 dny +1

      No, not really; it's good for short-term speed. If your work requires constant bombardment of the SoC, more cores will shine, because giant cores throttle.

    • @lekejoshua4402
      @lekejoshua4402 Před 26 dny

      This is from a gamers perspective.

    • @vernearase3044
      @vernearase3044 Před 26 dny

      @@lekejoshua4402 So give me a sample workflow.
      My main driver is a 2020 iMac 5K with a core-i9 (Intel 10910 [10c 20t]), and I can tell you unless I'm transcoding pretty much all cores are pretty much idling.
      If I'm doing something CPU intensive, one or two cores may be running hot but the rest are idling.
      Unless you're using a multithreaded app, multiple cores are simply modern CPU vendors' way of getting compute specs up when they can't get single core speeds running fast enough.
      Multiple cores are great in a server environment where discrete client processes can run in parallel, but not much use for consumers.

    • @vernearase3044
      @vernearase3044 Před 26 dny +1

      @@lekejoshua4402 Gamers don't generally really use a ton of CPU power - they really hit the GPU.
      That's why, in general, a Core i5 with a good GPU usually runs better than a Core i9 with a mediocre GPU.

    • @vernearase3044
      @vernearase3044 Před 26 dny

      @@lekejoshua4402 Really, if single core speeds weren't constrained by engineering and physics, the fastest computer would be a high speed CPU with a good dispatcher - that way you avoid semaphore wait time on local address space or global system resources.
      That's the way computers were designed in the high speed mainframe days before the advent of multiprocessing.
      Multiprocessing in personal computers is the way CPU vendors can put high compute numbers on spec sheets - if you can't engineer a CPU with high enough compute you put parallel processors on the chip so that resource intensive programs can be programmed in a multithreaded manner to get higher total aggregate compute speed, and the big number on the spec sheet will convince consumers that your chip is better than someone else's because the number is higher.
      Multiprocessing is great for a server running discrete processes for multiple users, but for most consumers they just end up powering a bunch of idle cores.
      My main driver is a 2020 iMac 5K with a core-i9 (Intel 10910 with 10 cores and 20 threads). If I'm not transcoding or video editing, I may light up one or two cores but the rest are pretty much always idling.
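      The diminishing return being described here is essentially Amdahl's law; a quick sketch with a made-up parallel fraction (p = 0.5) rather than any measured workload:

          #include <stdio.h>

          /* Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction
             of the work that parallelises and n is the core count.
             p = 0.5 is an illustrative assumption, not a measurement. */
          int main(void) {
              const double p = 0.5;
              const int cores[] = {1, 2, 4, 10, 20};
              for (int i = 0; i < 5; i++) {
                  double speedup = 1.0 / ((1.0 - p) + p / cores[i]);
                  printf("%2d cores -> %.2fx speedup\n", cores[i], speedup);
              }
              return 0;
          }

      With half the workload serial, even 20 cores give under a 2x speedup, which is the point being made about mostly idle cores on consumer machines.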

  • @caseyleedom6771
    @caseyleedom6771 Před 26 dny +2

    I thought that it was widely discussed that the Snapdragon X Elite would consume "up to 80W"? Do we know yet what the maximum TDP of the M4 is?

    • @DeviRuto
      @DeviRuto Před 26 dny +1

      that's only one of the versions, i think the ones going to ultrabooks are the 25W version

    • @caseyleedom6771
      @caseyleedom6771 Před 26 dny +2

      @@DeviRuto I was thinking specifically of the 12-Core Snapdragon X Elite ... but maybe they have two different base clock versions? I suppose that it's all moot till we get accurate TDP figures to go with our Geekbench results. Like others, I liked Gary's charts that normalized performance by the base clock rate, but for a laptop, normalizing performance by Power is probably even more interesting ...

    • @chidorirasenganz
      @chidorirasenganz Před 26 dny +2

      M4 will likely be 20-22 watts just like M2/M3

    • @user-yj1ov9cz9g
      @user-yj1ov9cz9g Před 26 dny

      @@caseyleedom6771 They have two base TDP versions and they can be configured by the laptop vendor.
      I think there were two Geekbench scores (or for some other benchmark), explicitly for the top model with 23W and 80W TDP, and the majority of benchmarks were of the 23W variant based on the notes.

    • @Frozoken
      @Frozoken Před 25 dny +1

      Someone tested it, 24w roughly

  • @perge_music
    @perge_music Před 26 dny

    Does it still only have USB 2 speed on the Lightning/USB-C socket, or is that now Thunderbolt 4 rated?

    • @boshi9
      @boshi9 Před 26 dny +8

      Even last year's iPhone has USB 3.2 Gen 2 speeds of up to 10 Gb/s. The new iPad Pro is confirmed to have USB 4 / Thunderbolt, up to 40 Gb/s.

    • @chidorirasenganz
      @chidorirasenganz Před 26 dny +7

      Yeah iPad Pros have had thunderbolt since m1

  • @Freshbott2
    @Freshbott2 Před 24 dny

    Hi Gary/viewers, it’s usually been wisdom that CPU cores come in twos. In fact I’ve found shoddy performance when using odd core counts in Parallels. What is it about iPhones/iPads that makes it normal to have odd numbers vs. say Windows?

    • @GaryExplains
      @GaryExplains  Před 24 dny +2

      There is no technical reason for cores to come in an even number (or a power of 2).

  • @gerald1964
    @gerald1964 Před 26 dny +1

    P-core CPU area in the M3 SOC increased substantially over M2 implying a significant redesign. Now M4 takes it to another level with V9 ISA for MB Pro targeted SOCs only one year after M3 assuming M4 MB Pros are released in the October / November time frame. Apple is really executing in regard to the SOC design. So much for future proofing talk with regard to M3...

    • @Frozoken
      @Frozoken Před 25 dny

      It's not though; someone tested it and it's like +3% IPC vs the M3 ☠️. They're pulling an Intel and just upping clock speeds, and the absolute max power draw is like 20% higher as a result, despite the new node.

    • @ThePowerLover
      @ThePowerLover Před 17 dny

      @@Frozoken It's not +3% IPC vs the M3, it's more; on x86-64, GB already supported analogous instructions (like SME2), and you can see the IPC uplift on SPEC2017 too.

  • @thecloudtherapist
    @thecloudtherapist Před 23 dny

    Are the performance and efficiency cores what ARM used to call big.LITTLE architecture or something completely different?

    • @GaryExplains
      @GaryExplains  Před 23 dny +1

      Yes, it is what Arm used to call big.LITTLE.

  • @mitchec100
    @mitchec100 Před 26 dny +4

    I have seen many reviews and yours is the only one mentioning Armv9.

    • @GaryExplains
      @GaryExplains  Před 26 dny +11

      That is why you should only watch my stuff and ignore everyone else! 😜🫣

    • @mrsrhardy
      @mrsrhardy Před 26 dny +1

      @@GaryExplains Q: Gary, how did you determine it was v9.4-A? I'm not doubting you, but hells bells, everyone else is v9.3-A/B with 9.4 only now being laid out, and I didn't know it was actually an Arm standard ready for production?!? I'd love to see a follow-up v9 video explaining how the heck you determined this, and all about the state of v9 now, its future and relationship to OpenV5 stuff going forward (or a series), as it's the FUTURE now that Intel is slowly dying in the power-draw wars!

    • @GaryExplains
      @GaryExplains  Před 26 dny +2

      Arm v9.4-A was announced in 2022.

  • @EugWanker
    @EugWanker Před 24 dny

    @7:24 The Geekbench 6 score graph is very misleading. M4 10-core MT is less than 15000 (unless you use liquid nitrogen to prevent throttling). I think what you've done in the graph is added the MT score on top of the ST score, which pushed the blue MT bar to around 18000.

    • @GaryExplains
      @GaryExplains  Před 24 dny +1

      Yes, it is a stacked graph. I won't use them in future videos.

  • @bill_the_duck
    @bill_the_duck Před 24 dny

    Good analysis. Interesting that Snapdragon has pulled ahead, but it doesn't look like that'll hold for too long when the Pro and Max versions come out. The question is what Qualcomm's response to that will be.
    Either way, I'm really glad to see the PC market finally making a serious push for ARM.

    • @manulovesjesus
      @manulovesjesus Před 23 dny

      Qualcomm has never pulled ahead in terms of max performance. The M3 Max is way faster. Now, concerning efficiency, we will see once the X Elite really comes out in products, but the numbers lean more towards Apple's SoCs being more efficient.

  • @ChiquitaSpeaks
    @ChiquitaSpeaks Před 2 dny

    Do they update the x86 profiles like they do the ARM profiles (v8 to v9)?

  • @Tommy31416
    @Tommy31416 Před 25 dny

    Any clues on what an M4 ultra or extreme might be like, based on the architecture of the standard M4? Hoping we soon see a 96 core GPU with 1028GB unified memory option 🤞 maybe a desktop only variety of SoC or something

    • @GaryExplains
      @GaryExplains  Před 25 dny

      Guess who didn't watch the video until the end! 😜

    • @Tommy31416
      @Tommy31416 Před 25 dny

      You got my hopes up Gary - I had watched to the end. So I rewatched and indeed, you only spoke about CPU cores, which don’t really interest me that much compared to the GPU capabilities. Great video though and thanks for replying 👍

    • @GaryExplains
      @GaryExplains  Před 25 dny

      Ah, sorry my bad, I didn't notice that you specifically wanted GPU info.

  • @superangrybrit
    @superangrybrit Před 26 dny +1

    I was waiting for Apple to get to newer designs, and I don't blame Apple for it, as Arm is spitting them out so fast. It is nice to see newer stuff outside of the realm of smartphones. I was disappointed to learn that the M3 was still not upgraded to v9. Good video!
    Hopefully, we'll get an inexpensive Mac Mini with M4. 😉

    • @robblincoln2152
      @robblincoln2152 Před 26 dny +3

      Most of what v9 offered was already implemented in other ways by Apple. SME2 had been the most notable exception, but keep in mind, these chip designs take years to implement. The fact that Apple comes out with something new each year belies the years of design work that went into each one.

    • @chidorirasenganz
      @chidorirasenganz Před 26 dny +3

      My prediction is we’ll see m4 in the desktop Macs at WWDC

  • @EyesOfByes
    @EyesOfByes Před 26 dny +2

    The 256GB model is limited to 1080p in ProRes video recording, if someone would like to film with this device.

  • @EyesOfByes
    @EyesOfByes Před 26 dny +4

    Apple should start with 12GB RAM minimum, since they already have 6GB RAM chips in the MacBooks.

    • @TamasKiss-yk4st
      @TamasKiss-yk4st Před 24 dny

      You can buy it with more RAM... why should they ruin everyone's experience just because you are too lazy to click one extra option? Or aren't you aware that RAM doesn't have an off state: if you cut off the power supply it forgets everything. So even in idle/standby that RAM continuously drains the battery, and guess what, double the amount of RAM also drains double the amount of power. So everyone can choose how much they need; it's your decision if you prefer 3-5% longer usage or 2x more RAM.

  • @tompurvis1261
    @tompurvis1261 Před 24 dny

    So it will show my YouTube videos faster? Do I need to worry about it playing at 2x speed?

  • @linuxgeex
    @linuxgeex Před 23 dny

    Deeper doesn't necessarily mean the pipeline. It can also mean the reorder buffer size. The pipeline length is the minimum clocks from fetch to retire. All modern uarch cores are doing their best to keep this number small, ie in the 9-13 clocks range. But with the ROB they can execute instructions out of order, holding some back for over 200 clocks to hide consecutive load hazards. Efficiency cores hide small caches and shorter reorder buffers by clocking lower. It's easier to feed the beast when the beast chews more slowly. They're almost certainly not increasing the pipeline length of the efficiency cores to boost clocks. ie I would gladly take 100:1 bets against that.

  • @truecuckoo
    @truecuckoo Před 23 dny

    Now that the efficiency cores are so great, I think Apple should make an efficiency core only processor as well, with focus on power efficiency and maximum battery life. They should experiment with how low they can go without sacrificing the user experience. I think Apple could make a phone with at least double the battery life, maybe more.

  • @alecsei393ify
    @alecsei393ify Před 26 dny

    Excellent video!! One question regarding CPU arch: does Apple or GCC provide a compiler for Armv9.4-A to developer communities?

    • @GaryExplains
      @GaryExplains  Před 26 dny +4

      LLVM supports the Armv8.9-A and Armv9.4-A extensions. See community.arm.com/arm-community-blogs/b/tools-software-ides-blog/posts/whats-new-in-llvm-16
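      As a minimal sketch of what that looks like in practice (assuming clang 16 or newer; the exact -march string and the __ARM_FEATURE_SME2 macro should be checked against your toolchain's documentation):

          /* Build, for example: clang -O2 -march=armv9.4-a+sme2 sme2_check.c */
          #include <stdio.h>

          int main(void) {
          #if defined(__ARM_FEATURE_SME2)
              puts("Compiled with SME2 enabled for this target.");
          #else
              puts("SME2 not enabled for this target.");
          #endif
              return 0;
          }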

  • @johnweiner
    @johnweiner Před 26 dny

    What is the difference between a "performance" core and an "efficiency" core? Counting them seems to figure a lot in the discussion.

    • @lekejoshua4402
      @lekejoshua4402 Před 26 dny +1

      In mcu terms, Performance cores are like hulks and efficiency cores are like thors.

    • @HolarMusic
      @HolarMusic Před 22 dny

      essentially it all comes down to the simple fact that the faster you run a core, the less energy efficient it becomes
      there are also usually other optimizations, which are different for each design
      when scheduling tasks, the OS can assign the less urgent tasks to the efficiency cores (which will do the same amount of computation at a lower power cost) and the more time-critical ones (like low-latency audio, UI responding to user input, etc) to the performance cores
      this not only helps with saving energy (which also matters for thermal limitations i.e. performance), but can also make the processors cheaper, as the E cores can have much higher yields and also be simpler
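      On Apple platforms the hint the OS uses for this is a thread's quality-of-service class; a minimal sketch using the pthread QoS API (assuming macOS/iOS; the scheduler, not the programmer, ultimately decides which core type runs each thread):

          #include <pthread.h>
          #include <pthread/qos.h>
          #include <stdio.h>

          /* Background QoS: the scheduler is free to place this on an efficiency core. */
          static void *background_work(void *arg) {
              (void)arg;
              pthread_set_qos_class_self_np(QOS_CLASS_BACKGROUND, 0);
              /* ... long-running, non-urgent work ... */
              return NULL;
          }

          /* User-interactive QoS: biased towards a performance core. */
          static void *urgent_work(void *arg) {
              (void)arg;
              pthread_set_qos_class_self_np(QOS_CLASS_USER_INTERACTIVE, 0);
              /* ... latency-sensitive work ... */
              return NULL;
          }

          int main(void) {
              pthread_t bg, fg;
              pthread_create(&bg, NULL, background_work, NULL);
              pthread_create(&fg, NULL, urgent_work, NULL);
              pthread_join(bg, NULL);
              pthread_join(fg, NULL);
              puts("done");
              return 0;
          }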

    • @johnweiner
      @johnweiner Před 22 dny

      @@HolarMusic Thank you for that description...it sounds somewhat analogous to the graphics tasks being assigned to the GPU so as not to overburden and slow down execution in the CPU.

  • @boazjoe1
    @boazjoe1 Před 26 dny +17

    Unsure why the iPad needs all this horsepower, but it is interesting. Love the summary. Thank you.

    • @mikldude9376
      @mikldude9376 Před 26 dny +10

      Rule No. 1: you can never have too much power. As always, the more power you have, the more ways you can find to use it all up :) and on the plus side, you are future-proofed for years.

    • @Joniyah444
      @Joniyah444 Před 26 dny +5

      Games, try games 😎

    • @boazjoe1
      @boazjoe1 Před 26 dny +3

      @@Joniyah444🤣

    • @brulsmurf
      @brulsmurf Před 26 dny +3

      I think purely for AI. They kinda got caught with their pants down now that everyone and their mom is balls deep into LLMs. If they get some AI applications, they need some unknown amount of compute to run them.

    • @lekejoshua4402
      @lekejoshua4402 Před 26 dny +2

      It is needed lol. The mobile form factor still can't run mobile games at maximum resolution and fps constantly; it's always 50-70%. Now we are at like 90% with the M4 (for mobile games).

  • @gsestream
    @gsestream Před 26 dny

    A CPU is only a single-threaded GPU, i.e. one slice of a GPU core. Why try to accelerate poor sequential code? Instead, make the compiler produce parallelizable and nicely fast linear code. Yep, make the compiler do away completely with the need for pipelining: pre-sorted code. Yep, more like GPU-style code instead of the normal CPU-style code.

  • @DenisOvod
    @DenisOvod Před 26 dny

    It seems like the t-shirt link is broken: "Uh oh... We couldn't find the page you're looking for. ..."

  • @jakobw135
    @jakobw135 Před 10 dny

    Is Apple the only manufacturer that uses Arm in their CPUs, or are there others that produce desktops using the same architecture?

  • @Brk_Scheffer
    @Brk_Scheffer Před 22 dny

    Bro, where are the Speed Test G videos? Why did you stop?

  • @jakobw135
    @jakobw135 Před 10 dny

    How does the new Apple CPU compare to the equivalent in Intel?

    • @GaryExplains
      @GaryExplains  Před 10 dny

      The difficult part is defining what is an "equivalent" Intel chip to compare with.

  • @retroheadstuff8554
    @retroheadstuff8554 Před 26 dny +6

    M4 Ultra is going to be a killer! 💀👾🤯

    • @GaryExplains
      @GaryExplains  Před 26 dny +2

      It will be interesting to see if there is an "Ultra" version. From what I remember there are some bandwidth issues with the "Ultra" versions, in other words there is a limit to how many of these processors you can just keep gluing together.

    • @boshi9
      @boshi9 Před 26 dny +1

      ​@@GaryExplains Perhaps core-to-core latency? Ultra doubles the memory bandwidth of the corresponding Max chip.

    • @Frozoken
      @Frozoken Před 25 dny

      ​@@GaryExplainsReally goes to show how great that unified memory is in reality.
      Turns out the low-power variant of RAM is not in fact faster even if it has a bigger bandwidth number lmao. LPDDR5 has terrible latency, and it's so bad that its bandwidth will also only cap out at like two thirds of its theoretical max, while regular DDR hits like 90%+.
      LPDDR makes zero sense on something with the Ultra's power budget, but Apple can't change it seeing they've hyped it up so much, and it's the only RAM that's unupgradeable lmao.

    • @TamasKiss-yk4st
      @TamasKiss-yk4st Před 24 dny

      @Frozoken Sure, the latency is not optimal, but the transfer speed difference compensates for that (just calc the time required to transfer 1GB on DDR5 with a 40GB/s limit versus transferring the same 1GB on a 400GB/s M3 Max, which is still "just" a laptop chip... you don't just need to address the location, you must deliver the data too...).

    • @Frozoken
      @Frozoken Před 24 dny

      @@TamasKiss-yk4st The M3 Max has 200GB/s theoretical, normal DDR5 has 100-120GB/s. Trust me, when you're a CPU operating literally 1000s of times faster than the RAM, with tons of cache tiers to avoid using the RAM as much as possible, you get about 0.1% of accesses coming straight from the RAM. What do you think the CPU cares about when that one tiny bit of data in RAM is needed and it now has to waste thousands of CPU cycles doing nothing, waiting for it to respond? It's latency.
      This massive difference in latency also causes actual transfer rates, as I said, to be lower because of the latency. The RAM takes too long to respond and doesn't start transferring data quickly enough, so it loses speed to a slow start.
      That being said, Apple's on-package implementation still has much less downside than the normal soldered LPDDR which is on everything else; the point is, though, it's still moderately slower and definitely not faster like they claim. That being said, it's way more efficient, but the M3 Ultra is a desktop so that really doesn't matter, as normal DRAM doesn't use much power at all compared to a desktop chip. It uses like 1W; the performance trade-off makes zero sense in the Ultra.
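      A quick way to see both sides of this argument in one place, using illustrative (not measured) latency and bandwidth figures:

          #include <stdio.h>

          /* Rough model: time = latency + bytes / (peak_bandwidth * efficiency).
             All figures below are illustrative placeholders, not measurements. */
          static double xfer_us(double latency_ns, double bytes, double gbps, double eff) {
              double bw = gbps * 1e9 * eff;                    /* usable bytes/second */
              return latency_ns / 1000.0 + (bytes / bw) * 1e6; /* microseconds */
          }

          int main(void) {
              const double small = 4.0 * 1024.0;               /* a cache-miss-sized read */
              const double big   = 1024.0 * 1024.0 * 1024.0;   /* a 1 GiB streaming copy  */
              /* "DDR5-ish": lower latency, lower peak bandwidth, higher efficiency.
                 "LPDDR5-ish": higher latency, higher peak bandwidth, lower efficiency. */
              printf("4 KiB : DDR5-ish %.3f us vs LPDDR5-ish %.3f us\n",
                     xfer_us(80, small, 100, 0.90), xfer_us(120, small, 200, 0.66));
              printf("1 GiB : DDR5-ish %.0f us vs LPDDR5-ish %.0f us\n",
                     xfer_us(80, big, 100, 0.90), xfer_us(120, big, 200, 0.66));
              return 0;
          }

      With numbers like these, the low-latency memory wins the small, latency-bound access while the high-bandwidth memory wins the large streaming copy, which is roughly the disagreement in this thread.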

  • @LouisDuran
    @LouisDuran Před 18 dny

    This passively cooled tablet chip nearly matches the performance of my Intel Core i7-14700 desktop CPU that draws up to 235W at peak power. My best Geekbench6 multi thread score is 19,300

  • @petergibson2318
    @petergibson2318 Před 26 dny

    What’s the difference between an “efficiency core” and a “performance core”?

    • @TooGoodForYoutube
      @TooGoodForYoutube Před 26 dny +1

      One has more performance and can compute bigger tasks; the other has more efficiency and can compute tasks which don't have a high priority or are running in the background.

    • @mrsrhardy
      @mrsrhardy Před 26 dny

      When compiling threaded apps, priority is given to some threads, while others get a lower priority and run in the background at a great power saving. On battery this makes a huge difference! You need an engine to drive the wheels on a car but not the instrument panel, where the same energy would go mostly wasted. Many smaller, lower-priority threads can complete in the background at a massive power saving. ML cores are only on the main cores for this reason, and AI has separate Neural Engine cores; this makes sure that the engine is doing the tough stuff and offloading everything else (codecs, too, have hardware cores).

    • @GaryExplains
      @GaryExplains  Před 26 dny +2

      The ML accelerators are in both core types, as I explain in the video.

  • @DK-ox7ze
    @DK-ox7ze Před 26 dny +3

    If the ML accelerator does Matrix multiplication, then what does the NPU do?

    • @GaryExplains
      @GaryExplains  Před 26 dny +13

      Now that is a really pertinent question, one I am thinking about for an upcoming video, but I am not 100% sure about how to present it. Watch this space... I guess!

    • @DK-ox7ze
      @DK-ox7ze Před 26 dny +4

      @@GaryExplains It will be great if you can also cover how ML tasks are split between the NPU, GPU, and the ML accelerator, and their relative performance? I believe it might depend on the framework but maybe you can cover popular ones like pytorch and Tensorflow.

    • @spinthma
      @spinthma Před 26 dny +1

      Matrix multiplication is used to calculate the weights of a model, to train it; NPUs are for inference, meaning running the pre-trained models for use in apps.

    • @GaryExplains
      @GaryExplains  Před 25 dny +1

      @spinthma What mathematical operation do you think an NPU does for inference?

    • @spinthma
      @spinthma Před 25 dny

      You are right Gary, the NPU does the processing on the input data to make it processable by the trained model. Cheers!
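      For context, the operation all of these units (AMX-style matrix blocks, SME2, the NPU) are built to accelerate is the same multiply-accumulate pattern; a naive scalar reference of that matrix multiply, for illustration only:

          #include <stddef.h>

          /* Naive reference: C (m x n) = A (m x k) * B (k x n).
             Matrix engines and NPUs exist to do this multiply-accumulate work
             in far fewer, much wider hardware operations. */
          void matmul(const float *a, const float *b, float *c,
                      size_t m, size_t k, size_t n) {
              for (size_t i = 0; i < m; i++) {
                  for (size_t j = 0; j < n; j++) {
                      float acc = 0.0f;
                      for (size_t p = 0; p < k; p++)
                          acc += a[i * k + p] * b[p * n + j];
                      c[i * n + j] = acc;
                  }
              }
          }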

  • @leviandhiro3596
    @leviandhiro3596 Před 25 dny

    But does it have a calculator app and can you watch two videos at the same time?

    • @GaryExplains
      @GaryExplains  Před 24 dny

      I can't watch two videos at the same time on any platforms, my brain can't handle that. Can yours?

    • @TheStopwatchGod
      @TheStopwatchGod Před 10 dny

      No and No

  • @davout5775
    @davout5775 Před 26 dny +12

    Finally some changes from the original A14 architecture. It really looks to be a massive upgrade. This is most likely the most powerful core in the world for a consumer device. It would be hilarious if the iPhone had more powerful single-core performance than the i9-14900K. The improvement is so big that the M4 is now as powerful as the M2 Max. People are still sticking to the M1 Max MacBooks and now Apple offers greater performance in the lowest end of desktop chips. This would certainly open a lot of doors, especially for gaming, as that M4 GPU is extremely powerful. We are now in an era where every mobile game can run smoothly between 100-120fps. The M2 was not capable of that, mostly because of the 4x resolution difference compared to smartphones.

    • @User9681e
      @User9681e Před 26 dny +1

      They have to use the npu for gaming already

    • @tragicevans4157
      @tragicevans4157 Před 22 dny

      The i9-14900K consumes 300 watts of power. So if the M4 is faster than the i9, it's a godlike CPU.

    • @davout5775
      @davout5775 Před 22 dny

      @@tragicevans4157 Well, it is not faster overall. The i9-14900K is an extremely big chip. It has 24 cores, 8 of which are very big with hyper-threading. The i9 is very similar to the M3 Max when it comes to multi-threaded performance. The M4 is amazing in that it is a low-power chip that performs similarly to the M2 Max while still having the efficiency of chips like the M1, M2 and M3. It also has the most powerful single core of any consumer device in the world right now.

    • @User9681e
      @User9681e Před 22 dny

      @@tragicevans4157 It's only single-core, inside a tablet, as the M4 doesn't have many P-cores and doesn't consume that much energy, but it's still fairly close in multi-core considering it's consuming very little energy, so yeah, definitely godlike, making tablets the supercomputers of a few years back.

  • @timr.2257
    @timr.2257 Před 26 dny +6

    The A17 Pro still wasn't v9 for a late 2023 chip? 🤔

    • @davout5775
      @davout5775 Před 26 dny +6

      Didn't matter at all

    • @mikelay5360
      @mikelay5360 Před 26 dny

      I don't see a problem with that

    • @thetruthisouttheregofindit128
      @thetruthisouttheregofindit128 Před 26 dny

      Yes it still used the older instruction set. I believe it was v8.6

    • @josephjames2174
      @josephjames2174 Před 26 dny +1

      Does that really matter though it’s still the fastest chip in a smartphone at least in terms of raw cpu power

    • @thetruthisouttheregofindit128
      @thetruthisouttheregofindit128 Před 26 dny

      Can't really say for sure, but the A17 Pro was pretty disappointing despite still having the fastest CPU. The A17 we see was most likely supposed to be the A16 a year ago, but supply issues with TSMC's N3B and ray tracing issues with the GPU delayed it to this year.
      It feels like a stopgap, even more than the current A16 (which is basically an A15+). It has immense CPU power but isn't as efficient compared to older-gen A chipsets, and the GPU is lacking compared to the 8 Gen 3.

  • @eddiegardner8232
    @eddiegardner8232 Před 23 dny

    For most of us, the M4 is like a 2000mph car, but we live in a 60mph world. Hard to tell the M1 from the M4 in everyday use. The YouTube content creators won't be happy until they can render a 30-minute 8K video file in 10 seconds.

    • @TamasKiss-yk4st
      @TamasKiss-yk4st Před 22 dny +2

      We live in a world where console-level AAA games are coming to tablets/laptops (some of them even to the iPhone, and not the already-released trimmed mobile versions; it's totally the same version as on the PS5...), so the performance isn't just usable by some content creator (and it's more than just a video editing tool; it's useful for illustrators, music makers, 3D animators, etc...).

  • @utubekullanicisi
    @utubekullanicisi Před 26 dny +4

    My predictions are that 6 efficiency cores is where they want to be in terms of the E-core count now, and as the M3 Pro (and only the M3 Pro variant in the M3 chip family) already managed to sneak in 2 extra E-cores, the core count of the non-binned M4 Pro will stay unchanged at 6+6. The M4 Max might make the jump from 12+4 to 12+6, which will make the M4 Pro and Max certainly not as big of a jump as the baseline M4 in terms of the multithreaded performance increase, but it will still be a decent upgrade in the CPU with the higher IPC P and E-cores. As for the A18 Pro, I think that will continue to have 2 P-cores but will make the jump to 6 E-cores.
    M4 Pro: 12-core, 6+6,
    M4 Max: 18-core, 12+6
    M4 Ultra: 36-core, 24+12
    A18 Pro: 8-core, 2+6

    • @GlobalWave1
      @GlobalWave1 Před 25 dny

      I would love to see a 10-core GPU upgrade on the Max chips, to at least be competitive with the new 50 series coming out next year on laptops.
      Any idea if there will be any GPU differences in laptops this year? Last year there were some good GPU upgrades, but not really in core counts.
      The mobile 4090 still decimates the M3 Max GPU in most tasks.

    • @TamasKiss-yk4st
      @TamasKiss-yk4st Před 23 dny

      @GlobalWave1 Only with the charger plugged in, but most people buy their device with more brain... I mean, if you are already glued to the connector, a desktop will give you more performance. But the difference between the M3 Max and the mobile RTX 4090 is not 2.5x, because Apple only uses 60W at max while the RTX can go up to 150W. So it would be easy for Apple to put 2.5x more GPU cores on the chip, pump 150W into it and beat the RTX 4090 in benchmarks. But since it's a laptop, they just want to keep the power consumption at a level where your laptop remains usable for longer, since the whole reason to buy a laptop is that you mainly use it on battery. What is the point of buying a car because it is the fastest car on the planet if it has to stay in the garage? What will happen when you need to move between two garages and you still need your car's performance, but it's pathetically low?

  • @LeicaM11
    @LeicaM11 Před 6 dny

    I would imagine that the M4 is 1.5 times as fast as the M2, but not "1.5 times" faster, because then it would be 250% as fast as the M2.

  • @mbahchiemerie115
    @mbahchiemerie115 Před 26 dny +7

    I think we forget that the X Elite and X Plus are on the 4nm node. Hence the Apple CPUs have a higher transistor density and so could have way more room to create wider execution units and clock them higher with a lower penalty on efficiency. So it's not an apples to apples comparison when you talk about the 9 core M4 Vs the 10 core X Plus.

    • @GaryExplains
      @GaryExplains  Před 26 dny +13

      Yes, and no. The physical process node doesn't necessarily allow the CPU designer "more room" in terms of transistor count. On a higher process node it just means that the same number of transistors take up more space (square mm). However the logic for the wider pipeline etc is the same. Having said that, aiming for a certain clock frequency and a certain thermal envelope is in part determined by the process node. But I think "the Internet" has gone a bit crazy about 4nm can't be compared to 3nm etc, no one was saying that at 14nm vs 10nm or whatever, especially in the Intel world. Is 3nm better, yes, of course, but people buy products not specifications sheets. It is valid and necessary to compare a Windows on Snapdragon PC to a MacBook regardless of the process node, because real people will spend real money on these things and they want to know which one is faster, which one is more power efficient, etc. The argument that you can't compare them isn't going to help anyone.

    • @moundercesar3102
      @moundercesar3102 Před 26 dny +4

      @@GaryExplains What really makes it not an apples-to-apples comparison is the fact that the M4's 10 cores are 4 performance cores and 6 efficiency cores, while the SD X Elite's are 12 performance cores; you should've added the M3 Pro and M3 Max.

    • @GaryExplains
      @GaryExplains  Před 26 dny +6

      I included a much larger comparison in my video on the X Elite. But since this video is neither about the M3 or the X Elite, I think the comparison I have include here is sufficient. The other video is here if you are interested: czcams.com/video/3tv6tVORMdc/video.html

  • @jimjackson4256
    @jimjackson4256 Před 22 dny +2

    Except that they are expensive as hell. Does anybody even need that much power to watch this video?

    • @GaryExplains
      @GaryExplains  Před 21 dnem +1

      To watch this video or any video, no of course not.

    • @half-qilin
      @half-qilin Před 6 dny

      Maybe not for this video. That said, someone might need it to decode a 32K 360FPS video in 2100.

  • @Pushing_Pixels
    @Pushing_Pixels Před 8 dny

    Definitely a step up, but I'm not convinced sacrificing a P-core for two E-cores is a good trade with the 9-core models. I live in hope that one day Apple will stop being so stingy with RAM, but I'm preparing for disappointment.

  • @TechLevelUpOfficial
    @TechLevelUpOfficial Před 26 dny

    Will wait for real world benchmarks, GB6 was updated to support SME just before the M4 was released... It's the last benchmark to take seriously for figuring out the real IPC improvements.

    • @GaryExplains
      @GaryExplains  Před 26 dny +7

      Why is it the last benchmark to take seriously for figuring out the real IPC performance? Would you prefer benchmarks that only use integers and ignore CPU instruction set additions from MMX onwards? 🤦‍♂️

    • @lekejoshua4402
      @lekejoshua4402 Před 26 dny

      @@GaryExplains He is just saying it's fishy that they updated the app to make the leap seem greater, considering also that it has happened many times now.

    • @GaryExplains
      @GaryExplains  Před 26 dny +2

      It isn't fishy, it is part of the business model www.geekbench.com/corporate/#development

    • @ThePowerLover
      @ThePowerLover Před 26 dny

      @@lekejoshua4402 GB already supported things like MMX and AVX, used in Intel and AMD CPUs...

    • @GaryExplains
      @GaryExplains  Před 25 dny +2

      @ThePowerLover @lekejoshua4402 Exactly, more specifically GB 6.1 (I think) supported Intel AMX, and the support for SME2 just brings the Arm side up to par with the Intel side.

  • @maxxroach8033
    @maxxroach8033 Před 26 dny +1

    I’m hoping GoFetch has been fixed with the new M4s.

    • @andyH_England
      @andyH_England Před 26 dny +2

      Of course, and I believe it was fixed on the M3. The M1 and M2 may still have this exploit but I would not worry about it.

    • @mrstuartrobertson
      @mrstuartrobertson Před 26 dny

      @@andyH_England GoFetch was not fixed in the M3, and you should worry about any exploit no matter how remote. Wiki does not mention the M4, and neither does the GoFetch site, so hopefully it is now fixed.

  • @axe-z8316
    @axe-z8316 Před 26 dny +3

    Snapdragon will use 2x the power, I mean they've said so... it's still not even in the same realm.

    • @TamasKiss-yk4st
      @TamasKiss-yk4st Před 22 dny +1

      They even reached 56°C in the same test where the MacBook was at 45°C, and fun fact, the Snapdragon test also measured the loudness and proved that their device wasn't fanless, yet it was still way hotter than the MacBook. So sure, the power consumption will be a big question mark (because based on the higher temperature it used more power, and there is also the cooler issue, which suggests it used even more...); we will see when they finally release.

  • @greecemobile7610
    @greecemobile7610 Před 26 dny

    a18 pro will have 2+6

  • @fai8t
    @fai8t Před 25 dny +1

    8:15 contradicts 10:18

    • @GaryExplains
      @GaryExplains  Před 25 dny +1

      How?

    • @fai8t
      @fai8t Před 24 dny

      @@GaryExplains The M3 is 15,000 and the M4 looks like 18,000 at 8:15,
      but at 10:18 the M3 is 12,000 and the M4 is only 14,500.

    • @GaryExplains
      @GaryExplains  Před 24 dny +3

      Both graphs are generated using the same data and using the same tool. The labels on the second graph are the actual numbers (for both). The first graph is "stacked" which is why I guess you are reading it like that. I will be more careful with stacked graphs in the future.

  • @satishkpradhan
    @satishkpradhan Před 25 dny

    Lol,
    there was a time when Apple would just draw graphs with no scale or just say "fastest iPad ever".
    Now it even specifies the branch prediction improvements like AMD does for Ryzen.

  • @debojitmandal8670
    @debojitmandal8670 Před 26 dny +2

    No, the A18 Pro will get 2+6 and the A18 will have last-gen A17 cores with an enhanced NPU.
    3+5 is too much for a phone; it would overheat like anything, even with graphite cooling.

    • @GaryExplains
      @GaryExplains  Před 26 dny +2

      Yeah, your prediction sounds more reasonable! Good call.

    • @vernearase3044
      @vernearase3044 Před 24 dny

      More likely A18 pro will be 2+6 and A18 will be some kind of binned A18 Pro - or maybe there will _only_ be an A18.
      Apple seriously wants to get every device off of N3B to improve yields and reduce complexity and costs.
      N3B was specially commissioned for Apple since they wanted 3nm and N3E wasn't ready. The sooner they can retire the N3B process nodes the better.

  • @DanielLopezlopno
    @DanielLopezlopno Před 19 dny

    Now my YT videos are going to look amazing.

  • @BeaglefreilaufKalkar
    @BeaglefreilaufKalkar Před 2 dny

    The Snapdragon X Plus and Elite are AFAIK actively cooled. They will have to be compared with the M4 Pro and M4 Max.

    • @GaryExplains
      @GaryExplains  Před 2 dny

      The Snapdragon X Elite and Plus can be used in different thermal setups, with active cooling and without. We will have to wait and see what the different laptops configurations are. Not long now before they become available to consumers.

    • @BeaglefreilaufKalkar
      @BeaglefreilaufKalkar Před 2 dny

      @@GaryExplains I am a bit surprised that there are no real-world test results out there. Curious to see how fast they will run Blender, batch processing in Lightroom, Affinity, video processing in DaVinci etc. compared to the M3 series.

    • @GaryExplains
      @GaryExplains  Před dnem

      The devices aren't out yet, that is why. They start shipping on June 18th.

    • @KokkiePiet
      @KokkiePiet Před dnem

      @@GaryExplains Nobody is doing testing so far?

    • @GaryExplains
      @GaryExplains  Před dnem

      @KokkiePiet How can they test when there are no devices out?

  • @NormTurtle
    @NormTurtle Před 24 dny

    Make a video on the Snapdragon X Elite too.

  • @shalomrutere2649
    @shalomrutere2649 Před 25 dny

    38 Trillion ops/sec on that neural engine doesn't meet Microsoft's minimum requirements to run Copilot on Windows. I wonder how that will work when people use Windows on parallels on the coming Macs.

  • @jonb8633
    @jonb8633 Před 24 dny

    Watching this video using my new 512 mb 11 inch M4 😅

  • @jasonluong3862
    @jasonluong3862 Před 22 dny

    It gets harder to justify buying a Mac for an elderly person who just wants to access the Internet. Thank God for Chromebooks.

    • @The-One-and-Only
      @The-One-and-Only Před 13 dny

      which is okay cause there’s a product for everyone based on their needs

  • @jilherme
    @jilherme Před 24 dny

    finally some IPC gains. X elite is still using 4nm, same as M2, so I will forgive it

  • @29kalel
    @29kalel Před 11 dny

    This is just the base M4; imagine what an M4 Pro or Max will do in a MacBook.

  • @vengirgirem
    @vengirgirem Před 26 dny

    In practice the 10-core version isn't that much better than the 9-core version, even in productivity applications; definitely not worth the cost increase imho.

    • @andyH_England
      @andyH_England Před 26 dny +2

      I think most people upgrading to the 10-core are doing it for the extra storage and/or RAM rather than the extra performance core.

    • @vengirgirem
      @vengirgirem Před 26 dny +2

      @@andyH_England I'm referring to the situation when people specifically spend more money just to get the 10 core variant when they don't need that much storage and ram like with the person who was mentioned in the video

  • @papa-dt1cv
    @papa-dt1cv Před 18 dny

    If they come with 64 cores, 256GB RAM, 16TB storage, plus an additional standalone GPU with 128GB, my kids can use it till they retire. Real sustainability.

  • @truedaito9420
    @truedaito9420 Před 25 dny

    Sorry, I call BS on Apple using the Arm Scalable Matrix Extension 2. Apple had its own implementation for matrix operations, known as the AMX matrix units, which have been part of the M series since the beginning. It's probably a new implementation of these units which has nothing to do with SME2.

  • @ps3301
    @ps3301 Před 21 dnem

    Don't buy any Arm-based laptop until they use Armv9!! Wait!

  • @AndrewMellor-darkphoton

    AMX-like instructions in Apple silicon sound weird.

    • @GaryExplains
      @GaryExplains  Před 26 dny +4

      Why?

    • @ThePowerLover
      @ThePowerLover Před 26 dny

      They already had a proprietary version of AMX, and now they scrapped it.

    • @AndrewMellor-darkphoton
      @AndrewMellor-darkphoton Před 25 dny

      @@GaryExplains That's a complex instruction I'd expect only Intel or AMD to do. I'm guessing that probably takes up to 20% of the die per core.

  • @vmafarah9473
    @vmafarah9473 Před 25 dny

    Apple with their CPU advantage and Nvidia with their AI GPUs could create a revolutionary chip, if both were combined.

  • @paulwoodward8265
    @paulwoodward8265 Před 12 dny

    Snapdragon, even based on the vendor's claims, only wins multi-core by having more cores. On that basis it seems unlikely to be winning on performance per watt.

  • @billymania11
    @billymania11 Před 23 dny

    It's obvious the Snapdragon will not be power efficient. Apple will have a clear advantage in that area.

    • @GaryExplains
      @GaryExplains  Před 22 dny +1

      You might be right, but why "obvious"?

    • @billymania11
      @billymania11 Před 22 dny

      @@GaryExplains Nuvia designed a server chip. It has no low power circuitry. Qualcomm knows that but they have no choice but to roll with the X chip and hope to establish a beach-head before Apple and other more polished ARM chips hit the playing field. Time is of the essence for Qualcomm.

    • @GaryExplains
      @GaryExplains  Před 22 dny +1

      What do you mean by "no low power circuitry". Do you mean efficiency cores or something else? Because Nuvia's initial plan was to make a power efficient server chip, not just a server chip. Obviously it didn't build a server chip but changed course mid-way and aimed for mobile, which would include more "low power circuitry", it didn't just keep its server chip design and put it into a laptop.

  • @NegativeReferral
    @NegativeReferral Před 26 dny

    The M4 in an iPad is like a V12 in a Smart Car with cylinder deactivation you can't turn off.

  • @johndoh5182
    @johndoh5182 Před 24 dny

    It will be interesting to see how this compares to Arrow Lake and Zen 5.
    I'd say it would also be interesting to see how it compares to the Microsoft whatever with the new Snapdragon Elite X chip, but it's Microsoft and I ain't buying it because I REALLY dislike the company so I'm not interested one BIT in that comparison.

  • @mikehawk7307
    @mikehawk7307 Před dnem

    So the M4 is boasting only 1.5X faster than the M2? That is disappointing.