Apple M Chips - The End. Was it even worth it?

  • Published 21 Aug 2024

Comments • 786

  • @QuantumCanvas07
    @QuantumCanvas07 1 month ago +571

    When you ask GPT to write scripts for your video

    • @solidgalaxy3339
      @solidgalaxy3339 1 month ago +6

      😂

    • @sirtra
      @sirtra 1 month ago +27

      This has to be some sort of social experiment or joke.
      How can one person get so much fundamentally wrong and create this video with so much confidence?
      It's like taking a bunch of true statements and putting them into a blender, creating something which isn't quite right but not entirely incorrect either... a weird hybrid unique to AI-generated content.
      "Increasing clock speed AKA current" 😂
      Increasing clock speed generally does require more power, but current is the enemy in silicon chips and the cause of heat - energy that is wasted, i.e. the biggest inefficiency! You want to engineer less of this, not more! (See the power equation after this thread.)
      I refuse to believe a human interested in technology wrote this script.

    • @desembrey
      @desembrey 1 month ago +4

      @@sirtra Dunning-Kruger

    • @rursus8354
      @rursus8354 28 days ago +7

      Thank you for warning me, so that I didn't waste time watching it!

    • @jorgvespermann5364
      @jorgvespermann5364 28 days ago +1

      I don't think it could be this dumb.
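
For reference, the first-order relationship sirtra's reply is invoking is the standard textbook model of CMOS switching power (general physics, not anything taken from the video):

```latex
% Dynamic (switching) power of CMOS logic, first-order model:
%   alpha = activity factor, C = switched capacitance,
%   V = supply voltage, f = clock frequency
P_{\text{dyn}} \approx \alpha \, C \, V^{2} f
% Raising f usually also requires raising V, so power grows faster
% than linearly with clock speed. Illustrative numbers: +20% clock
% at +10% voltage costs 1.2 x 1.1^2 ~ 1.45x the power.
```

The current is then roughly I ≈ P/V: a by-product that shows up as heat, which is why "clock speed AKA current" draws ridicule above.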

  • @iokwong1871
    @iokwong1871 1 month ago +974

    Yet another YouTuber who has no idea what they are talking about when it comes to CPU instruction sets......

    • @yedaoctopus114
      @yedaoctopus114 1 month ago +71

      ChatGPT, make me a script for a new video

    • @cyrusshepherd4902
      @cyrusshepherd4902 1 month ago +2

      its u

    • @Pistol4
      @Pistol4 1 month ago +32

      Arthur is the master of bullshit

    • @im4ch3t3dimachete5
      @im4ch3t3dimachete5 1 month ago +35

      “With a r m chips” says a lot already

    • @micp5740
      @micp5740 1 month ago +18

      How about you add some credibility to your statement by being specific?
      Otherwise you just come across as a troll.

  • @jameshewitt3489
    @jameshewitt3489 1 month ago +482

    "Without getting too technical" - proceeds to demonstrate that the reason you aren't getting too technical is that you literally don't understand it on a technical level.

    • @KicksonAcapulco13-no5rd
      @KicksonAcapulco13-no5rd 1 month ago

      So?

    • @geostel
      @geostel 1 month ago +44

      @@KicksonAcapulco13-no5rd So the author of the video should stop telling BS, since he does not have a clue what he is talking about

    • @KicksonAcapulco13-no5rd
      @KicksonAcapulco13-no5rd 1 month ago +1

      @@geostel True, but we're not CPU engineers either. It's mostly science, mathematics, physics, microprogramming and so on. Most viewers would probably not understand this and would close the video. Sad but true.

    • @YNfinityX
      @YNfinityX 1 month ago +5

      @@KicksonAcapulco13-no5rd 🤦🏽‍♂️

    • @KicksonAcapulco13-no5rd
      @KicksonAcapulco13-no5rd 1 month ago

      @@YNfinityX feel free, cheers👍

  • @beragis3
    @beragis3 1 month ago +125

    Arthur did not do research: at the 10:36 mark he says that TSMC cannot go lower than 3nm. They announced in April a move to 1.6nm, which is smaller than Intel's 1.8nm. Samsung, Intel and TSMC are all racing toward the 1nm barrier with a goal of 2030.

    • @TheJmac82
      @TheJmac82 28 days ago

      nm doesn't mean anything... they are all made-up numbers. Go by transistor density.

    • @ConernicusRex
      @ConernicusRex 25 days ago +2

      Intel's 1.8 is just everyone else's 5nm, renamed.

    • @TheJmac82
      @TheJmac82 25 days ago +1

      @@ConernicusRex I question even that. I would suspect it's actually much larger than that. Intel 7 had a transistor gate pitch of 54nm and a fin height of 53. The main number that matters is transistor density (MTr/mm2). Edit: comparing to everyone else, I suspect Intel 1.8 will be ~3nm TSMC, give or take a little.

    • @echelonrank3927
      @echelonrank3927 23 days ago

      towards net zero nm by 2040

    • @muhammedowais
      @muhammedowais 23 days ago

      it's what happens when you write the script using ChatGPT, which only has info up to 2023 🤣

  • @llampp
    @llampp 1 month ago +140

    2:14 you're saying that a 15% speed increase per generation isn't "that much"? THAT'S MASSIVE for a single-generation jump. (See the compounding arithmetic after this thread.)

    • @MoireFly
      @MoireFly 1 month ago +8

      15% is pretty normal; that's slightly worse than the average for successive AMD Zen generations from Zen 1 all the way up to Zen 5 (for that last one, believing AMD's claims - but they've been truthful on this front for the past 4 gens, so it's pretty plausible). Qualcomm's gen-on-gen increases have also been in this ballpark. And yeah, we all know that Intel has been "struggling" - but it's less that Apple is increasing their lead and more that Intel is falling ever further behind. If anything, the rest of the market appears to be catching up to Apple.

    • @andyH_England
      @andyH_England 1 month ago +12

      @@MoireFly Intel's 13th to 14th gen was a zero CPU upgrade, and if you go back through the history of Intel's monopoly, 15% was rarely seen.

    • @MoireFly
      @MoireFly 1 month ago +3

      @@andyH_England Yes, but as explained in the comment you're replying to - that's not been the norm; it's just Intel, and even for them only for a limited period.

    • @rokor01
      @rokor01 1 month ago +4

      True and kind of sad at the same time. Ten years ago this would have been considered weak, especially in the mobile space; twenty years ago 15% would have been considered a generational refresh; and thirty years ago chip makers would not have released a generation with such low performance gains.

    • @sergioyichiong7269
      @sergioyichiong7269 1 month ago

      Apple can only increase transistors; soon the amount of transistors will make a lot of heat and the pros of using ARM will be nonsense.
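
To put the 15% in perspective: generational gains compound, so even a "modest" per-generation uplift snowballs (simple arithmetic, not figures from the video):

```latex
% Compounding a 15% per-generation uplift over n generations:
%   speedup(n) = 1.15^n
1.15^{2} \approx 1.32, \qquad 1.15^{5} \approx 2.01
% Five consecutive "only 15%" generations roughly double performance.
```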

  • @Suqrat400
    @Suqrat400 1 month ago +192

    He doesn't know anything about chip design or manufacturing. Very bad info in this video. I am an electronics engineer with 15 years of experience in the chip design field.

    • @parthpatel8532
      @parthpatel8532 1 month ago +11

      Ah yes, people can't lie on the internet. I am the lead engineer for Apple and this guy is right

    • @gnstallientood5007
      @gnstallientood5007 1 month ago +5

      20 years of experience criticizing content here; feels great to be right and superior

    • @Mark_Williams.
      @Mark_Williams. 1 month ago +14

      @@parthpatel8532 Don't call people liars when you obviously aren't aware enough about the topic to instead agree with them. You end up looking the fool instead.

    • @cozimo64
      @cozimo64 29 days ago +5

      @@parthpatel8532 Ah yes, genuine people can't exist on the internet, specialists are a myth, and everyone who makes content that validates my preferred narrative is correct.

    • @TheJmac82
      @TheJmac82 28 days ago +1

      @@parthpatel8532 He might be full of it, but he isn't wrong. I mean, almost everything in this video was incorrect. The one that sticks out the most to me was "3nm is the size of the transistor". Actually, "writing code for ARM is much easier due to fewer instructions"... I think that one takes the cake. With "A - R - M computers" coming in a close third.

  • @michaelashby9654
    @michaelashby9654 1 month ago +79

    30% gain isn't impressive?! Ok, let's see you improve the performance of anything in computer hardware or software by just 1%.

    • @TheCiiyaah
      @TheCiiyaah 27 days ago

      Greedy!

    • @echelonrank3927
      @echelonrank3927 23 days ago

      ha ha what u mean lets see? relax, u will not notice such a small improvement 😞

  • @MarbsMusic
    @MarbsMusic 1 month ago +68

    Tell us you don't understand processor design without telling us you don't understand processor design...

  • @amritrosell8561
    @amritrosell8561 1 month ago +183

    If, instead of making assumptions about the architecture, you dug a bit deeper into the differences between the various 3nm nodes these chips are made on, you would perhaps realize the M3 was more of a marketing strategy from Apple to be the first CPU on 3nm - but they did it on a "dead" branch of the 3nm node family, because the branch the M4 uses is very different from the one M1-M3 use. The reason there was so little improvement is mainly that they didn't redesign the chip particularly much, except for removing some parts the M2 Ultra uses to free that area for other things. But now with the M4, the architecture is on a very different 3nm node AND they have done an overhaul of the chip design, as we can see with the M4 in the iPad.
    So yes, the M3 was a bit of a shrewd move, a whole lot of shenanigans, and mostly marketing just to be the first CPU on 3nm.
    But extrapolating from the M2-to-M3 numbers to predict M4 numbers is probably not going to be correct, as both the new 3nm node is much more efficient and the chip design is overhauled to exploit the efficiency of the new node... So no, Apple's chip design isn't dying, it's evolving. But sometimes they jump onto things just to be first, and that might look peculiar...

    • @brunonascimentofavero6097
      @brunonascimentofavero6097 1 month ago +8

      I think your assumption is a bit wrong; the main factors for a new process node to come down in price are iteration/yield and scale. The reason Apple probably launched the M3 so fast after the M2 was so TSMC could pick up more scale and iterate faster on 3nm chips, bringing down costs for both iPhone and Mac chips as well as advancing their 3nm node. One other thing is that the base iPhones still use last year's chips, so only having the Pros on the new node would mean less scale.

    • @TheWallReports
      @TheWallReports 1 month ago +6

      @@brunonascimentofavero6097 Also the channel host was incorrect in describing a 3nm process as meaning the transistors are 3nm in size. 3nm just means that's the smallest feature size fabricated with that technology, NOT the size of the actual transistor.

    • @logtothebase2
      @logtothebase2 1 month ago +4

      M4 will be an improvement but it's not going to be huge; die shrinks are facing other challenges, such as not all features shrinking proportionally (cache RAM, for example). Forget Tesla and Starship - the engineering of ASML, Zeiss and TSMC is the most impressive on earth, by far, and improving it is incredibly, incredibly hard.

    • @ashishpatel350
      @ashishpatel350 1 month ago +4

      apple's entire company is a marketing gimmick

    • @sergioyichiong7269
      @sergioyichiong7269 1 month ago +1

      M chips are on their 3rd gen and have 100 billion transistors. Intel chips are on their 14th gen and have fewer transistors. Do the math, or at least try.

  • @khyleebrahh7
    @khyleebrahh7 1 month ago +27

    Bring back the dislike count, to stop misinformation

  • @vernearase3044
    @vernearase3044 1 month ago +95

    If you don't understand computer architecture and processor design, just make shit up based on what you _think_ is going on.
    Just adding transistors doesn't make the processor faster - adding decoders and making the pipeline deeper, with a lot of instruction prefetch, execution reordering, and branch prediction, is what makes things faster; the transistor count increases to support these things.
    Nuvia was formed from ex-Apple silicon engineers, and the Snapdragon X was a joint project of ARM and Nuvia engineers. They collaborated to build a server chip, then Nuvia was acquired by Qualcomm - so the Snapdragon X is pretty much a bastard child of Apple.
    The M4 is built on TSMC's N3E node whereas the M3 was built on TSMC's N3B node - a custom node built for Apple when N3E wasn't going to be ready in time (and Apple _really_ wanted a 3nm processor). N3B is more complicated to manufacture and has lower yields, whereas N3E is on TSMC's official 3nm roadmap and is compatible with future nodes like N3P which will result in lower cost. M4 has higher memory bandwidth and is (I believe) Apple's first ARMv9 chip.

    • @andyH_England
      @andyH_England 1 month ago +1

      Well explained. I wonder what the repercussions would be if ARM versus Qualcomm were a win for the plaintiff?

    • @vernearase3044
      @vernearase3044 1 month ago +14

      @@andyH_England Microsoft ached for a processor which would compete with Apple Silicon - they released Windows for ARM and to their horror, the machines which ran it best were _not_ their own Surface laptops but Apple's 'M' family computers.
      So … Microsoft tasked their silicon proxy - Qualcomm - to come up with something that would remove this humiliation since they didn't have the silicon chops to accomplish their objective.
      Qualcomm has the ethical standards of an alley cat - they've been extorting the handset market for over a decade by insinuating their IP into cellular standards and promising to deliver it in a FRAND (fair, reasonable, and non-discriminatory) manner, but then turning around and charging three times over for their modems: first for the modem chip, second for a license to the IP in the modem, and third a percentage of the enclosing device's _entire retail price._
      So when Microsoft called on Qualcomm to deliver _at any price,_ Qualcomm stood ready to answer the call.
      Now Qualcomm had the silicon expertise to create the SoC (System on a Chip) - they've been making 'em forever - what they _didn't_ have was the processor design chops to engineer a faster processor. Qualcomm had been building cell phone SoCs but their designs pretty much all used standard ARM reference cores. Qualcomm would take some ARM cores, add memory, a GPU, some cache and abracadabra: a new Snapdragon SoC was born. But what Microsoft wanted was beyond their expertise, so they scoured the market looking for a faster processor.
      Nuvia was formed by a bunch of ex-Apple silicon engineers, and they collaborated with ARM to design a new server processor. ARM had been wanting a new, fast server processor to compete with Intel Xeon processors in the data center, so they provided Nuvia with a cheap architectural license and collaborative services to design a new server processor core.
      When Qualcomm saw what Nuvia had, they acquired Nuvia to put their new processor into a laptop SoC. That's why Snapdragon X has no e-cores - you don't need e-cores in a server chip (though they _are_ handy to have in a laptop SoC). Engineering a corresponding performant e-core would take almost as much additional work as designing the new, powerful p-core.
      When ARM found out they were _pissed._ They'd handed Nuvia a cheap architectural license and collaborative services because they thought they were getting a server core out of the deal - but instead their baby was going into a laptop. If Nuvia or Qualcomm had come to them talking about building a new laptop processor they would've charged 'em _much more,_ but ARM discounted everything since they thought they were designing a processor to penetrate the server market.

  • @diebygaming8015
    @diebygaming8015 28 days ago +16

    You never mentioned that the actual reason they don't just make arbitrarily large chips is that increasing the size of the chip decreases the yield (see the yield arithmetic after this thread)

    • @JeremyPickett
      @JeremyPickett 28 days ago +1

      That is true, and it isn't debatable for mainstream, consumer, prosumer, and even professional use (like dev machines or content creation). But there are some startups angling for the "you want an enormous chip? Hold my beer" market. 🙃 Cerebras, I seem to recall, just uses the whole wafer - which takes waaay more guts than I have. But the real question... can it run Crysis? (I'll see myself out)
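
The yield point can be made concrete with the simplest classical model, a Poisson defect model (real fabs use more elaborate ones, and the defect density here is purely illustrative):

```latex
% Poisson yield model: Y = fraction of defect-free dice
%   D = defect density (defects per cm^2), A = die area (cm^2)
Y = e^{-D A}
% Example with D = 0.2 defects/cm^2:
%   A = 1 cm^2  =>  Y = e^{-0.2} ~ 82%
%   A = 4 cm^2  =>  Y = e^{-0.8} ~ 45%
% Quadrupling the die area roughly halves the yield here, which is
% why arbitrarily large monolithic dies get expensive very fast.
```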

  • @SpaceTimeAnomaly
    @SpaceTimeAnomaly 1 month ago +145

    The 5nm or 3nm is NOT the transistor size -- it is only the size of the smallest structure.

    • @TheWallReports
      @TheWallReports 1 month ago +4

      🎯💯Exactly! It's the smallest feature size that the fabrication technology permits.

    • @jadoo16815125390625
      @jadoo16815125390625 1 month ago +11

      No - the line width for the 5 nm node is ~25 nm. “5” is only a number for marketing

    • @MysterCannabis
      @MysterCannabis 1 month ago +19

      Not even that. There is nothing physically inside that is 3nm in size. The structures on these process nodes aren't that small, but their geometry and architecture make them perform like theoretical regular planar 3nm transistors would. It's just a marketing term, made possible by the introduction of FinFET transistors.

    • @koenignero
      @koenignero 1 month ago +2

      Don't tell him, it will blow his mind

    • @Mikri90
      @Mikri90 1 month ago +2

      @@MysterCannabis yep, the nomenclature lost any sense of relation to physical size a while back. It's basically a useless metric for the general public.

  • @itstehgamer
    @itstehgamer 1 month ago +20

    Arthur *Whiner* seems like a pretty fitting name tbh

  • @Holden_McHock
    @Holden_McHock 1 month ago +37

    Bro's getting destroyed in the comments 💀

    • @CaioFreitas1987
      @CaioFreitas1987 23 days ago +2

      this video is ridiculous

    • @jimmymac2292
      @jimmymac2292 20 days ago +1

      Bruh literally laid out how Apple rinsed and repeated, making their chips bigger, with higher clocks and more transistors. Then said the M3 shouldn't have existed because it... fell in line with what he laid out. Sounds like weird cope

  • @BrentLeVasseur
    @BrentLeVasseur 1 month ago +128

    I’m watching this video on my new M4 iPad Pro. It plays YouTube superfast! I was able to watch this video in half the time!😂

    • @LazyGrayF0x
      @LazyGrayF0x 1 month ago +8

      I whip out my Intel Mac during the Super Bowl so I don't miss half of the halftime show like I would watching it on my M1

    • @Hart-en-Ziel
      @Hart-en-Ziel 1 month ago +1

      Even watching it in 50% of the time was a waste of time

    • @LazyGrayF0x
      @LazyGrayF0x 1 month ago

      @@Hart-en-Ziel word. 2000’s were the bomb. Even dilly dilly was great. Now, ehh

    • @Kobold666
      @Kobold666 29 days ago

      I managed to watch 1 minute and it had nothing to do with my Windows machine. Just a bad video.

  • @dpptd30
    @dpptd30 1 month ago +12

    Correction: the X Elite laptops are NOT the first ARM Windows laptops. There were dozens of Windows laptops with an ARM SoC before the X Elite, all the way back to the Surface RT; this is just ANOTHER attempt by Microsoft to bring Windows to ARM, and so far they still haven't delivered, with multiple app-compatibility issues remaining - most Adobe professional apps, for example, just aren't available. To call this new competition is very misleading; it'd be like saying the Surface Pro X from just a few years ago was a competitor to the M1 MacBooks. They've only caught up in performance; they haven't caught up on the thing that has actually been the problem for Windows on ARM for years: compatibility.

  • @LukaPetrovic84
    @LukaPetrovic84 1 month ago +20

    Maybe check how 'Elite' is pronounced...

    • @mattelder1971
      @mattelder1971 1 month ago +3

      Glad I'm not the only one annoyed by his pronunciation.

    • @SolarLantern424
      @SolarLantern424 29 days ago +2

      Maybe just play Elite for a while, it would be a good start.

    • @DimitarBerberu
      @DimitarBerberu 25 days ago

      Most words in English are not pronounced as written. Crappy spelling ;)

    • @LukaPetrovic84
      @LukaPetrovic84 25 days ago +1

      @@DimitarBerberu My native language is Serbian; we read exactly as written and pronounce it the same way. I know exactly what you mean...

    • @DimitarBerberu
      @DimitarBerberu 25 days ago

      @@LukaPetrovic84 I speak all Yugoslav languages + Aromanian & Esperanto - all phonetic (why stay stubborn & complicate spelling ;)

  • @TheRockingest
    @TheRockingest 26 days ago +4

    I was all fired up to bash this video, but after reading the comments, it looks like everything has been addressed! I have faith in humanity!

  • @gambaloni
    @gambaloni 1 month ago +21

    Something you got wrong: a smaller nm process isn't correlated with the size of the processor. It used to be, around the switch from 14nm to 7nm (give or take), but now it just means a more efficient production process, where improvements come from the lithography (laser improvements, cleaner etching on the wafer, fewer defects, etc). The winner in this nm production race is TSMC, the kings of EUV and FinFET production.
    However, there's still one more hurrah for Moore's law: the Gate-All-Around Field-Effect Transistor (GAAFET). This is why Intel, Samsung and TSMC are all pushing so hard to create foundries based on it; it will be a 'reset' of sorts where they once again compete to take clients by trying to be the first to produce it. Intel is also pushing for backside power delivery (which they market as PowerVia), which might actually push them into the lead for processors. TSMC is also working on backside power delivery, and I think Samsung is looking into it.
    Considering Apple's desire to be first on a new production node, it would be interesting (and possible) for some of the future A/M chips to be made by Intel (Intel's CEO is very interested in making Apple a customer for their chips). We'll see what happens in 2025, when we expect GAAFET to show up.

    • @echelonrank3927
      @echelonrank3927 23 days ago

      i don't think it's just power delivery - backside everything delivery. it should help by attaching all of the working surface of the chip more directly to cooling,
      and therefore help to increase the capacity for bloatware and computational waste

  • @Noobtaco
    @Noobtaco 1 month ago +44

    Who calls it A.R.M? No one. It’s arm. 💪

    • @FlyboyHelosim
      @FlyboyHelosim 1 month ago +7

      Or "E-Light" for Elite. LOL

    • @jinchoung
      @jinchoung 29 days ago +2

      srsly. wtaf

    • @saurabh_tanwar
      @saurabh_tanwar 25 days ago +2

      Because he'd never seen a tech video talking about this thing before making the video

  • @GregoryDumont2
    @GregoryDumont2 1 month ago +17

    It's "eleeete" not "e-light" lol

  • @tutacat
    @tutacat 1 month ago +8

    It's not TSMC's fault that quantum tunnelling/leakage exists.

  • @swdev245
    @swdev245 27 days ago +7

    Is this the spiritual successor to The Verge PC build video?

  • @rozetked
    @rozetked 1 month ago +14

    I don't get it

    • @TimssTims
      @TimssTims 1 month ago

      hahahaha

    • @Name-tn3md
      @Name-tn3md 1 month ago

      learn English

    • @xivxvi263
      @xivxvi263 1 month ago +1

      If it weren't for this comment, I would have thought bloggers get templates for their videos from somewhere.

  • @HeavenSevenWorld
    @HeavenSevenWorld 1 month ago +61

    Snapdragon X chips aren't anywhere close to the M3 in perf/W, both at idle and especially under heavy load - research first instead of making assumptions based on marketing materials. Also, "almost no loss in performance" when executing x86 apps on ARM under Windows is utter bullshit that is misleading your viewers and may lead them to make the wrong choices.

    • @mrrolandlawrence
      @mrrolandlawrence 1 month ago

      the biggest issue is MS. they are the kings of making things overly complicated. i'd wager emulation mode uses 50% more energy too, and that the selected programs were ones that did well on performance - not a fair sample of the market.

    • @sergioyichiong7269
      @sergioyichiong7269 1 month ago +4

      Do research yourself instead of repeating info you read somewhere, with no personal evidence that the Snapdragons are not faster.
      You just can't run a test yourself, so don't talk about data you don't know is real.
      You don't know what research is.
      Have you checked for yourself, in a serious test, that they're slower? I'm confident the answer is NO.
      Research is not watching Max Tech or Marques.

    • @chidorirasenganz
      @chidorirasenganz 1 month ago

      @@sergioyichiong7269 they're slower. Deal with it

    • @DimitarBerberu
      @DimitarBerberu 25 days ago

      Snapdragon X Elite focuses on AI (multiprocessing & more memory, as needed for AI). M3 is for past-gen single processing. Emotions will not save Apple. Huawei is already getting ahead with its complete HarmonyOS NEXT solution.

    • @HeavenSevenWorld
      @HeavenSevenWorld 25 days ago

      @@DimitarBerberu The Oryon cores in the Snapdragon X Elite were designed for a server chip (by engineers who literally stole the recipe for a fast ARM core from Apple), so it's power-hungry by ARM standards, and it gets demolished in every way by the M3 Max, which can be configured with up to 128GB RAM and has a much better GPU for AI use cases in general. So get your facts straight - especially in the case of HarmonyOS, which no one cares about.

  • @PhoenixNL72-DEGA-
    @PhoenixNL72-DEGA- 29 days ago +4

    I remember reading the news about Acorn starting the design of the Acorn RISC Machine for use in their Archimedes line of home computers back in the 80s. ARM has come a long way since then...

  • @MStoica
    @MStoica 17 days ago +1

    I only clicked on this video thumbnail because I had some leftover popcorn. But halfway through it I am amazed that the author hasn't pulled it down yet… he can't be serious 😂

  • @10p6
    @10p6 21 days ago +2

    Apple should never have added the Neural part of the M chips, and instead used the space for x64-compatible multicore processing. At 5nm they could have given the Macs very fast and efficient Windows compatibility, and still used that part of the CPU for other processing when running macOS. Instead they alienated a lot of buyers who need to run Windows software.

    • @aliventurous
      @aliventurous 19 days ago

      Apple wants to convert Windows users to Mac. Why would they design chips that competitors could use?

    • @Alexlfm
      @Alexlfm 12 days ago

      You can’t just design an x86_64 core and use it. You need to license the architecture, same as with ARM, and the only active x86 licensee is AMD and the only x86_64 licensee is Intel. There’s no way they’d ever get a license, and even if they could it would be cost-prohibitive for such a feature.

  • @cmd8086
    @cmd8086 24 days ago +1

    I usually don't dislike videos but this one deserves it. I currently own both an Intel i9-13900K and an Apple M3 Pro, so I know how they perform in the real world.

  • @thewelder3538
    @thewelder3538 28 days ago +3

    You can tell you're a Mac user when you start talking about instruction sets without having the slightest idea what you're talking about.
    x86 processors don't have a "legacy" instruction set. They simply have an instruction set. The only difference between earlier x86 processors and the latest ones is that the later ones have additional instructions added to the base instruction set - stuff like SSE, SSE2, AVX, etc.
    ARM processors have exactly the same thing, just not to the same extent. The main difference between x86 and RISC-based processors is how they work, not their instruction sets. On RISC everything is done on-chip, so you get loads of registers and access to memory itself is somewhat limited; whereas with a CISC processor like x86, access to memory is relatively quick and so you get far fewer registers.
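
A small illustration of the "base set plus extensions" point: x86 software probes at runtime for optional extensions before using them. A minimal sketch using GCC/Clang's x86-only __builtin_cpu_supports builtin (the probing pattern is the point, not this particular feature list):

```c
/* isa_probe.c - build on an x86 machine with: gcc -o isa_probe isa_probe.c */
#include <stdio.h>

int main(void) {
    /* Every x86-64 CPU implements the base instruction set (which     */
    /* already includes SSE2); SSE4.2, AVX, AVX2 and friends are       */
    /* optional additions, so software checks for them before use.     */
    printf("sse4.2: %s\n", __builtin_cpu_supports("sse4.2") ? "yes" : "no");
    printf("avx:    %s\n", __builtin_cpu_supports("avx")    ? "yes" : "no");
    printf("avx2:   %s\n", __builtin_cpu_supports("avx2")   ? "yes" : "no");
    return 0;
}
```

ARM has the equivalent idea - optional features are discovered through OS-provided mechanisms - which matches the commenter's point that both ISAs grow by extension.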

  • @froschfreak1699
    @froschfreak1699 1 month ago +22

    I think we shouldn't expect a revolution with every minor processor update. The M2 gave me Nanite support in Unreal Engine 5; the M3 added hardware ray tracing. These are important steps forward for a lot of users. I am anything but disappointed.

    • @newyorkcityabductschild
      @newyorkcityabductschild 1 month ago +5

      these non-tech-savvy YouTubers have no clue about the actual advancements, they just expect raw power and not refinement.

    • @mrrolandlawrence
      @mrrolandlawrence 1 month ago +2

      totally. but i'd also say the perf bumps are actually quite good; if i recall the previous Intel Mac bumps in speed, i can't recall them being too impressive. i do remember sub-2hr battery life when editing video on my Intel MBP, though. i will get the M4 so i can run local LLMs for analysis of financial datasets. otherwise i'm still stoked with my M1s.

  • @EnriqueRivera-sz2ph
    @EnriqueRivera-sz2ph 15 days ago

    I showed this to my EE-292L professor and he laughed pretty heartily. He was slightly bothered that over 100k people saw this and potentially believe the speaker knew what he was talking about.

  • @bujin5455
    @bujin5455 1 month ago +42

    The analysis of what Apple is up against is pretty solid. But the analysis regarding how close the Snapdragon is, is pretty optimistic to say the least. I also think the industry at large seriously underestimates just how hard it is to do a complete architecture switch. Apple is the only company on earth to do it successfully, and they've done it successfully multiple times, and at this point, they make it look easy.
    Microsoft on the other hand has tried many times, and has yet to do it successfully. Largely this is because of mixed incentives. Microsoft doesn't control the whole stack, and one of their largest areas of strength is their legacy code base and vast software compatibility, especially for legacy systems, which M$ caters to more than almost any other company.
    Also, these Snapdragon chips do not provide Apple Silicon-like performance while providing AS-like power efficiency (which was the game changer); they're actually as bad as x86, while not providing x86 performance for all that x86 software out there, and MANY software houses just aren't going to be incentivized to update their code, assuming they're even still around to do it. Additionally, there are loads of important things they can't run - many video games, for instance (which is a forcing function in the industry) - and other productivity titles as well.
    This is going to be a VERY difficult transition, because you don't have a single hardware/software stakeholder who can manage the whole thing and force the move forward. The real question isn't whether Apple can maintain their lead; it's whether the PC industry can actually manage to switch. And of course AMD and Intel will do nothing to help push the industry in that direction - in fact they're hugely invested in making sure it doesn't happen. It's quite possible that vanilla ARM PCs will fail to gain traction, though Apple's success with the M-series chips does provide a measuring stick which will help incentivize people to try to make it happen. So maybe - but I don't think Apple has to "worry." After all, what Apple really got out of the move was being the master of their own destiny, being able to bring all of their software under a single architecture, and all the strategic and economic advantages of scaling their own silicon. Kicking x86's butt was just the cherry, not the icing, let alone the cake.

    • @SteelyEyedH
      @SteelyEyedH 1 month ago +4

      Thanks. Good summary.

    • @DoublePlus-Ungood
      @DoublePlus-Ungood 1 month ago +3

      As much as MS seems to hate legacy they ARE legacy. Of course they get weak in the knees at the thought. Apple can do it cuz Apple can put out a new $2400 microwave that doesn't even fit a pizza slice with the wrong plug and people would wait in line to buy it.

    • @DimitarBerberu
      @DimitarBerberu 25 days ago

      MS is a software company & much better than Apple at that. Apple is a niche hardware co & much better at that. Huawei is coming out on top of this marketing jungle with better human capital & Asia behind their back. Watch for HarmonyOS NEXT ;)

  • @Elkarlo77
    @Elkarlo77 25 days ago +1

    A few things:
    1) x86 processors have been RISC processors with a CISC decoder up front since 1992, and the last development of the x86 stage was in 2011 with the SSE4.2 iteration of those chips, which is for media processing. That makes them much more complicated to program in assembler, but that's what compilers and higher-level languages are for.
    2) The problem Apple faces is the point of diminishing returns from shrinking. Down to 10nm, everything still profits. But going down to 5nm, only compute units gain ~60% efficiency; memory cells gain only ~40%, and IO parts only ~20%. Going even lower, this gets more and more pronounced. That's the reason AMD, and now Intel, produce chiplets: they keep some parts at 6nm and 12nm to keep them cheap, while other parts are produced at 7/5nm and now 3nm. The ARM architecture depends massively on cache memory in its pipelines for good performance, and that's the problem the M chips now face: the performance boost the M1 saw came from a restructuring and a lot of cache, which was a brilliant move, but that performance relies on the cache in the chip, and since cache doesn't improve much with shrinking, it becomes the bottleneck. Apple needs to put ever more cache in to get the performance improvement they want. But memory is one thing: slow and hot. And Apple is at the balancing point where increasing the cache costs more and more power for less and less gain, so more and more needs to be done to balance these problems out.
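
Taking the comment's per-block numbers at face value (60% / 40% / 20% density gains for logic / memory / IO - illustrative only, as is the area split below), the blended benefit of a full-chip shrink works out like this:

```latex
% Hypothetical die: 50% logic, 30% SRAM, 20% IO by area.
% Area scale factor per block, from the quoted density gains:
%   logic 1/1.6, SRAM 1/1.4, IO 1/1.2
\frac{A_{\text{new}}}{A_{\text{old}}}
  = 0.5\cdot\tfrac{1}{1.6} + 0.3\cdot\tfrac{1}{1.4} + 0.2\cdot\tfrac{1}{1.2}
  \approx 0.31 + 0.21 + 0.17 = 0.69
% vs. 1/1.6 ~ 0.63 if the whole die scaled like logic: the SRAM- and
% IO-heavy blocks drag the whole-chip gain down, which is the chiplet
% argument - keep the parts that don't shrink on older, cheaper nodes.
```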

  • @rsdotscot
    @rsdotscot 1 month ago +3

    The transistors are not 3nm, it's just called the '3nm process'. Anything smaller than ~7nm and you begin running into quantum tunnelling errors.
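
The tunnelling claim follows from textbook quantum mechanics: leakage through a thin barrier falls off exponentially with its width, so every shrink of the insulating layers buys an exponential rise in leakage. Schematic form only (the WKB approximation, with no device-specific numbers):

```latex
% WKB tunnelling probability through a barrier of width d and height phi
% (m = electron mass, hbar = reduced Planck constant):
T \sim \exp\!\left(-\frac{2d}{\hbar}\sqrt{2m\phi}\right)
% The exponent is linear in d, so e.g. halving a barrier that gave
% T ~ e^{-10} ~ 4.5e-5 yields T ~ e^{-5} ~ 6.7e-3: about 150x more leakage.
```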

  • @IOOISqAR
    @IOOISqAR 1 month ago +3

    Those Qualcomm chips each have 12 performance cores; you can't compare them to the baseline M3.

  • @TransCanadaPhil
    @TransCanadaPhil 29 days ago +4

    I’ve grown out of caring about minutiae like this. Still using an Intel i5 iMac from 2015 to edit my Final Cut videos; works fine. I wonder about this new class of tech enthusiasts who seem more interested in shopping for a shiny new object every year rather than really learning and using their gear. I own a piano that’s 40 years old; works fine. I don’t salivate about replacing it every year or claim (as tech journalists often do) that Steinway or Yamaha is “finished” or “dead” because people aren’t rushing out to replace their perfectly good pianos every year. The computer industry needs to become more like every other good: long-lasting products are considered “good quality”, things people want to keep and not constantly replace. There’s just this odd “immaturity” that pervades the tech journalism sector, a lack of life experience and long-term maturity. It’s always like listening to a 6-year-old child opine about the latest piece of candy being the “greatest ever” that he must have had yesterday.

    • @psyker4321
      @psyker4321 25 days ago

      Yep, no reason for them not to continue with AMD dedicated GPUs. Now their OS lags like hell on laptops and even the M2 Mac mini, while my 2017 MacBook Pro is smooth and more usable

  • @namd3
    @namd3 1 month ago +2

    Fun Fact: Not all of the transistors on the chip will be 3nm

  • @gustavinus
    @gustavinus 1 month ago +2

    ARM is just as old as x86. It is becoming king because of the SoC: since it draws little power and produces little heat, it is better suited to embedding in SoCs.

  • @alonsolugo2974
    @alonsolugo2974 1 month ago +3

    When you go lower than 3 nm you start entering the realm of quantum effects, and that stuff is messy

  • @popquizzz
    @popquizzz 1 month ago +3

    No, No, NO! At 6:37, the 3nm process does not mean that the transistor is 3nm in size. That is inherently wrong and deceiving. In fact, a transistor in a gate array and a transistor used in memory storage could both use the same 3nm process and still be very different in size.

  • @MackGuffen
    @MackGuffen 22 days ago +1

    My Ryzen 5900 became a lamp stand once I received my M1 MacBook Pro. Even if the updates are only 15%, which I don’t need yet, that’s still pretty good - plus Macs just run, period!

  • @lemmonsinmyeyes
    @lemmonsinmyeyes 27 days ago

    'That's not how this works. That's not how any of this works' - that commercial quote is very apropos.

  • @Hardwaregeekx
    @Hardwaregeekx 27 days ago +2

    Personally, the M1 works just fine for me. Long battery life, low heat and low power consumption are where it's at in a notebook. Increasing power consumption and heat to the point where you actually need a fan is a real turn-off for me.

  • @ronkemperful
    @ronkemperful 1 month ago +53

    Great review. Eventually the laws of physics will be the limiting factor for chip manufacturing, regardless of who makes the chips. The next step will have to be rewriting the code of operating systems in general. I remember when a graphical operating system could run in just 4 MB of RAM; now 4 gigabytes is the minimum for Windows 11. Features keep being added to every OS, but in reality a lot of deadwood and bloat has been added too. Computers have increased in speed and bandwidth since the Mac came out in 1984 and Windows 3.0 followed, but only so much can be improved without running into the laws of physics… atoms cannot be made smaller, but operating systems can.

    • @jeffersonmp4
      @jeffersonmp4 1 month ago +3

      Quantum computing maybe?

    • @jimtipton8888
      @jimtipton8888 1 month ago +6

      What a great comment! What would it be like if the industry focused on operating systems and software?

    • @Pipe_RS91
      @Pipe_RS91 1 month ago +1

      ​@@jeffersonmp4 That is quite far from consumer computers right now.

    • @minddrug709
      @minddrug709 1 month ago +2

      Wait until we go subatomic

    • @axlrose357
      @axlrose357 1 month ago +1

      Yeah, mediocre VGA was 640x480 with 8-bit color. Now it's 4K with 24-bit, so computers need a lot more memory to deal with it.

  • @RaniRani-zt2tr
    @RaniRani-zt2tr 1 month ago +30

    The M1 MacBook Air is still impressive and I’m trying to get it new now

    • @mavfan1
      @mavfan1 1 month ago +2

      what reasons are there that you have not succeeded?

    • @RaniRani-zt2tr
      @RaniRani-zt2tr 1 month ago +4

      @@mavfan1 money reasons🤣😂

    • @yourlocalriri123
      @yourlocalriri123 1 month ago +4

      700 dollars new from Walmart is a steal!

    • @wthilmi
      @wthilmi 1 month ago +1

      It still has the best price-to-benefit ratio. Still using mine now.

    • @RaniRani-zt2tr
      @RaniRani-zt2tr 1 month ago

      @@yourlocalriri123 I know bro

  • @krzysztofpelon5633
    @krzysztofpelon5633 1 month ago +1

    6:35 The smallest dimension in the die structure is 3nm; a single transistor is much bigger.
    7:44 4. Optimisation in the current architecture

  • @kylemvanover
    @kylemvanover 1 month ago

    I’m just here for the arguments about iPads replacing Macs, where y’all at? Guess I clicked on the wrong video.

  • @jeffchastain2977
    @jeffchastain2977 1 month ago +16

    The difference between the Intel chips and the M1 was always going to be a huge jump. But when you are iterating within the same class of architecture, gains are going to slow. Anyone who upgrades at every new release is an idiot. But my M1 MacBook Pro was a big jump over my Intel i9 MacBook Pro, and my M3 Pro MacBook Pro is a big jump ahead of my M1 MacBook Pro. To call Apple "done for" is ridiculous. If Snapdragon lives up to its hype/specs, and they put it into something more durable than the crappy, fragile Windows-based computers that make up that market today, and Microsoft can make their operating system into something actually as great as macOS, then they might give Apple a run for its money. Until then I will stick with Macs.

    • @cogmission1
      @cogmission1 1 month ago +1

      I also commented about this. Using innovative ARM chips to run a Windows operating system is like putting lipstick on a pig. 🙂

  • @shantooobeg
    @shantooobeg 28 days ago +2

    This is what happens when a farmer tries to be a pilot for a day. That's the kind of information he is providing in this video.

  • @fanshaw
    @fanshaw 1 month ago +1

    You might be misunderstanding increased power draw. At a very low level, all chips do the same operations. ARM famously has lower performance, but much lower power draw. That's because desktop-oriented chips (things with more battery/power than a phone) do more speculative execution - they execute both possible outcomes of a branch in the hope that the branch selects one of them and the result can be used without waiting for the outcome of the branch test. Power is wasted on unused operations, but when you get a hit on a successful operation, you don't have to refill the entire instruction pipeline; you can just merge it back in.
    What Apple have done is steer the industry in the direction of the SoC for desktop usage. This is where much of Apple's performance comes from - it's all on-chip. That has drawbacks too. You can't add memory, and Apple would have difficulty justifying production lines for M4 chips with 512G of memory on them - so just because Apple can expand the processing power doesn't mean you'll get a balanced system, or a system which allows you lots of memory but with light processing power, or whatever it is you need.
    Apple also relies on hardware acceleration to make things fast, so new standards are difficult to include and new hardware is needed for new features. In Apple's case that means a new CPU, new memory, new accelerators, new peripheral interconnects - everything, because it's all on-chip. For Apple to improve, it has to break with all its old systems which don't have the next-gen feature. Apple want to own everything. That means everything has to be done by them. This might be OK if you're cycling your phone every two years, but that's not what you want for workstations.
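
The speculation effect fanshaw describes is easy to observe directly with the classic sorted-vs-unsorted experiment - a minimal sketch (timings vary by CPU, and at high optimization levels the compiler may replace the branch with a conditional move, which hides the effect):

```c
/* branchy.c - build with: gcc -O2 -o branchy branchy.c
 * Summing values above a threshold is typically much faster on sorted
 * data: the branch becomes predictable, so speculative execution almost
 * never has to throw work away and refill the pipeline. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 10000000

static long sum_above(const int *v, int n, int threshold) {
    long s = 0;
    for (int i = 0; i < n; i++)
        if (v[i] > threshold)          /* the branch the predictor must guess */
            s += v[i];
    return s;
}

static int cmp_int(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

int main(void) {
    int *v = malloc(N * sizeof *v);
    for (int i = 0; i < N; i++) v[i] = rand() % 256;

    clock_t t0 = clock();
    long a = sum_above(v, N, 128);     /* random order: ~50% mispredictions */
    clock_t t1 = clock();

    qsort(v, N, sizeof *v, cmp_int);

    clock_t t2 = clock();
    long b = sum_above(v, N, 128);     /* sorted: branch is almost free */
    clock_t t3 = clock();

    printf("unsorted: %.3fs   sorted: %.3fs   (sums: %ld %ld)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t3 - t2) / CLOCKS_PER_SEC, a, b);
    free(v);
    return 0;
}
```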

  • @billraty14
    @billraty14 25 days ago

    Clock speed isn't current, but current draw is related to clock speed. Clock speed is how often the chip changes state. The relation to current draw and heat is a by-product of needing to successively charge and discharge transistor gates, which in CMOS act like capacitors.

  • @ThreeBeingOne
    @ThreeBeingOne 29 days ago

    I’ve literally never been happier with any other design. M1 🤘🏾

  • @jimcabezola3051
    @jimcabezola3051 1 month ago +1

    Elite is pronounced "ee-LEET." It's not pronounced "ee-LIGHT." French is hard, but...not that hard.

  • @Holycurative9610
    @Holycurative9610 1 month ago +1

    M1 to M3 was a 25% increase and you think that's not impressive from a new producer of CPUs? If the M1 had been the equal of a 2nd-gen Core 2 Duo I would understand your POV, but it wasn't, and to get a 25% jump in only 2 generations is nothing short of miraculous. I don't use Apple because of their anti-repair practices, but that doesn't mean I don't appreciate good stuff.

  • @woolfel
    @woolfel 7 days ago

    Let's be honest: the reason Qualcomm managed to catch up is that they hired former Apple M1 architects and engineers. Competition is good and there's no magic. It takes a ton of work to make it happen. Apple was never going to keep the same level of improvement with each new version. Before the M1, Intel used a tick-tock approach to rolling out new architectures. Why are people surprised the improvements have slowed down?
    The reason for the M3 was to gain experience with a new node. You don't magically get great yields on a new node. To anyone complaining: try making machines that can create chips with the same density as the N3 node. Someone has to be first, and someone has to be the first company to work through the pain.

  • @k98killer
    @k98killer 25 days ago

    This is a good example of why it was a mistake to hide dislikes.

  • @funkelator
    @funkelator 26 days ago

    Watching this on a 2015-model computer running an Intel Core i3-6100U that I bought for $70 US in 2019. Currently running an up-to-date distro of Linux (Zorin). Everything works, and the video is playing back flawlessly.
    Guess I'll continue to keep on keepin' on with this setup, maybe look at a 2024 computer running an M-series processor or a Snapdragon in a decade or so...

  • @asadanik5987
    @asadanik5987 27 days ago +1

    I'm still not feeling bad with my Intel MacBook Pro 13-inch with 16GB RAM, and I believe it's worth it for 2024/2025/2026. Nothing said here is hype: it really is my main daily machine as a software engineer.

  • @JM_2019
    @JM_2019 1 month ago +2

    There is no real need to make CPUs faster every couple of months. That might be nice vendor competition, but it will not decide what people buy.

  • @niv8880
    @niv8880 27 days ago

    I don't care if Apple falls behind Microsoft: I will never be a Microsoft customer. I run Linux and Apple at home - it's all I need.

  • @MeinDeutschkurs
    @MeinDeutschkurs 1 month ago +1

    Are the TOPS figures for the Apple chips based on the ANE (Apple Neural Engine)? I just use the GPU on my M2 Ultra.

  • @27baltimore
    @27baltimore 27 days ago

    The problem is it's not really competition, because Apple has the software: the OS. Snapdragon doesn't have any OS that is custom-made for Snapdragon processors. That is the big difference, and there's no competition until all of that is under one roof.

  • @citywitt3202
    @citywitt3202 27 days ago +1

    Man, so many thoughts; here are the top three.
    1. You’re a good presenter, but you need to focus on getting stuff right over it looking right.
    2. Yeah, M3 was filler and it’s clear they aren’t sticking with it, but the iMac has always used laptop components, since at least as far back as the aluminium Intel iMac days in 2007 - I don’t know about earlier Intel models, and definitely not the G5. With that in mind it makes complete sense that it uses the M3.
    3. Completely wild how Apple builds an insane chip with insane graphics, yet I get perfectly respectable performance from my AMD 5700G with integrated graphics at less than ¼ the cost. This battle will not be won on specs. It will be won on real-world stuff like battery life, whether my games run well enough for me, and what happens when it breaks.
    Bonus point: what happens when my PC or non-Mac laptop breaks is I get new parts and fit them for under £100. When my Mac breaks, it’s time for a new machine at a cost of over £4000 to match the spec.

  • @davidlt
    @davidlt 1 month ago +1

    The ISA is not a significant factor these days when going for performance. A new major micro-architecture design takes 3-5 years, so you are unlikely to see significant year-to-year improvements. The process node alone gives only a fraction of the improvement (and mainly for logic; analog parts [incl. SRAM] don't scale anymore). A lot of improvement these days also comes from packaging itself. The main thing providing the significant performance and efficiency of Apple's M1 was the micro-architecture design: they built some large and wide OoO cores.
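
One concrete way to see what "large and wide OoO cores" buy: a reduction written as a single dependency chain can only retire one add per chain step, while independent accumulators give the out-of-order machinery parallel work. A minimal sketch (hypothetical micro-benchmark; the exact ratio depends on the core):

```c
/* ilp.c - build with: gcc -O2 -o ilp ilp.c   (no -ffast-math, so the
 * compiler must preserve the FP dependency chains as written).
 * sum1 is one long serial chain of additions; sum4 exposes four
 * independent chains that a wide out-of-order core can overlap. */
#include <stdio.h>
#include <time.h>

#define N 4096
#define R 200000
static double v[N];

static double sum1(void) {
    double s = 0.0;
    for (int r = 0; r < R; r++)
        for (int i = 0; i < N; i++)
            s += v[i];                     /* each add waits on the last */
    return s;
}

static double sum4(void) {
    double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    for (int r = 0; r < R; r++)
        for (int i = 0; i < N; i += 4) {   /* four independent chains */
            s0 += v[i];     s1 += v[i + 1];
            s2 += v[i + 2]; s3 += v[i + 3];
        }
    return (s0 + s1) + (s2 + s3);
}

int main(void) {
    for (int i = 0; i < N; i++) v[i] = 1.0 / (i + 1);
    clock_t t0 = clock(); double a = sum1(); clock_t t1 = clock();
    double b = sum4();    clock_t t2 = clock();
    printf("1 chain: %.3fs   4 chains: %.3fs   (sums: %.3f %.3f)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, a, b);
    return 0;
}
```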

  • @mitfreundlichengrussen1234

    dream on....

  • @clintmiller88
    @clintmiller88 1 month ago +1

    I stuck with my M2 because yields were so low initially on the M3 - I knew something was up. Apple bought up all of the 3nm capacity; they kept the rest of the market from even getting 3nm chips. That was a power move.

  • @magoostus
    @magoostus 24 days ago

    I would've liked to hear about the ARMv9 architecture and how matrix-multiply math is significantly faster on the M4, and a mention that the M3, M2 and M1 are ARMv8.

  • @LV-ii7bi
    @LV-ii7bi 27 days ago +1

    18 minutes to convey absolutely no criticism at all, as if their shit was flawless

  • @Kr33py
    @Kr33py 29 days ago

    Thank you commenters for saving me 20 minutes

  • @junaidtariq7466
    @junaidtariq7466 22 days ago

    The End? They literally made the entire industry switch to ARM64. Was it worth it? Hell yeah. M chips are amazing

  • @spookyghost7524
    @spookyghost7524 27 days ago

    the transistors themselves don't shrink with a die shrink; it's the spacing between them that gets smaller - this is what is measured in nanometers

  • @sorostube1186
    @sorostube1186 25 days ago

    Well, even though it says 3nm, only the smallest lanes are 3nm. There are still some circuits that are 5 and even 8nm in size.
    The biggest reason Apple is rushing out the M4 is the unfixable hardware side-channel attack that researchers found at the start of the year, affecting M1-M3 CPUs, which they were able to use to pull private keys from the chips.

  • @clintmiller88
    @clintmiller88 1 month ago +2

    M4 is the first real 3nm chip

  • @BENNETT_ELI
    @BENNETT_ELI 1 month ago

    Just bought the M1 Mac mini; it's so good. I don't see myself upgrading for the next couple of years unless they improve massively.

  • @Plazman
    @Plazman 28 days ago

    Whatever their reasoning for coming out with the M3, I'm pretty sure it wasn't "to be first." That's not Apple's MO.

  • @JAFOpty
    @JAFOpty 1 month ago +5

    I'd phrase it like: "They went with 3nm to be the first AND to have something to show in the keynote with a higher number." I seriously doubt most Mac users know or care about the technical aspects; they just see M3 > M2.

    • @davidbiagini9048
      @davidbiagini9048 1 month ago +3

      A big part of Apple's show is for Wall Street, not the users. Apple fanboys and fangirls will buy anything Apple releases - it's Wall Street that really matters to Apple.

  • @javiej
    @javiej 26 days ago

    What most Apple reviews get wrong is that Apple's true competition is not Microsoft or Intel. The real competition for Apple is... Nvidia.
    Apple products hold up pretty well against any other computers as long as those don't have Nvidia GPUs. But machines with an Nvidia GPU are better at most graphics tasks, such as AAA gaming, VR/AR, machine learning, CAD/CAM, 3D applications, Nuke/VFX, Resolve, and so on. Apple machines are still great for video editing, but that is also true of Nvidia-based systems.

  • @supernova874
    @supernova874 27 days ago

    That E-Lite gets me ...

  • @iansrven3023
    @iansrven3023 1 month ago +1

    The M1, whilst good, was much closer to the Snapdragons of its time than you would know from Apple's marketing

    • @TheJmac82
      @TheJmac82 28 days ago

      The only real advantage Apple has had is using the latest process node; the problem is new nodes don't arrive fast enough for them to keep that pace. The only really good thing I can say about them is the quad-channel memory. PC is better in almost every other way, minus having an apple on the back of your computer to look cool.

  • @dirtyharry53-vo4id
    @dirtyharry53-vo4id 1 month ago

    Competition is the key. Without the M1 chip, Microsoft would have bothered us with the lousy x86 architecture for the rest of our lives.

  • @talldarkstrangerpr
    @talldarkstrangerpr 1 month ago +8

    The M1 MacBook Pro Max is still kicking butt. I wouldn't take Apple out of the equation yet. It took the competition four years to catch up with them. We'll see.

    • @newyorkcityabductschild
      @newyorkcityabductschild 1 month ago +5

      well, 4 years and they did not quite catch up; they just threw more cores at it

    • @yayinternets
      @yayinternets 1 month ago

      Agreed. I have the last version of the MBP M1 Max and it’s still great. I'll keep it for a long time, just like I have my previous ones. I easily get 2-3x more life from these than I would from a PC laptop.

    • @psyker4321
      @psyker4321 25 days ago

      Does the OS lag like hell like my M2 Mac mini?

    • @talldarkstrangerpr
      @talldarkstrangerpr 25 days ago

      @@psyker4321 Not at all. How much memory does yours have?

    • @psyker4321
      @psyker4321 25 days ago

      @@talldarkstrangerpr 16GB, but it cannot scroll smoothly on a 4K monitor, so I just use it for CPU-intensive build tasks.

  • @Alan_Skywalker
    @Alan_Skywalker 27 days ago

    A transistor on a 3nm process isn't 3nm in size. It's more like a little over 20nm, if I remember correctly.

  • @youpa
    @youpa 1 month ago +1

    Thank you comment section for saving me time!

  • @rommellagera8543
    @rommellagera8543 1 month ago +11

    An old dev here. Last year I bought my son a Mac mini M2, 8GB/512GB, at 43,000 PHP.
    I recently bought a Beelink SER7 PC with a 7840HS, 32GB/1TB, at 30,000 PHP - much smaller, quiet (you can hardly hear the fan) and probably as powerful as the M2 in most use cases.
    As an added bonus, years down the road I have the option to upgrade the memory to 64GB or use a 2TB or 4TB SSD. Apple should have made the mini upgradable, since it does have a bigger chassis.

    • @mrrolandlawrence
      @mrrolandlawrence 1 month ago +1

      apple spent ages on the mac pro - but could not get DIMM memory to work fast enough to compete with the on-chip DRAM. distance matters.

    • @chidorirasenganz
      @chidorirasenganz 1 month ago

      The GPU is 50% slower in raw compute, and in 3D rendering even more. In CPU tasks they are mostly on par. The M4 will be significantly faster though

  • @qwertyzxaszc6323
    @qwertyzxaszc6323 1 month ago

    The reason I immediately bought an M1 Mac was because of how amazing the iPad Pro was. It was much more responsive and faster than my desktop Windows machine for most things. Now with the same chip designers we have new Qualcomm chips and Windows finally has decent laptops. It was definitely worth it.

  • @johnscaramis2515
    @johnscaramis2515 27 days ago

    6:35 Sorry, but the term "architecture" usually refers to how the CPU is built up internally: instruction set and so on.
    For manufacturing, the terminus technicus is "node".
    And usually your CPU architecture is designed around a defined node with defined capabilities and defined limits.

  • @daveh6356
    @daveh6356 1 month ago +1

    M3's ray tracing was apparently bound for the M2 but dropped due to poor power performance - I guess N3B solved that.

  • @thomaslechner1622
    @thomaslechner1622 1 month ago +5

    5 nm to 3 nm is not 5/3 times the number of transistors per area, but (5/3)^2, obviously!
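
Spelled out, since the video apparently scaled the node names linearly: transistor density goes with the inverse square of the linear feature size (idealized - node names are marketing labels, as other comments note):

```latex
% Idealized density ratio if features really shrank 5nm -> 3nm:
\frac{\text{density}_{3\,\mathrm{nm}}}{\text{density}_{5\,\mathrm{nm}}}
  = \left(\frac{5}{3}\right)^{2} \approx 2.78
% Real 5nm -> 3nm gains are well below this ideal, since SRAM and
% analog blocks barely shrink.
```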

  • @rajkarayadan8080
    @rajkarayadan8080 25 days ago

    "Competition is catching up faster" - yes, but so what? Before the M series, Apple was using Intel chips just like their competition.

  • @JamesCaccamise-ft3ds
    @JamesCaccamise-ft3ds 12 days ago

    Apple did plan for the M3 chip to have the design and technology eventually used in the M4. But TSMC was late in delivering the promised high-performance/efficiency 3nm manufacturing technology.
    So Apple reverted to a backup plan of using tweaked A16 and M2 designs more compatible with the early, lower-performance/efficiency 3nm technology. Not ideal, but they couldn't just reuse the previous A16 and M2 designs with no performance gains for new products; at a minimum they had to release iPhone 15 Pros with some Apple Silicon upgrades.
    Apple was smart to concentrate chip design efforts on the promised higher-performance/efficiency 3nm technology. Worst case (which happened), they could use the slightly tweaked old design with the stop-gap lower-performance 3nm technology for the A17 and M3 based products. Clearly not the planned performance upgrade, but an acceptable moderate performance upgrade as a stop-gap solution.
    TSMC's failure to deliver the promised high-performance/efficiency 3nm technology threw a big monkey wrench into Apple's plans, but I think Apple adapted well.

  • @subwaygaragemusic
    @subwaygaragemusic 28 days ago +1

    Typical tech-bro video... bruh
    Typing this from a dual-core MacBook Pro

  • @JoTokutora
    @JoTokutora 26 days ago

    I'm still fine with my 2020 iMac with the 10900 and the RX 5700. No need to upgrade

  • @StuffYouMightLike
    @StuffYouMightLike 1 month ago

    Tell us, who has all of TSMC's 2nm and 3nm capacity booked up for the next 2-3 years? It's Apple. Saying "Apple is done" is ridiculously disingenuous.

  • @v1kt0u5
    @v1kt0u5 28 days ago +1

    5:36 4th benefit: Battery Life

  • @svenshruufx7380
    @svenshruufx7380 28 days ago +1

    The video is very inaccurate. The circuits aren't "printed" on a wafer. And 3nm is just a marketing term; the actual smallest structure sizes are between 23-45 nm, depending on what you are looking at. Things you can find out within 2 minutes of research!

  • @busywl69
    @busywl69 28 days ago +1

    this, lol. the internet has way too many 'experts'.

  • @kimeraevent
    @kimeraevent 1 month ago +14

    Comparing the base M-series SoC to Qualcomm's top-tier Elite ARM SoC is wild. Do you hear yourself? You may as well compare a Ryzen 3 to a Ryzen 9 or Intel i9. You're comparing the weakest version of the SoC in a Mac to the strongest version of the competitor's SoC. What are you going to say next - that Rockchip SoCs are competition for the Snapdragon 8s?
    The Qualcomm X Plus is barely comparable to the M1 Pro from 3 years ago. The top-end X Elite is on par with the M3 Pro, and that SoC is all performance cores. There is no actual understanding of the comparisons in this video.

    • @nikhilt497
      @nikhilt497 1 month ago +5

      The price segment is what matters; the X Elite and Plus target the base M3 segment

    • @rainmannoodles
      @rainmannoodles 1 month ago

      It’s also true that even though the X Elite has good CPU performance (at least compared to the MacBook Air base model), its GPU is really weak. The M chips are more well balanced.
      I’m glad to see competition, but it just shows how far the Windows PC market has to go. Apple still has a pretty significant lead.

    • @Filtersloth
      @Filtersloth 1 month ago

      @@nikhilt497 the price might matter most to you, but not to everyone or every business.
      If I want the equivalent of an M2 Ultra chip in a Snapdragon chip, which one would I buy?
      Is there a roadmap from Qualcomm for a chip that will suit the needs of high-end video editing?
      There are a lot of businesses with money that will pay for equipment that suits their needs. They pay for it because that's what they need.

    • @Filtersloth
      @Filtersloth 1 month ago

      @@rainmannoodles actually I think the single-core performance of the M3 is better than the Snapdragon X Elite's.
      But everyone is doing benchmarks against an M3 MacBook Air, which only has 4 performance and 4 efficiency cores, while the X Elite has 10 performance cores, I think.

    • @iamwisdomsky
      @iamwisdomsky 1 month ago

      @@nikhilt497 what's the price for if you can't even use it for 100% of things? There are a lot of apps right now that do not work with Windows on ARM, even with Prism.
      On a Mac, on the other hand, you are guaranteed that everything works.
      I'd rather stick to my Mac for my peace of mind.

  • @Aikkiang
    @Aikkiang Před 11 dny

    Thank you for the video. Even though there are a lot of wrong information in it. These numbers like 3 nanometer or 5 nanometer are no longer describing physical size. It is a label for: in the old manufacturing process it would have the efficiency of 3 nm. The changed the process many years before. Back in the day lets say 32 nm, it described the length of a gate. Today it is only be used for comparison, to describe the advancement of the newer generations.