Everyone is racing to copy Apple. Here's why.

  • Uploaded 15 Jun 2024
  • Apple Silicon is changing how consumer tech is designed thanks to software-defined hardware blocks for heterogeneous compute. The end of the general-purpose CPU is near.
    “Can’t innovate anymore, my ass.”
    Follow me on Mastodon - snazzy.fm/mdn
    Follow me on Instagram - / snazzyq
  • Science & Technology

Comments • 417

  • @DibakarBarua_mattbusbyway
    @DibakarBarua_mattbusbyway Před rokem +334

    As a computer engineer who helps build ASICs for a living, I am not used to this level of nuance and understanding on tech YouTube. What a breath of fresh air. Your hypothesis on the need for, and consequent rise of, ASICs and the upcoming tight-loop software hindrance (verticality) is absolutely spot on. Loved this. ❤

    • @snazzy
      @snazzy  Před rokem +30

      Thanks so much!

    • @entx8491
      @entx8491 Před rokem +8

      He is one of the few YouTubers who appears to know what he is talking about.

    • @bluecement
      @bluecement Před rokem +3

      I agree, man; halfway through, I was like...interesting. I'm a Computer Engineer too.

    • @DeepFriedReality
      @DeepFriedReality Před 11 měsíci +1

      I want to do computer engineering. Might go to college this fall for it! Have any tips?

  • @shapelessed
    @shapelessed Před rokem +278

    At first, we used ASICs to perform all computation.
    Then came reprogrammable circuits, which later became modern CPUs, GPUs, controllers, etc...
    In the end, we will see a comeback of ASICs, as application-specific accelerators get integrated into our chips more and more.

    • @orion10x10
      @orion10x10 Před rokem +15

      Kind of like mainframes and cloud computing! Everything old becomes new again

    • @0xD1CE
      @0xD1CE Před rokem +12

      We may also see FPGAs being utilized for general computing. They're a good hybrid between ASICs and programmable chips. This ties in better with "software defined hardware", because that's pretty much exactly what they are (a toy LUT sketch follows this thread).

    • @treyquattro
      @treyquattro Před rokem

      @@orion10x10 that's the essence of computer engineering: constant wheel reinvention, but the wheel gets smaller and faster each iteration

    • @treyquattro
      @treyquattro Před rokem +3

      @@0xD1CE not so likely: they're relatively slow and why make the chip reprogrammable? Economies of scale say produce a specific-use device in huge numbers, but change it for the next release, much as nearly all silicon is produced now. Apple is showing the way forward with virtually nothing replaceable or upgradeable between versions, which is sad from a re-use and ecology perspective, but also necessary for future performance gains.

    • @0xD1CE
      @0xD1CE Před rokem +3

      ​@@treyquattro They're slower than ASICs yes... but they can be significantly faster at doing specific tasks than a general purpose CPU. Also this is already an ongoing research topic known as "reconfigurable computing". It may be more economic manufacturing a single chip that can be reconfigured to do different things than making thousands of different chips that's only good at one thing. In fact ever since the chip shortage, many car manufacturers are now using FPGAs in their cars rather than sourcing hundreds of unique chips.

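    The FPGA idea in this thread boils down to lookup tables: a logic element is a tiny truth table whose contents are loaded by software, so the "hardware" function can be redefined at run time. A toy C sketch of that idea (purely illustrative, not a real FPGA toolflow):

      /* A 2-input LUT: four truth-table bits selected by the inputs. */
      #include <stdio.h>
      #include <stdint.h>

      typedef struct { uint8_t truth; } lut2;   /* 4-bit truth table */

      /* "Program" the element by loading a new truth table. */
      static void lut2_program(lut2 *l, uint8_t truth) { l->truth = truth & 0xF; }

      /* Evaluate: inputs a and b select one bit of the table. */
      static int lut2_eval(const lut2 *l, int a, int b) {
          return (l->truth >> ((a << 1) | b)) & 1;
      }

      int main(void) {
          lut2 l;
          lut2_program(&l, 0x8);                      /* 1000b -> AND */
          printf("AND(1,1) = %d\n", lut2_eval(&l, 1, 1));
          lut2_program(&l, 0x6);                      /* 0110b -> XOR */
          printf("XOR(1,0) = %d\n", lut2_eval(&l, 1, 0));
          return 0;
      }
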
  • @MakeWeirdMusic
    @MakeWeirdMusic Před rokem +570

    Having a computer science degree, I was expecting some fluff and inaccuracies, but this is really good!

    • @raminMTL
      @raminMTL Před rokem +132

      I too have a computer science degree and confirm that his beard is fabulous

    • @Kelson01
      @Kelson01 Před rokem +46

      I don’t know why you were, Quinn has a good reputation.

    • @codyrap95
      @codyrap95 Před rokem +51

      As if CS degrees meant anything nowadays...

    • @totally_not_a_robot1342
      @totally_not_a_robot1342 Před rokem +19

      Gotta agree. I am studying computer architecture at university now and it’s nice to see a video aimed towards the general public that isn’t full of misleading or outright false analogies.

    • @seeranos
      @seeranos Před rokem +14

      @@codyrap95 they seem to for employers

  • @maxkofler7831
    @maxkofler7831 Před rokem +47

    Fun fact: the ML and video accelerator cores are already running in Asahi Linux for development. Even the speakers have worked for almost a year now, but have been disabled due to safety concerns (fear of blowing them up...). And there has been lots of progress in getting display out to work over USB-C. Asahi is just WILD

  • @Ivan-pr7ku
    @Ivan-pr7ku Před rokem +55

    Domain-specific acceleration seems to be the inevitable future of integrated computing, simply because of physics and economics. In the late 1990s we already gave the task of rendering 3D graphics to dedicated ASICs, and over time those GPUs themselves evolved to be much more versatile and programmable, yet still incorporating domain-specific logic blocks. The circle of technological evolution repeats in a different tune every now and then, but the general trend is what we see in the mobile market with ever more powerful SoC designs.

  • @yarnosh
    @yarnosh Před rokem +97

    Amiga did it first? At the time though they didn't have operating systems robust enough to abstract away these hardware accelerators so each application had to code to the bare metal, which was really painting themselves into a corner. Each new chipset had to support previous chipsets. And that just wasn't sustainable.

    • @VinceJames
      @VinceJames Před rokem +4

      Thank you! The whole time watching I kept wondering if Amiga would be mentioned. I actually got to see a real Amiga back in the 90's. Apple had years of experience making iPhones before taking on the task of building a whole computer that way. Amiga was essentially trying to leap 25 years into the future in one bound.

    • @gfabasic
      @gfabasic Před 11 měsíci

      The amiga didn't have a bus error handler or memory protection, it's a toy that pretends to be a computer.

    • @AndrewRoberts11
      @AndrewRoberts11 Před 11 měsíci +1

      FYI: the 1981 Sinclair/Timex ZX81 was the first consumer computing device to utilise ULAs to offer assorted bespoke functionality. Sinclair programmed a single Ferranti ULA with the logic to replicate 18 discrete ICs that had featured in the previous year's ZX80, saving the company many pennies, and requiring repairers to source replacements from Sinclair so it could make a few pennies more.

    • @yarnosh
      @yarnosh Před 11 měsíci

      @@gfabasic No personal computer had that at the time. The PC was primarily DOS-based up through 1995. Talk about a joke of an operating system. The difference is that DOS programs didn't lean on hardware acceleration. Sound and video support was generally pretty limited to begin with, so it wasn't hard to run those programs on newer hardware. PCs were DOS compatible up through 2010, at least.

    • @gfabasic
      @gfabasic Před 11 měsíci

      @@yarnosh Commodore cut corners and specifically left these features out to cut costs. There are machines before it that have the required hardware and functionality. Step out of your fanboy bubble for a moment and do some research, it isn't hard.

  • @budgetkeyboardist
    @budgetkeyboardist Před rokem +44

    VERY interesting content. In audio recording there are a lot of boxes by Universal Audio and others that use custom CPUs for custom tasks. They're state of the art, but they're basically no faster than a Core 2 Duo - they're just built for a specific purpose.

    • @arturomonterroso8277
      @arturomonterroso8277 Před rokem +6

      Yes, and staying in the music world, a lot of hardware modellers like the AXE FX and Helix pedals use dedicated processors as well.

    • @budgetkeyboardist
      @budgetkeyboardist Před rokem +1

      @@arturomonterroso8277 Yes! I have an HX Stomp sitting right next to my Apollo Solo. They both have a Sharc CPU. Actually they both cost similar money.

    • @AndrewEbling
      @AndrewEbling Před rokem +2

      I've been hoping (for about the past decade) that GPUs would eventually grow production grade audio processing capability, so it would be possible to offload physical modelling softsynths and convolution reverbs to HW and leave the CPU free for other stuff.

    • @budgetkeyboardist
      @budgetkeyboardist Před rokem +1

      @@AndrewEbling Anything that frees up the CPU would be very, very great.

  • @DavidGoscinny
    @DavidGoscinny Před rokem +108

    That is how Nintendo used to design their consoles at one point. Figure out and decide what the console will be able to do and from there decide on the hardware that will let them do what they envisioned.

    • @ridgero
      @ridgero Před rokem +11

      Every console (except the Xbox) did this

    • @Z4KIUS
      @Z4KIUS Před rokem +1

      @@ridgero xbox is not a console though, it's more akin to an iPhone than to PS2

  • @erictayet
    @erictayet Před rokem +9

    Nice video, but I'm not sure everyone is racing to copy Apple's M-series SoCs with dedicated ASICs. Rather, I think the move is to add general-purpose blocks that accelerate those workloads, like AVX for SIMD matrix calculations, and then gradually add more and more cores, like Intel 13th-gen and AMD Zen 5 efficiency cores, to run massively parallel calculations that don't need high clock speeds.
    As someone who's been in this industry for 30+ years and having seen the ebbs and flows of CPU architectures, my issue with ASIC is obsolescence and compatibility.
    Compatibility: the Apple M2 still doesn't support AV1, while Nvidia/AMD/Intel GPUs, the Snapdragon 8 Gen 2 and the Mediatek Dimensity 1000 do. The M2 also doesn't support hardware ray tracing or mesh shaders, so most modern games wouldn't look as good as they can (if they can run at all). And no, you can't emulate those RT cores or that GPU pipeline in the M2 graphics system.
    Another ASIC disadvantage is silicon limitation. The Image Signal Processor (ISP) restricts how much processing can be done for a particular image sensor or, in today's context, sensor package. For example, Vivo phones newer than the X70 have a dedicated V1 co-processor alongside the mighty Snapdragon 8 to handle low-light scenes, because the SD8's ISP + AI cores aren't powerful enough. (Most likely the Snapdragon frame depth is too shallow for frame stacking. The Vivo X70 and later X-series all produce incredible low-light photos that are better than the Galaxy S23 and iPhone 14.)
    ASICs are built with a rather inflexible buffer, if any at all. Many ASICs use unbuffered buses to connect to other components, like ISP to sensors. If the sensor package is not connected to the CPU via buffers, it is not possible for software developers to access RAW sensor data; they can only use data from the ISP (RAW images != raw data). So the capability of the camera is fixed at the design stage of the silicon. ASICs also can't access main memory because they don't have a memory controller. The interconnect between the CPU and ASICs also costs silicon real estate, which was a major issue with the AMD Radeon 7000 GPUs' Infinity Fabric when they moved to an MCM design.
    On the flipside, you can decode AV1 videos on modern Intel/AMD CPUs BUT it'll take a comparatively huge amount of power running off the SSE units in CPU. I wonder if the ARM CPUs on the M2 can do it.
    Old formats like QuickTime, AVI, Realmedia, WMA, etc., will not be playable if the SoC maker takes out the ASIC and doesn't supply compatible libraries. Sure, cross-platform players like MPC/VLC will probably support playback of old formats using ALU/FPU/DSP but you lose the advantage of lower power consumption and no, most people aren't going to transcode their old files to the new supported format because transcoding loses image quality and is a hassle in general.
    As for emulation? You cannot emulate more powerful hardware and expect playable FPS. So you can emulate old platforms/games, but modern DX12 games on the M2? Forget it. The M2 probably has 1,536 ALUs in its graphics system, compared to the RTX 4090's 16,384 CUDA cores, each with 1 ALU + FPU.
    There's a lot of overhead for x86->ARM emulation. Rosetta is fantastic but there's only so much it can do. For x86 Windows/Linux/ChromeBook, it's not even emulation since it's all x86 + iGPU. It's like Hypervisor or Container with very little overhead.
    So IMO, general-purpose CPUs/GPUs aren't going away. And that's why you don't see any companies in the laptop space following Apple. The advantages of ASICs in the laptop space are minimal. There are many ways to accelerate certain functions in the CPU/GPU without resorting to an ASIC, like making SSE/AVX instructions faster. You do this by lengthening the registers, adding more transistors performing operations on the registers, increasing buffer depth and increasing L1/L2 cache.
    For Mobile, Apple/Qualcomm/Samsung/Mediatek and other ARM licensees will continue to utilise ASIC for power hungry functions because their ALU/FPU/DSP are weak due to thermal ceilings. But these SoC become obsolete in less than 5 years due to emerging technologies. Since battery size is the biggest limiting factor, ASIC makes sense due to the reduction in power draw so I think mobile is the perfect use-case for ASIC.
    For laptops, I will never buy an Apple Mac now because I know that M1/M2 Macs will never be able to run modern DX12 games or program in Unreal Engine 5.2. Even if you translate DX12 calls to Vulkan/Metal calls, the hardware just isn't there in the M2 SoC to be able to process all that data.
    If you managed to finish reading this, pat yourself on the back! There is much more info, but this comment is too long already and I've spent too much time. [putting on flame suit]
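
    A minimal C sketch of the "make SSE/AVX faster by widening the registers" point above: one AVX2/FMA iteration below processes 8 floats at a time, which is how general-purpose cores speed up media/AI-style loops without a dedicated ASIC. Assumes an x86 CPU and compiler flags with AVX2 and FMA enabled (e.g. gcc -O2 -mavx2 -mfma):

      #include <immintrin.h>
      #include <stdio.h>

      /* Dot product using 256-bit AVX2/FMA registers: 8 floats per step. */
      static float dot_avx2(const float *a, const float *b, size_t n) {
          __m256 acc = _mm256_setzero_ps();
          size_t i = 0;
          for (; i + 8 <= n; i += 8)
              acc = _mm256_fmadd_ps(_mm256_loadu_ps(a + i),
                                    _mm256_loadu_ps(b + i), acc);
          float lanes[8], sum = 0.0f;
          _mm256_storeu_ps(lanes, acc);
          for (int k = 0; k < 8; k++) sum += lanes[k];   /* horizontal reduce */
          for (; i < n; i++) sum += a[i] * b[i];         /* scalar tail */
          return sum;
      }

      int main(void) {
          float a[16], b[16];
          for (int i = 0; i < 16; i++) { a[i] = (float)i; b[i] = 2.0f; }
          printf("dot = %f\n", dot_avx2(a, b, 16));      /* expect 240.0 */
          return 0;
      }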

  • @Crusader1089
    @Crusader1089 Před rokem +28

    I've felt for at least a decade that software engineers have been letting things slide with regard to efficiency. Even Python itself is basically a usability upgrade over C that pays for itself by losing efficiency. When you see how well third-party code can compress and streamline Windows 11, and when you think about how little productivity gain there has been from Office 95 to 365 despite exponentially higher spec requirements, I think the situation will be improved not only by hardware and software working together in closer tandem but also by software engineers working harder to make their code more efficient and streamlined.
    I really don't want to go back to the late 80s and early 90s, when the IBM PC, the Macintosh and the Amiga all used custom hardware to squeeze out unique software experiences.

    • @donkey1271
      @donkey1271 Před rokem +9

      The issue is that companies are pushed to develop new features rather than improve the efficiency of their code. People wouldn't care if say Apple or MS put the work in to streamline their software because modern systems are sufficiently powerful to handle the tasks.
      It legitimately blows my mind how swollen software has become though, the PS2 had only 4mb of VRAM and 32mb of system memory and that thing was a champ.

    • @redpillsatori3020
      @redpillsatori3020 Před rokem

      Hopefully people using tools like ChatGPT to develop Rust apps will speed things up. Many desktop apps use Electron, which is very memory inefficient, but maybe more devs will start building desktop apps using Tauri and Rust.

    • @morgan0
      @morgan0 Před rokem

      if someone needs chatgpt to do it for them, i doubt they really care enough to learn how to make optimizations

    • @oo--7714
      @oo--7714 Před 11 měsíci

      @donkey1271 The PS2 wasn't a champ, it was weak, and as graphics increase exponentially, the returns decrease

    • @Crusader1089
      @Crusader1089 Před 11 měsíci

      @@oo--7714 his argument was that the PS2 was weak and still had incredible looking games

  • @nanduri79
    @nanduri79 Před rokem +1

    Wish u uploaded more!! I always learn something cool & new from ur vids

  • @alexlexo59
    @alexlexo59 Před rokem +66

    If only apple open sourced the m1 drivers :/ (that's never gonna happen but imagine how much more enjoyable it would've been gaming on an M1 without much tweaking)

    • @fuefme9332
      @fuefme9332 Před rokem +1

      Reasons why apple sucks
      1. Preventing software downgrades on iphones by not allowing unsigned ipsws to be installed on iphones.
      2. Making Photos Libraries incompatible across different macOS versions (forcing you to purchase iCloud to transfer photo libraries).
      3. Remotely Disabling and Bootlooping iphones on older ios forcing them to update and erase the phone.
      4. Not allowing newer backups to be restored to older macos/ios. On mac os you would have to restore everything manually.
      5. Disabling older applications that use internet once newer versions are available that are only compatible with newer OS forcing you to update.

    • @alexlexo59
      @alexlexo59 Před rokem +3

      Yup, but the M1 and M2 chips are just SOO GOOD for a laptop. That's why I REALLY love what the Asahi team is doing by reverse engineering the drivers and giving a better life to those laptops.

  • @varunbobba9247
    @varunbobba9247 Před rokem +5

    Quality time and time again... Just Brilliant

  • @peterjansen4826
    @peterjansen4826 Před rokem +31

    Sadly, companies tend to copy all the bad things from Apple while neglecting what it is that truly sets Apple apart. Like how Acer copied the one-mouse-button design, but without having the same quality of touchpad and touchpad software, and on a thick laptop, while Apple only removed the two-mouse-button standard because they wanted to keep making the laptop thinner. Dumb; that one mouse button is a flaw, it's just that Apple did the rest well enough to get away with it. Another example: the gluing/soldering of RAM and SSDs. That is something you should not copy, most customers don't like it. Apple just gets away with it because of their strong marketing.

    • @udance4ever
      @udance4ever Před rokem +8

      while I share yr frustration - sounds like you may have missed the point: you put everything on a single chip for the performance gain - it's a tradeoff the industry is embracing at the cost of e-waste unfortunately.

    • @haomingli6175
      @haomingli6175 Před rokem +1

      Soldering the RAM and SSD originally just lowered the production cost and made it slightly lighter and less bulky. Then Apple's unified memory sort of required the soldering of the memory to the board. The SSD is one thing Apple uses to make money. The one redeeming quality is that Apple's SSDs are really fast.

    • @Teluric2
      @Teluric2 Před rokem +1

      ​@@haomingli6175lowered production cost? Since when you have access to that data ?

    • @haomingli6175
      @haomingli6175 Před rokem +3

      @@Teluric2 I mean it is pretty obvious why. This way uses less material and is easier to manufacture.

    • @morgan0
      @morgan0 Před rokem +1

      soldered ram and ssds has higher bandwidth iirc, there’s more wires between it and the cpu, but maybe i’m misremembering

  • @Toothily
    @Toothily Před rokem +15

    Not a big issue, _IF_ there’s wide adoption of standards-based APIs (Vulkan, OpenCL, etc).
    Great video Quinn, but you missed a part of the story. Apple actually invented OpenCL and made it available as a vendor-agnostic heterogeneous computing platform (in 2009). Years down the track they've now abandoned it, while the rest of the industry supports it (a small OpenCL sketch follows this thread). It's a story, just like Metal vs Vulkan/OpenGL, of how Apple has completely shifted gear from open to proprietary frameworks. That mentality is not the Apple I used to know.

    • @darwiniandude
      @darwiniandude Před rokem

      Apple has been bitten many times by not owning the tech. Sure KHTML from KDE Konqueror was forked to make WebKit, but Apple poured much time and resources into WebKit over the years and it was brilliant. So great in fact, Google used it to make a new browser, Chrome. Then Google forked it and went down their own path and now Google has a huge monopoly on browser engines much like Internet Explorer once had before WebKit.

    • @eng3d
      @eng3d Před rokem +3

      I saw a video where a $2000 PC beats the most powerful Mac on the market. Apple is making the same mistake as before: proprietary technology that nobody uses but them; then it finds itself uncompetitive and loses the race against the PC. Apple's chips are impressive, but Apple can't do anything better. M1 good, M2 same as M1 with some minor features, M2 Pro and M2 Max are the same with more cores. The M3 will have more cache and cores and not much else, and they are still beaten by AMD and Intel on desktop

    • @Toothily
      @Toothily Před rokem +1

      @@eng3d I agree, their SoCs are great for most consumers, but you can't deny a Threadripper based Mac Pro would be a *beast* and also it would actually have PCIe ports lmao. Only with Apple would that be a luxury.

    • @darwiniandude
      @darwiniandude Před rokem +3

      @@eng3d First conundrum - show me a PC a laptop which outperforms Apple Silicon MacBooks without being plugged into the wall. How many hours does that run for? Ok, next little test - which smartphones are powered by AMD or Intel providing superior performance and battery life to iPhone? Third little puzzle - over time is the industry moving towards fixed in place large desktop workstations, and away from portable computers, or is the trend towards smaller and more portable computing devices, wearables, some speculating the end goal being a pair of AR ‘sunglasses’ and so on? In this third consideration, which architecture looks more promising, X86 or ARM? Final note… I have a great big Xeon workstation. ECC ram, lots of cores, raid SSDs etc. Now where I live the electricity used to run this behemoth costs enough to purchase an M2 Mac mini per year. For some tasks the M2 Mac mini is slower, for some it is faster. However it uses almost no energy in comparison. The Mac Studio I ended up with completes my work WAY faster, and will pay for itself in under 3 years. So it’s like a free computer upgrade, vs simply using the Intel machine I already own. Competition is a great thing. I’ve built dual processor Intel boxes in the 90’s, then AMD wiped the floor with them for awhile and I built AMD boxes. Then Intel’s Core architecture was brilliant for awhile. Paul Otellini stupidly refused Steve Job’s request to make ARM chips for iPhone. Intel had the ARM license then, but they sold it, convinced of X86’s superiority. Paul said when he retired that refusing to make the chips for iPhone was his single biggest regret at Intel. He couldn’t see how the numbers would work, with what Apple wanted to pay per chip. But iPhone was a vastly larger chip business than he’d thought it would be, the numbers would’ve worked. Now Intel has to live with that decision. Apple bought PA Semi in 2007 or so and started the long process of the transition away from Intel. It was decided for certain back in 2011, that they’d leave Intel behind.

    • @Toothily
      @Toothily Před rokem

      @@darwiniandude cool story
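
    A small C sketch of the vendor-agnostic OpenCL model mentioned above: the same few calls enumerate whatever CPUs, GPUs or accelerators the installed platforms expose. On macOS the header is <OpenCL/opencl.h> and the API is deprecated but still shipped; elsewhere link against an ICD loader with -lOpenCL:

      #include <stdio.h>
      #ifdef __APPLE__
      #include <OpenCL/opencl.h>
      #else
      #include <CL/cl.h>
      #endif

      int main(void) {
          cl_platform_id plats[8];
          cl_uint nplat = 0;
          if (clGetPlatformIDs(8, plats, &nplat) != CL_SUCCESS) return 1;

          for (cl_uint p = 0; p < nplat; p++) {
              cl_device_id devs[8];
              cl_uint ndev = 0;
              if (clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_ALL, 8, devs, &ndev) != CL_SUCCESS)
                  continue;
              for (cl_uint d = 0; d < ndev; d++) {
                  char name[256] = {0};
                  /* CL_DEVICE_NAME works the same for a CPU, GPU or accelerator. */
                  clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof name, name, NULL);
                  printf("platform %u, device %u: %s\n", p, d, name);
              }
          }
          return 0;
      }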

  • @jamesdoestech9591
    @jamesdoestech9591 Před rokem

    amazing video dude!!! super well spoken and thought out, KEEP THAT WORK UP!!!!

  • @rodrigomartindelcampo9534

    Very accurate on the engineering side, and the insights and predicted future trends are very well thought out! Good video!

  • @jtmathen
    @jtmathen Před rokem

    This is the first video I have seen from your channel. Instant sub: great insights and cadence, with digestible high- and low-level explanations.

    • @snazzy
      @snazzy  Před rokem

      Thanks, and welcome!

  • @hermanwooster8944
    @hermanwooster8944 Před rokem +2

    I've been soaking up information on ARM, and this video still informed me. Excellent work. General computing can't compete with specialized hardware. What this means for software compatibility is another story. I suspect some level of standardization will crystalize and more choice will form again.

  • @JacksMacintosh
    @JacksMacintosh Před rokem

    Love this video! Great pacing, chock full of information as always

  • @tamrat_assefa
    @tamrat_assefa Před 11 měsíci

    The content I find here is just incredible. Great work.

  • @citywitt3202
    @citywitt3202 Před rokem +26

    I predict that within five years, particularly with chips like the M1 Ultra, we’re going to see AI models which can send every possible signal to a piece of hardware, log the response, and write a driver for you as a starting point.

    • @mskiptr
      @mskiptr Před rokem +5

      That's kinda something I'd want to build (high chance I'll try to make this my master's thesis - I'm far away from that right now), but frankly the currently mainstream AI models aren't suitable for this task at all.
      When it comes to language models (like GPT-3), they specialize in generating text that would be plausible for someone (on the internet) to say. But they don't really do any reasoning, though… I guess the best shot we have at making AI do reverse-engineering is via reinforcement learning - not with the typical approaches (e.g. neural networks), but with something closer to fuzzing plus tweaking a precise mathematical model of the hardware (a toy fuzz loop is sketched after this thread).

    • @citywitt3202
      @citywitt3202 Před rokem +2

      @@mskiptr I was wondering about that too. Couldn't you create a conversational AI by starting with basic principles, teaching it the alphabet of a language, teaching it to make words from letters, then rapidly teaching it how to read, write, listen and comprehend, then feeding it a learning package similar to the curriculum we already have in schools, including modules on research that would allow it to be linked up to the internet? It would take a lot less power IMO, and the data requirements at stage one would be way less. Or has this already been done?

    • @mskiptr
      @mskiptr Před rokem +3

      ​@@citywitt3202 It would be definitely more effective than the _tons of labeled examples_ way deep learning works. The problem is deep neural networks don't learn like that. I haven't heard of anyone training parts of a neural network separately, but even though it would probably be a bit more efficient than end-to-end learning, it would require significantly more manual labor.
      It's just that neural networks don't develop understanding in any meaningful way. They are excellent at discovering certain patterns in their input and they can be easily applied in various distinct domains, but if we want our ML model to perform reasoning we need to use something else.
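
    A toy C sketch of the "send signals to the hardware and log the response" loop this thread is describing, roughly what reverse-engineering harnesses automate. The register window address is made up for illustration; real work (for example Asahi's m1n1-based tooling) drives probing from a controlled harness rather than poking a live device like this:

      #include <stdio.h>
      #include <stdint.h>
      #include <fcntl.h>
      #include <unistd.h>
      #include <sys/mman.h>

      #define REG_BASE 0xFEDC0000UL   /* made-up MMIO window for illustration */
      #define REG_SPAN 0x1000UL

      int main(void) {
          int fd = open("/dev/mem", O_RDWR | O_SYNC);   /* needs root */
          if (fd < 0) { perror("open /dev/mem"); return 1; }

          volatile uint32_t *regs = mmap(NULL, REG_SPAN, PROT_READ | PROT_WRITE,
                                         MAP_SHARED, fd, REG_BASE);
          if (regs == MAP_FAILED) { perror("mmap"); return 1; }

          /* Poke each register with a pattern and log what reads back. */
          for (size_t off = 0; off < REG_SPAN / 4; off++) {
              uint32_t before = regs[off];
              regs[off] = 0xA5A5A5A5u;
              uint32_t after = regs[off];
              printf("+0x%03zx: %08x -> %08x\n", off * 4, before, after);
              regs[off] = before;                       /* restore */
          }

          munmap((void *)regs, REG_SPAN);
          close(fd);
          return 0;
      }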

  • @Lordvibrane
    @Lordvibrane Před rokem

    How in the world do you only have 1 mil subs ... too much fun watching your stuff!

  • @kendrickpi
    @kendrickpi Před 11 měsíci

    Thank you for your contribution, another on Spatial Computing cross platform compatibility would be of interest!

  • @katrinabryce
    @katrinabryce Před rokem +5

    In Windows, those equivalents could be provided by Intel, AMD, or NVidia, or, in many cases, by a choice of two of them.
    They are not binary compatible, but software generally still manages to work on all three; or at least on AMD or NVidia in situations where the Intel option isn't powerful enough. This is because the graphics / ml / etc libraries the software developers use have versions compatible with all of the hardware options, and can transparently detect which one to use at run-time or install-time.
    For that reason, I wouldn't be too worried about fragmentation in the long term.
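
    A minimal C sketch of the run-time detection the comment above describes: probe what the machine offers once, then route later calls through the matching backend. Real libraries do this for GPU vendors; this toy version only switches on a CPU feature via GCC/Clang's __builtin_cpu_supports():

      #include <stdio.h>

      static void blur_avx2(void)     { puts("blur: AVX2 path"); }
      static void blur_portable(void) { puts("blur: portable C path"); }

      typedef void (*blur_fn)(void);

      /* Pick a backend once, based on what the CPU reports. */
      static blur_fn pick_backend(void) {
      #if defined(__GNUC__) && (defined(__x86_64__) || defined(__i386__))
          __builtin_cpu_init();
          if (__builtin_cpu_supports("avx2"))
              return blur_avx2;
      #endif
          return blur_portable;
      }

      int main(void) {
          blur_fn blur = pick_backend();   /* detect at run time, once */
          blur();                          /* later calls hit the chosen path */
          return 0;
      }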

  • @Piplodocus
    @Piplodocus Před rokem

    Once again a very good, reasoned and informative vid. Stay snazzy. 👍

  • @maxmouse3
    @maxmouse3 Před rokem +11

    Great video. The Asahi people are insane(ly brilliant).
    I'm trying to follow their work! I'm very excited!

    • @udance4ever
      @udance4ever Před rokem

      have you found a definitive overview of Asahi that is broad yet still technical?

    • @mskiptr
      @mskiptr Před rokem

      @@udance4ever Their wiki has plenty of info.
      Besides that, you can dig through Marcan's toots (treehouse social), tweets (web archive only) and plenty of driver submissions and related discussion on LKML (lore kernel)

  • @acubley
    @acubley Před rokem

    Love it when you do these deep dives. /bravo

  • @Morningbell5
    @Morningbell5 Před rokem

    great video, really informative and interesting to watch, thanks

  • @dmug
    @dmug Před rokem +28

    What I worry about is the right to repair. Apple moving the SSD controller into the SoC (and T2) has been pretty terrible for consumers, as the one component most likely to die is tethered to the Secure Enclave. So far there haven't been any real advantages: there's hardware-encrypted NVMe, and Apple doesn't have any performance gains to show for it.

    • @bapt_andthebasses
      @bapt_andthebasses Před rokem +13

      "we don't care, bro" - Apple

    • @dmug
      @dmug Před rokem +3

      @@bapt_andthebasses Accurate

    • @rohankumarpanigrahi7475
      @rohankumarpanigrahi7475 Před 11 měsíci

      Buy the new best-ever MX-based Apple machine - Apple

    • @catchnkill
      @catchnkill Před 9 měsíci

      @@bapt_andthebasses Not true. Apple cares that their technology is proprietary and that you must buy from them. They care that there is no upgrade path, because they want you to buy a new one instead of upgrading.

    • @bapt_andthebasses
      @bapt_andthebasses Před 9 měsíci

      @@catchnkill ?

  • @-.Oz.-
    @-.Oz.- Před rokem

    Your M1 description reminds me of the PS5’s APU. It has dedicated chiplets for sound, decompression, Ray tracing and more.

  • @inlinesk8r477
    @inlinesk8r477 Před rokem +3

    Asahi is actually a really interesting case rn, Lina is a fricking genius and programmer god

  • @francissreckofabian01
    @francissreckofabian01 Před rokem +1

    Do they have to be so small? Can't they be a bit bigger with good performance? I recently replaced an MP which was so heavy to lift. There would have been room for a lot more processing if they wanted.

  • @mariekiraly100
    @mariekiraly100 Před rokem

    Hi there. I JUST discovered your AMAZING videos!!! I can't tell you how much this will help me! I'm also going to forward to everyone I know and put up on FB! I have a quick question. What is the cheapest fax app for my iPhone? I see "free" ones, but then you upload and find out it's $25.00 or fees for each page faxed or rec'd - I want the CHEAPEST! THANKS SO MUCH!!!!

  • @stachowi
    @stachowi Před rokem

    This is the kind of content i love... more please.

  • @LostieTrekieTechie
    @LostieTrekieTechie Před rokem

    This was very thorough and interesting. Thank you for making this

  • @KaneLivesInDeath
    @KaneLivesInDeath Před rokem +2

    I find it funny how Intel has no plans to ditch x64, which has pretty much peaked in 2020, yet ARM64 has yet to reach its halfway point.

  • @WarriorsPhoto
    @WarriorsPhoto Před rokem +8

    Greetings Quinn. 😮🎉
    Personally I'm impressed with modern-day Apple chips, whether M or A variant. The performance for our modern lifestyles is very impressive. Plus they run cool compared with other chips on the market. 🎉❤😊
    Let's hope the hardware encoders get better and better over time. 😮

    • @dinozaurpickupline4221
      @dinozaurpickupline4221 Před rokem

      Why doesn't the code review community or devs say something about this? Why aren't there separate departments to figure out the efficiency of code, or the most efficient method to do a particular thing? This isn't high school, it's corporate; mediocre code shouldn't be accepted unless there's some compatibility issue

  • @lcarliner
    @lcarliner Před rokem

    I believe that the first instance of software-defined computer architecture was embodied in the Burroughs B5000 system for ALGOL 60! It was also one of the very first embodiments of virtual storage and paging! The core system software was written in an enhanced ALGOL 60 compiler, the number of systems programmers and engineers was less than a couple of dozen, and it was noted for unusually bug-free and efficient implementations!

  • @TechSowa
    @TechSowa Před rokem

    Sheesh, you never miss. Another great video

  • @siontheodorus1501
    @siontheodorus1501 Před rokem +5

    I believe this move is going to be okay on mobile or embedded devices. But it will be hard for desktop, with lots of hardware configurations and a very broad range of applications. Or a desktop chip can be like Apple Silicon too, but at the cost of a very expensive chip and maybe losing the ability to do any overclocking.

  • @derickndossy
    @derickndossy Před rokem +1

    In love with computer engineering

  • @vladm3
    @vladm3 Před rokem

    Great video. Very interesting topic

  • @MyReviews_karkan
    @MyReviews_karkan Před rokem +1

    Marcan and Asahi Lina are actually wizards who fly on brooms. I've seen them with my own eyes.

  • @DenverHarris
    @DenverHarris Před 11 měsíci

    Excellent analysis!

  • @supdawg7811
    @supdawg7811 Před rokem +2

    There is a limit to specificity when it comes to frameworks. Developers want standard mathematical and logical functions, which means that APIs will be very similar to each other. This means that a standardized API will always be achievable, and likely preferable.

  • @El.Duder-ino
    @El.Duder-ino Před rokem +1

    HUGE KUDOS and RESPECT to the Asahi Linux team! Amazing job folks, great report btw👍

  • @DigitalNomadOnFIRE
    @DigitalNomadOnFIRE Před rokem

    This is basically the architecture of the Amiga back in the 80s.
    Specialised custom chips galore.
    It was so ahead of its time.

  • @Cevheroglu
    @Cevheroglu Před rokem

    Could not agree more. Compatibility is the future. I was thinking about this lately and wanted to invest in a company on the stock market that provides it (nope, VMware doesn't count), but couldn't find anything more than Kickstarter projects.

  • @DeepFriedReality
    @DeepFriedReality Před 11 měsíci

    Such an informative video. I’ll sub!

  • @bryans8656
    @bryans8656 Před rokem

    Thanks for this interesting perspective.

  • @AkashYadavOriginal
    @AkashYadavOriginal Před rokem +2

    I also fear these hardware-dependent upgrades are coming to our PCs. The M1 as a standalone processor isn't really as fast if we take away the specialized accelerators. So when Apple makes new and improved accelerators in their future M-series CPUs, older Macs would be unable to upgrade, as their hardware won't be compatible with the software. If Windows 11's demanding hardware requirements are being criticised now, soon every major upgrade of Windows would require new hardware.

  • @hishnash
    @hishnash Před rokem +6

    As to whether M1/M2 Vulkan drivers will provide the same compatibility as AMD's handheld, that is still up in the air. The GPU architecture Apple is using (based on PowerVR) is very different, so the set of optimal Vulkan APIs that would be exposed by a Vulkan driver for these GPUs is very different from those exposed on AMD GPUs. The reason the Steam Deck is able to provide highly performant compatibility from DX to Vulkan is that these games have already been written to run on AMD's GPUs through DX, so they are already tuned for the underlying hardware features even if the API interface to them is a little different. Vulkan is not a write-once-run-anywhere API like OpenGL; the entire point of Vulkan is that the driver gets out of the way and the developer can (and must) code for the GPU in question.
    Currently the only gaming pipelines that are TBDR-aware on Vulkan are those that run in mobile Android games, and that is a very long way from the set of APIs the DXVK translation layer expects. Sure, they could provide a load of shims (some as GPU compute passes, others might even need to be CPU-based), but these could lead to some big perf hits if games run assuming a feature is optimal in hardware when in fact it is 100x slower, since it is done in software on Apple GPUs emulating an AMD GPU.

  • @barfourkyei3469
    @barfourkyei3469 Před rokem

    A battle between proprietary and open source, if you ask me. Also, how far can Apple Silicon go in advancement in relation to Moore's law? Very insightful video nevertheless, I must say... thanks!

  • @thebeardedmanband9074
    @thebeardedmanband9074 Před 11 měsíci

    Loving the beard brother

  • @hishnash
    @hishnash Před rokem

    Rosetta 2 can translate 32-bit x86 (even 16-bit); the issue with running 32-bit applications is that the kernel interface macOS exposes is 64-bit only. So 32-bit macOS games can't be translated, as they still expect 32-bit address-space offsets when calling system APIs etc. However, apps that run through Wine and CrossOver can be run, since Wine exposes a fake Windows kernel API to the Windows game and then switches into 64-bit mode before calling the corresponding macOS kernel APIs. This is in fact also what they are doing on Linux when running 32-bit apps, since the Linux kernel on Apple Silicon cannot support 32-bit address-space operations (Apple Silicon is strictly 64-bit only).

  • @mckidney1
    @mckidney1 Před rokem +11

    Great video, but it does not combat the misinformation about normal CPU and GPUs being just faster. They evolve too and extensions and revisions of x86 come up as well.

    • @bradhaines3142
      @bradhaines3142 Před rokem

      hopefully we just get past x86

    • @1idd0kun
      @1idd0kun Před rokem

      ​@@bradhaines3142
      Why though? It's not like ARM is much better. Most of the efficiency in ARM chips comes from specific design philosophies, like a wider execution pipeline, rather than the instruction set. Meaning x86 can be just as efficient if it adopts some of those design philosophies, and AMD and Intel are already starting to do just that.

    • @Lee.S321
      @Lee.S321 Před rokem +1

      @@1idd0kun When you see laptops like the ASUS S13 (AMD 6800U) having similar (& in some areas better) performance & efficiency than the latest M2 Mac Air, you do wonder why there's such a push for the entire world to transition from x86 to Arm for marginal gains at best.

  • @pupperemeritus9189
    @pupperemeritus9189 Před rokem +2

    I hope in the race of having specialized SoCs others don't follow the same path to overly proprietary, unrepairable and unexpandable devices with completely locked down software.

  • @sam_metal
    @sam_metal Před rokem

    Thank you for showing me that a mountaineering video game exists!!

  • @MaxPrehl
    @MaxPrehl Před rokem

    This is a fantastic video Snazzy.

  • @benderbi
    @benderbi Před rokem

    Amazing video, as always 👏🏼👏🏼👏🏼👏🏼

  • @maksl.
    @maksl. Před rokem +6

    Hey Quinn! Kinda off topic, but I wanted to take the time to thank you for your numerous tutorials. They helped me out a lot since I am a new Mac user :)
    PS: I'd love for you to do another video on terminal tips and tricks, since some of the info seems outdated; for example, I couldn't get youtube-dl to work :(

  • @remigoldbach9608
    @remigoldbach9608 Před rokem +3

    The problem I see is when the technology becomes obsolete. For example, AV1 seems to be the right direction: hardware encoders for H.264 and H.265 will become obsolete and useless… the performance you had is gone for the new AV1 encoding (see the encoder probe sketched after this thread).
    Great video btw!

    • @0xbenedikt
      @0xbenedikt Před rokem

      This will be exactly why it will not become obsolete any time soon

    • @remigoldbach9608
      @remigoldbach9608 Před rokem

      @@0xbenedikt what do you mean ?

    • @alexmeek610
      @alexmeek610 Před rokem

      Or you get situations like the one apple found itself in where it thought the industry would slowly move away from OpenGL

    • @0xbenedikt
      @0xbenedikt Před rokem

      @@remigoldbach9608 H264 and H265 will remain Hardware supported for decades to come given the huge library of content encoded in it. They’ll just add AV1 support eventually.

    • @remigoldbach9608
      @remigoldbach9608 Před rokem

      @@0xbenedikt I'm not that sure it will be relevant in the future, but only time will tell.
      Thanks for your reply with the explanation!
      We are living in interesting times!
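
    A hedged C sketch for the codec discussion in this thread: asking a local FFmpeg build which encoder names it actually exposes. The names are assumptions that vary by platform and build ("h264_videotoolbox" exists in macOS builds; the AV1 hardware name may not exist at all), so "not found" only means "not in this build". Link with -lavcodec:

      #include <stdio.h>
      #include <libavcodec/avcodec.h>

      int main(void) {
          const char *names[] = {
              "libx264",            /* software H.264 */
              "h264_videotoolbox",  /* Apple hardware H.264 (macOS builds) */
              "libaom-av1",         /* software AV1 */
              "av1_videotoolbox",   /* hypothetical hardware AV1 name */
          };
          for (size_t i = 0; i < sizeof names / sizeof names[0]; i++) {
              const AVCodec *c = avcodec_find_encoder_by_name(names[i]);
              printf("%-18s %s\n", names[i], c ? "available" : "not in this build");
          }
          return 0;
      }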

  • @adam872
    @adam872 Před rokem

    When I was first studying computing in the late 80's and early 90's, we had ASICs doing useful stuff in silicon like RAID for storage arrays or digital signal processing (the Motorola 56k being a notable example). It's interesting to see us return to this after decades of doing a lot of these tasks on general purpose processors. I personally loved the era of specialised processing hardware on workstations (like the NeXT cube or SGI range), so to get back to it like Apple have done makes me happy.

    • @Teluric2
      @Teluric2 Před rokem

      You can't put SGI and Apple in the same box. Macs are a cheap SGI copy.

    • @adam872
      @adam872 Před rokem

      @@Teluric2 I'd say on the workstation side they're quite analogous. Both make (or made, in the case of SGI) high-end workstation hardware that runs/ran Unix and use ASICs for specific jobs. SGI just so happened to make HPC server kit too. Great stuff it was as well.

  • @morgan0
    @morgan0 Před rokem

    i think we’re entering an age of processors where we make it take up more space, more transistors, to get more speed. i think we might also see some restructuring to reduce the time lost from signal transmission, less back and forth and instead moving the data around more efficiently. and part of that is making it larger by having more functional units, so more can be done in parallel. i have more in-depth thoughts on this but it’s hard to converse in youtube comments.

  • @adamcaswell1924
    @adamcaswell1924 Před rokem

    Your outro reminds me of the Airwolf theme song 😂.

  • @mulasien
    @mulasien Před rokem

    "But Arch carefully, because there be daemons"...that was a good one 😆

  • @dinozaurpickupline4221

    God damn Snazzy, the beard is coming in strong, reminded me of Robbie Williams, RIP

  • @MarshalerOfficial
    @MarshalerOfficial Před rokem +1

    The Asahi Linux team are doing the same thing as the demoscene did on the Commodore Amiga.

  • @Z4KIUS
    @Z4KIUS Před rokem +6

    The issue with all these SoCs is the fact that they contain a lot of blocks you'll never use. Apple's PC chips are decent at being CPUs, but that's all; when you need more raw CPU power or more RAM they start falling off, and all these specialized blocks won't help when you just don't perform the tasks they support.
    Additionally, a powerful CPU and GPGPU can do things never imagined when they were designed, while upgrading existing specialized blocks is basically impossible, so we'll always need the CPU.
    Imagine a new codec emerges: at current resolutions it will be a lot of compute for the CPU, and suddenly it will stutter even though your machine could decode 12 streams of the former-latest codec at 1 W.

    • @udance4ever
      @udance4ever Před rokem +1

      This is where programmable logic will come in handy - we're not there at a general-purpose level, but we'll get there for small sub-units like the example you point out re: decoders soon!

    • @Z4KIUS
      @Z4KIUS Před rokem +1

      @@udance4ever FPGAs are great for prototyping but way too expensive for mass use. Also, I wouldn't count on having a capable enough chip to keep up for long; it may well be too weak for next-gen streams in the case of video decoding.
      But that probably could work for encoders, when you don't need it done live, just faster than on the CPU - that could work.

    • @udance4ever
      @udance4ever Před rokem

      @@Z4KIUS Technology is always advancing - there will be a generation of chips where the whole concept of "general purpose computing" will be completely redefined. Now whether or not we see these go mainstream in our lifetime remains to be seen! 🔮

    • @floppa9415
      @floppa9415 Před rokem

      lol, I didn't see your comment and just wrote the same

    • @haomingli6175
      @haomingli6175 Před rokem

      I think you will be using all the blocks on the Apple Silicon SoC. None of them is unnecessary.
      You can see that there is already little room to improve in terms of silicon. The only way to increase performance given a size/weight limit would be to design more and more specialized blocks. You'll just have to buy a new machine to get an upgrade. That also won't be too frequent, because usually people will be able to use a single laptop for at least 3-5 years, sometimes even more, without needing to upgrade.
      Then if a new codec emerges, new hardware is also going to appear. It seems to me that codecs are updated far less frequently than hardware anyway, so that is not something to worry about.

  • @Slurkz
    @Slurkz Před rokem

    Great video, thanks a lot! 💜

  • @notjustforhackers4252
    @notjustforhackers4252 Před rokem +3

    Linux and the dev community around it are awesome.

  • @goodmew1763
    @goodmew1763 Před rokem

    Gorgeous video!

  • @xrobertcmx
    @xrobertcmx Před rokem

    My M1 Air lost to my 4700U at encode, but this also explains why the file sizes are larger on the M1.

  • @goobfilmcast4239
    @goobfilmcast4239 Před rokem +1

    8:11 ....... Not yet! I am not a dev, coder, etc., but it seems to me that these are just complex programming "issues" that will require brute force (and funding) to overcome. Anyway, hardware companies that focus on consumer-level x86-based products are simply afraid that the end of the one-size-fits-all OS model will drastically cut into the last advantage they now hold. Basically, that means diminishing appeal (and profits) as high-performance portable devices and ARM-style ASIC solutions gain steam... no pun intended.

  • @GreenBlueWalkthrough
    @GreenBlueWalkthrough Před rokem

    Yeah, that's nice for mobile devices and game consoles in certain cases, but what about the rest? Like, how would they replace a discrete GPU? And why should they? So what I think they should do is simply have some chips that are massive, instead of the "smaller is always better" we have now.

  • @MicrophonicFool
    @MicrophonicFool Před rokem

    I totally agree. When Vulkan is native, the limits will fall

  • @sameerjg
    @sameerjg Před rokem +4

    Well, I love Apple Silicon, it's just that the lack of sales is a bit meh, so let's see what they'll do about that. It definitely makes sense, since it's again not really worth the upgrade, but I hope they make the performance jumps higher or even more efficient. Let's see what they do with TSMC 3nm. Still waiting for that Mac Pro

    • @yarnosh
      @yarnosh Před rokem

      It's a pretty big upgrade for people doing video editing. The M1 alone basically replaced a Mac Pro costing many times more. The problem with task-specific accelerators is that they are..... task specific. The average user won't see the big leap in performance. For the general consumer, the power savings is where you get the most benefit, while being performance competitive, but not performance dominating.

    • @Park93
      @Park93 Před rokem +5

      For laptops, I would absolutely say that the upgrade is justified. The doubled battery life, lack of overheating, and insanely fast performance is really felt. I switched from the latest Intel MacBook Air to the M1 MacBook Air and the difference is incredible.

    • @sameerjg
      @sameerjg Před rokem

      @@yarnosh To clarify, it's about Apple Silicon performance upgrades from M1 to M2. The M2 has not sold so well. And also there will be a Mac Pro, which is just a higher-end chip, but probably on M3, since it's 3nm, if they can finish it lmao.

    • @sameerjg
      @sameerjg Před rokem +1

      @@Park93 It's about M1 to M2, since the M2 has sold poorly in comparison. From anything else to M1: amazing

    • @JacquesCoetzerAU
      @JacquesCoetzerAU Před rokem +4

      In my personal, perhaps slightly biased opinion at least one of the reasons for Apple's slump in M2 sales could be to do with the fact that these M1 processors are already such an insane upgrade over the rubbish Intel processors of the past that it will last many users for years and years to come. There simply is no incentive for the incremental upgrade to M2 unless you're living on the cutting edge, working in an industry that demands the latest and greatest or can afford these upgrades as a casual user. I'm quite happy to stick with my M1 Pro for the time being, unless my circumstances change. M2 is a nice to have, but not a necessity for most people. This is all opinion and speculation though, there are probably other factors to consider like the unstable global economy and whatnot that could be impacting sales too.

  • @GayDingo
    @GayDingo Před rokem

    I love your very Spanish "Nada" at 8:25

  • @JustATempest
    @JustATempest Před 11 měsíci

    I want to point out that at the very end, around 10:30, you mentioned that Apple is moving to software-defined silicon. I recognize you probably meant that they're moving towards features that require custom drivers they control. A software-defined chip is a chip that's designed to be programmed to become a CPU; you could program it to act like any CPU.

  • @gjermundification
    @gjermundification Před rokem

    2:44 How can I access the linear algebra part from Xcode?
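
    One common answer, sketched in C: link Apple's Accelerate framework from Xcode (or clang main.c -framework Accelerate) and call its BLAS. Accelerate's BLAS is widely reported to be backed by Apple's matrix hardware, though Apple doesn't document the routing, so treat that as an assumption:

      #include <stdio.h>
      #include <Accelerate/Accelerate.h>

      int main(void) {
          /* C = A * B for 2x2 row-major matrices, via BLAS sgemm. */
          float A[4] = {1, 2, 3, 4};
          float B[4] = {5, 6, 7, 8};
          float C[4] = {0};

          cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                      2, 2, 2,      /* M, N, K */
                      1.0f, A, 2,   /* alpha, A, lda */
                      B, 2,         /* B, ldb */
                      0.0f, C, 2);  /* beta, C, ldc */

          printf("%g %g\n%g %g\n", C[0], C[1], C[2], C[3]);  /* 19 22 / 43 50 */
          return 0;
      }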

  • @Eric-ue5mm
    @Eric-ue5mm Před rokem

    I knew why I followed you all these years ago. We really have enough "tech" YouTubers that know next to nothing about tech. 💪

  • @javierandreiotaku
    @javierandreiotaku Před 11 měsíci

    Good video, but I noticed that you say Rise of the Tomb Raider runs using Proton, but it's a native game on Linux

  • @phenixnunlee372
    @phenixnunlee372 Před rokem

    To shamelessly plug one of my favorite professors: Peter Milder co-wrote a paper, "Single-Chip Heterogeneous Computing: Does the Future Include Custom Logic, FPGAs, and GPGPUs?" You can find it on his personal site, but yeah, it basically came to the same conclusion.

    • @snazzy
      @snazzy  Před rokem

      I’ll go check it out!

  • @KarloSiljeg-ci6wg
    @KarloSiljeg-ci6wg Před rokem

    This is why new architectures and task-specific ICs will come back into fashion. The availability of open cores and open ISAs like RISC-V will push some architectures to the side over the next 5-7 years

  • @veryfrosty
    @veryfrosty Před rokem +7

    I know a Grand Seiko when I see one, excellent choice 😎

  • @SchioAlves
    @SchioAlves Před rokem

    AMD started a similar approach (bringing the performance of lower-level programming without the hassle of it, through an API) with its GPUs in the early 2010s with the Mantle API, whose promises were later repeated by Apple when releasing its equivalent, Metal. Mantle was then donated and became the basis of Vulkan

  • @Robert-sj8ld
    @Robert-sj8ld Před rokem

    Good subject👍

  • @joatmor
    @joatmor Před rokem

    Linux on my M1 macbook sounds great. I hope it gets Gnome Shell support.

  • @justlou717
    @justlou717 Před rokem

    Great video!

  • @dv_xl
    @dv_xl Před rokem +2

    The barrier to ubiquitous software support is well-defined APIs. Apple doesn't want to design a hardware platform, but if a CPU/GPU/ASIC manufacturer wants to provide features like this, they will be required to have well-defined APIs from which people will produce Linux drivers.
    The obvious new challenge is that as the complexity and specificity of these components increases, so does the requirement for more complex layers of logic, and so comes the requirement for API standards.
    The reason 3D graphics are becoming more ubiquitous is Vulkan, which provides that usable software API for GPUs. Its creators, the Khronos Group, work on royalty-free, open standards for video. Without this integral piece, we end up with a DirectX situation all over again.
    To make matters worse, with the loss of revenue from the loss of exponential growth at these silicon manufacturers, I predict we'll see hardware manufacturers start to claw back revenue from the software side. Intel has already begun to do this, charging a fee to unlock specific functions of the hardware. My biggest fear is that in ~10 years CPU manufacturers start selling their silicon as a service, where a monthly "licensing fee" is required for your CHIP+ experience, or whatever.
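
    A short C sketch of the "well-defined API" point above: the same few Vulkan calls list whatever GPU is present, whoever made it. Assumes the Vulkan SDK/loader (-lvulkan); on macOS this goes through MoltenVK, itself an example of an open API layered over a proprietary one:

      #include <stdio.h>
      #include <vulkan/vulkan.h>

      int main(void) {
          VkApplicationInfo app = { .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
                                    .pApplicationName = "list-gpus",
                                    .apiVersion = VK_API_VERSION_1_1 };
          VkInstanceCreateInfo ci = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
                                      .pApplicationInfo = &app };
          /* (With MoltenVK, a recent loader may also want the
             VK_KHR_portability_enumeration extension enabled here.) */
          VkInstance inst;
          if (vkCreateInstance(&ci, NULL, &inst) != VK_SUCCESS) return 1;

          uint32_t n = 0;
          vkEnumeratePhysicalDevices(inst, &n, NULL);
          VkPhysicalDevice devs[16];
          if (n > 16) n = 16;
          vkEnumeratePhysicalDevices(inst, &n, devs);

          for (uint32_t i = 0; i < n; i++) {
              VkPhysicalDeviceProperties props;
              vkGetPhysicalDeviceProperties(devs[i], &props);
              printf("GPU %u: %s\n", i, props.deviceName);   /* vendor-neutral */
          }
          vkDestroyInstance(inst, NULL);
          return 0;
      }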

  • @br3nto
    @br3nto Před 11 měsíci

    1:39 Why couldn't a breakthrough like this happen? For example, I was reading in a museum, in the nuclear physics section, that neutrons can/have to be guided like light because of their wavelike properties. Besides, the bosonic properties of light may be the saving grace for increased computation in the same region of space.

  • @michalsulicki5148
    @michalsulicki5148 Před 11 měsíci

    Great material; however, all of this was once done and it ran super fast and stable. There was once a machine called the Amiga :)

  • @tipoomaster
    @tipoomaster Před rokem +5

    Dreaming of a take on the Steam Deck with M3. But Apple would have to massively court developers to bring native AAA games to Apple Silicon and Metal 3, unless they did something like Proton, which maybe they should throw a bag of cash at Valve to do.

    • @richmahogany1710
      @richmahogany1710 Před rokem

      Valve would happily do it for free I would imagine, but they would likely require the vulkan API.

  • @Piipperi800
    @Piipperi800 Před rokem +2

    I think one great example of people already copying Apple in this regard is AMD and Intel with their dedicated accelerators. Since the 11th-gen CPUs' iGPUs, Intel has been at the top of the game in terms of supported hardware-accelerated video codecs. And recently AMD has started to level up their media accelerators with RDNA 3 as well, since before RDNA 3 their media encoders in particular were not really great. Intel's iGPUs now also have some form of AI accelerators.

    • @1idd0kun
      @1idd0kun Před rokem

      And AMD also integrated AI accelerators in their new mobile SOCs (the ones they call Phoenix Point). Too bad laptops based on those chips aren't out yet (they were delayed until next month if I'm not mistaken).

  • @nicksterwixter
    @nicksterwixter Před rokem

    This was such a good video!

  • @ryandietrich8604
    @ryandietrich8604 Před 11 měsíci

    Only 4 comments mentioning FPGA's? This is the answer, and I can't wait. Multi-core FPGA would be insane (where they would get more refined over time based on the task load).

  • @SuperSpy00bob
    @SuperSpy00bob Před rokem

    Lina and the rest of the Asahi team have been kicking ass lately.

  • @capricorndavid
    @capricorndavid Před rokem

    Spot on! Great video!

  • @Z4KIUS
    @Z4KIUS Před rokem +1

    Chrome OS uses the Linux kernel and uses containers, not virtualization, to run everything else, even if the separation strength is set to hardcore

    • @mskiptr
      @mskiptr Před rokem

      ChromeOS still opts to run things in VMs. I guess it's a security decision