NO other PC can match it for $4,000? - M1 Ultra Mac Studio review

  • Added 14. 05. 2024
  • 🌏 Get Exclusive NordSecurity deals here ➜ nordsecurity.com/linus All products are risk-free with Nord's 30-day money-back guarantee! ✌
    Get 10% off all Jackery products with code LinusTechTips at www.jackery.com/?aff=27
    Apple claims the Mac Studio with M1 Ultra is the most powerful computer you can buy for $4,000. Does that put AMD, Intel, and Nvidia on notice, or is Apple making claims they can't back up?
    Discuss on the forum: linustechtips.com/topic/14390...
    Buy an Apple Mac Studio M1 Max: geni.us/c9DL
    Buy an Apple Mac Studio M1 Ultra: geni.us/DFJvaa
    Buy an Apple Studio Display: geni.us/3ho7o
    Buy an Apple Magic Keyboard: geni.us/8ezY
    Buy an Apple Magic Mouse: geni.us/h04VhmR
    Buy an Apple MacBook Pro 14” M1 Pro: geni.us/S8vIu
    Buy an Intel Core i9-12900K: geni.us/516f8Q
    Buy a Nvidia GeForce RTX 3090: geni.us/eZkB
    Purchases made through some store links may provide some compensation to Linus Media Group.
    ► GET MERCH: lttstore.com
    ► SUPPORT US ON FLOATPLANE: www.floatplane.com/
    ► AFFILIATES, SPONSORS & REFERRALS: lmg.gg/sponsors
    ► PODCAST GEAR: lmg.gg/podcastgear
    FOLLOW US
    ---------------------------------------------------
    Twitter: / linustech
    Facebook: / linustech
    Instagram: / linustech
    TikTok: / linustech
    Twitch: / linustech
    MUSIC CREDIT
    ---------------------------------------------------
    Intro: Laszlo - Supernova
    Video Link: • [Electro] - Laszlo - S...
    iTunes Download Link: itunes.apple.com/us/album/sup...
    Artist Link: / laszlomusic
    Outro: Approaching Nirvana - Sugar High
    Video Link: • Sugar High - Approachi...
    Listen on Spotify: spoti.fi/UxWkUw
    Artist Link: / approachingnirvana
    Intro animation by MBarek Abdelwassaa / mbarek_abdel
    Monitor And Keyboard by vadimmihalkevich / CC BY 4.0 geni.us/PgGWp
    Mechanical RGB Keyboard by BigBrotherECE / CC BY 4.0 geni.us/mj6pHk4
    Mouse Gamer free Model By Oscar Creativo / CC BY 4.0 geni.us/Ps3XfE
    CHAPTERS
    ---------------------------------------------------
    0:00 Intro
    0:55 Meet the test suite
    1:14 Gaming benchmarks and testing issues
    3:24 World of Warcraft - For real this time
    5:12 Productivity benchmarks - Synthetics
    6:16 Productivity benchmarks - Real-world
    9:28 A note on storage speed + PugetBench
    10:42 Thermals
    12:21 Power & Efficiency
    14:05 The Mac Studio's design... And the SSD
    15:47 M1 Ultra - Apple needs to stop overpromising
    17:00 Mac Studio - Priced all wrong
    18:13 Conclusion
  • Science & Technology

Comments • 3.6K

  • @trevorwright6186 · a year ago · +2783

    I really wish you guys would start doing benchmarks for the audio crowd. Testing DAW performance. Edit: but also thank you for everything you guys already do!

    • @LinusTechTips · a year ago · +2054

      We have plans, but we haven't had the opportunity to put something together yet. It's probably going to be a job for the Labs to develop. -AY

    • @viper627 · a year ago · +65

      How the hell does a reply get more likes than the actual comment?

    • @Tirth-Patel · a year ago · +10

      @@viper627 🤫🤫🤫

    • @Yuzuki1337 · a year ago · +89

      For audio it mostly boils down to this: Can you afford a Mac? If yes it's your best bet - their drivers are just that good. (and that's coming from someone who doesn't like Apple)

    • @limen5442 · a year ago · +13

      @@LinusTechTips Woo! It would be nice to see discussion of MADI PCI-E interfaces, and edge use cases (Endless Analog's CLASP tape machine syncing, for example) and how they work with 2020s machines.

  • @Skrenja · a year ago · +2212

    I am no Mac fanboy, but _damn_ that power draw is impressive. Makes the rumoured 40 series GPUs look even more ridiculous now.

    • @NonsensicalSpudz · a year ago · +213

      except a dedicated GPU is more powerful in general; it does way more. There's a reason why dedicated cards are quite large - SoCs haven't hit that capability yet

    • @davide4725 · a year ago · +138

      Um, no. Maybe if Apple silicon could compare in performance to those GPUs, but we're not even close.

    • @merlozzo · a year ago · +19

      It would be interesting to compare the system power consumption with a lower-tier GPU, something with the performance of Apple silicon. That i9 is hungry for sure, but the GPU eats a lot too!

    • @chrishoppner7875 · a year ago · +65

      We had a massive hike in power prices where I live last July, and the power draw alone is enough for me to not even bother considering a 40 series GPU. I'd be paying more for power than I pay for rent just running that damn thing.

    • @catalyst429 · a year ago · +10

      I'd care if it was a laptop 😂. A Prius might get good mpg for its 0-60, but it's still slow, and that's not what people who care about 0-60 look at

  • @amendus · a year ago · +1219

    Hiring Anthony was the best thing Linus ever did. Love his reviews!

    • @adibafiq6945 · a year ago

      @Christian Michael huh?

    • @frankroquemore4946 · a year ago · +6

      For sure

    • @sharkshock9080 · a year ago · +18

      Agreed, I like his no nonsense tech nerd style.

    • @maxjames00077 · a year ago · +5

      👍 agreed

    • @oscarcontreras · a year ago · +13

      Here for this comment!!! I remember when this dude was so shy Linus had to push him in front of a camera, and he was so smart but so awkward. But finally, here he is: better late than never.

  • @andygatito1985 · a year ago · +75

    anthony is easily my favorite host of LTT. he just seems like such a chill, friendly guy who loves what he does. keep it up!

  • @MagNovax · a year ago · +1705

    Yay! Anthony video! I feel like I haven't seen him in a while. Probably cause he was preparing this review.

    • @erkkiboy · a year ago · +13

      Was thinking the same 😊

    • @falagarius · a year ago · +61

      Was preparing his brows

    • @BayareaXotics · a year ago · +3

      @@falagarius ☠☠☠

    • @normanish · a year ago · +10

      He’s looking fresh like a newly budding flower 🌾

    • @MarcusLB1998 · a year ago · +5

      How can you miss him? Lol

  • @Zt3v3 · a year ago · +245

    I never feel like Anthony is reading a script; he's getting better and better at hosting. Kudos man.

    • @mndlessdrwer · a year ago · +7

      Anthony is really good at reacting on the fly and his overall hardware and software knowledge is good enough that he can improvise as necessary, which is a very useful skill for video hosting. He's also a very authentic personality, like many people at LMG, which makes his videos quite entertaining.

  • @MoritzvonSchweinitz · a year ago · +239

    Small suggestion: I know your graphs always include a "lower/higher is better" legend. But if you'd somehow visually represent which side is better, it would be way easier to grasp the data quickly. Something like having the "better" side green or whatever.

    • @dallinsonntag3160 · a year ago · +6

      What, you can't read a graph? lol. That's such a nitpicky thing to worry about

    • @itaintitpewds · a year ago · +44

      @@dallinsonntag3160 to make better content, everything matters, small things pile up to make better content

    • @dallinsonntag3160 · a year ago · +1

      @@itaintitpewds you CANNOT tell me that switching colors would boost more views my friend

    • @itaintitpewds · a year ago · +31

      @@dallinsonntag3160 it is not about boosting views, it is about making better quality content. Small intuitive things make a lot of difference; you may not think about it, but it does for a lot of people. I watch Linus Tech Tips videos on things I'm not even interested in just because it is fun to watch - like home appliances. Why would I want to watch some guy build his house? Because it's just fun. That's how intuitive it already is, which proves my point that small things added over time pile up to make better content for everyone

    • @dallinsonntag3160 · a year ago

      @@itaintitpewds it sounds like you need to find a hobby my friend

  • @joergengeerds360 · a year ago · +15

    side note for the ProRes encoding section: ProRes still has a significant encoding overhead and is not fully limited by the R/W speed of your media, but is still more limited by CPU/GPU performance. To measure pure/raw render-write performance, render to a TIFF sequence (uncompressed) to see the performance finally being limited by the media and not the CPU
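
    A minimal sketch of that test, assuming ffmpeg is on PATH and a local clip named test.mov (file names and settings are placeholders, not a calibrated benchmark):

    ```python
    # Compare a ProRes encode against an uncompressed TIFF sequence write:
    # the first is dominated by encoder overhead, the second by media speed.
    import subprocess
    import time

    def timed_export(args: list[str]) -> float:
        start = time.perf_counter()
        subprocess.run(["ffmpeg", "-y", "-i", "test.mov", *args],
                       check=True, capture_output=True)
        return time.perf_counter() - start

    prores = timed_export(["-c:v", "prores_ks", "-profile:v", "3", "prores.mov"])
    tiff = timed_export(["-c:v", "tiff", "frame_%06d.tif"])
    print(f"ProRes HQ: {prores:.1f}s   TIFF sequence: {tiff:.1f}s")
    ```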

  • @Baelthaazar · a year ago · +486

    Note: the artifacts blinking in and out of existence on that boss in WoW were an issue in one of the patches. It was happening on PC as well.

    • @Pynoxim1 · a year ago · +18

      It's clearly because they're in the area "Shimmering Cliffs" /s

    • @ItsFlipsy531 · a year ago · +3

      Yeah, I was about to comment this; the Halondrus boss room is buggy af lmao

    • @zes7215 · a year ago

      wrrr

    • @sweatybrawls · a year ago

      Yeah I get the same thing on my 3080ti in a few places in Korthia and in SotFO

  • @simmehchan · a year ago · +501

    That boss in World of Warcraft was bugged recently, with assets blinking in and out of existence. I experienced it on PC too, so I can't be sure it was the Mac's issue.

    • @NonsensicalSpudz · a year ago · +6

      yeah especially with WoW being one of the first m1 native games

    • @josephcrapper6395 · a year ago · +14

      Was going to comment this; it's still broken now, one of the only fights with this kind of glitch currently in the game. Very unlucky LTT, good review as always though!

    • @dirtiestharry6551 · a year ago

      @David V We get it, Microsoft touched your no no when you were a kid, let it go now

    • @hnmAck · a year ago · +3

      @David V oh you, untrickable David.

    • @Shadowninja1200 · a year ago

      @David V Sure you run all those, but it doesn't mean you know what a slanted review looks like. - Me: the guy who has an M1 laptop and a gaming PC with Windows and a Linux VM running on it.

  • @darkroast9907 · a year ago · +91

    The biggest issue I have with M1 in general is that programs are either exceptionally performant or wildly behind Windows computers, and it all boils down to whether or not they're designed with M1 in mind. That wouldn't be so bad if not for the fact that the speed at which developers have tried to optimize their programs for M1 has been so slow that, by the time Apple silicon is properly utilized, we'll be at M3 at the very least. Like the iPad Pro - it's a lot of power and not a lot of ways to use it.

    • @ladislavzima8382 · a year ago · +6

      Still, the speed of optimizing on Apple is the fastest compared to other platforms.

    • @Teluric2 · a year ago · +1

      Now the fans blame devs because the M1 runs poorly. They're not aware that the M1 is overrated; no matter what devs do, it won't run faster.

    • @wiskdee · a year ago · +6

      @@Teluric2 unrealistic; it's clear with Apple's own video editing software that the new technology can be optimised to get insane performance - editing 16 RAW timelines at once with no lag is insane and mind-blowing. It's in its first gen, but by gen 4 I think optimisation for the chip will be done and the true power can be unleashed

    • @spencerrr9878 · a year ago · +4

      @@Teluric2 You can see in this video that applications and workloads developed specifically to run on this specific ARM hardware run exponentially better and faster than their Windows counterparts. It's not the chip/architecture; it is literally just developers lagging behind (as always) on pushing updated app versions (which in some cases is understandable because it may require a complete application rewrite, but still, it's been damn near 3 years). It. Is. Not. The. Chip.

    • @rbdan · 4 months ago

      This comment seems really inaccurate? I can never tell if something is running on x86 or arm64, Rosetta does a pretty good job and I've never been able to tell the difference on my Mac.

  • @slowzen · a year ago · +170

    Genuinely appreciate the breadth of coverage. A breath of fresh air after seeing everyone just cover video rendering and move on, as if everyone watching was doing just that.

    • @thepgo666 · a year ago · +18

      I really hate that part about youtube reviews. I understand that it's their daily bread and what they understand, but it's kind of a bubble that probably not more than 0.01% of the audience is in. Cinebench is cool as a reliable multicore benchmark, but video encoding doesn't really give us a real world usage metric.

    • @plankera · a year ago · +1

      Reviewers should give it the Minecraft test.

  • @jazz836062 · a year ago · +413

    At ~4:20, I experience the artifacts you are referencing on my PC version of that fight as well. The fight had some recent bugs that I am told are fixed now, but I haven't been able to verify. I have been experiencing those buildings disappearing since day 1 of the raid. This is not just a Mac issue.

    • @michealpersicko9531 · a year ago · +2

      Maybe it only showed up for them during the PC testing since they can only comment on what they see and it wouldn't be too surprising given how potty Apple Silicon CPU's integrated GPUs are that they would have weird and bespoke graphical glitching.

    • @flugeldd · a year ago · +1

      do you have DX12 enabled? I've had some weird artifacting on an Intel + Nvidia system. When I switched back to DX11 it was fixed

    • @jazz836062 · a year ago · +1

      @@flugeldd I have DX12. I'll give DX11 a shot when we get back there.

    • @risingstar1309 · a year ago · +2

      Nice

    • @peas320 · a year ago · +1

      Yeah there are a lot of people in my raid team who also saw that pillar in particular flickering in and out and it seemed to be based on camera position and angle

  • @krounde7392 · a year ago · +407

    The assets disappearing in WoW is a common problem; I experience it sometimes too. Doubt it's related to the Mac.

    • @terxlas · a year ago · +9

      Purely anecdotal of course, but I've had multiple Windows laptops and a desktop (current), none of which have shown the "flickering" effect of certain assets popping in and out of the game; however, back a couple of years ago on a MacBook Pro I did notice the issue frequently.

    • @jbruning1291 · a year ago · +8

      The only time I've noticed it was on setups that were Linux or Mac native systems, or Windows systems with out-of-date drivers. The Linux users even managed to patch the glitching themselves with their witchcraft.

    • @josejuanandrade4439 · a year ago · +2

      I've never had that issue on Windows. And I use a huge array of PCs - everything from Nvidia to AMD, and I even play on laptops and desktops with iGPUs. Never seen that. I could list all the different configurations of PCs but we'd be here all day lol.

    • @babaganoosh7020 · a year ago · +4

      have you ever talked to a girl without giving your credit card number first

    • @uk2k007 · a year ago

      @@babaganoosh7020 caption of the week 🤣

  • @AuthenTech · a year ago · +222

    As a real world test, editing content in FCP is insanely fast and efficient, my short form verticals export in mere seconds. It’s nutty for video creators

    • @mariankallinger7984 · a year ago · +22

      In the real world people are using FCP?

    • @SeraphX2 · a year ago · +9

      The only people that would need to spend 4k on a productivity machine.
      Graphics people... maybe, but they could get by with less.
      Everyday people who might use a Mac laptop of some sort, who are just pushing Excel sheets around, won't need to spend this much.
      So basically, they've made a machine with potentially/theoretically the power to run games as well, but due to being dumb about how they support stuff, basically made a machine only video editors should buy.
      How does Apple even exist.

    • @RK-lq2ud · a year ago · +23

      @@mariankallinger7984 it's cheaper than an Adobe licence, so yeah, surprise surprise, people actually use FCP and Logic for professional work

    • @simpleton7 · a year ago · +5

      My work laptop is an HP ZBook with an RTX 3000, versus my personal M1 Pro laptop, which honestly is a bit more modern and more expensive. The main difference is none of the weird hitches the Windows laptop has without any clear performance bottleneck; they both behave differently, even when I'm doing very low-impact stuff like editing lower-res files in Photoshop. The Mac is of course pushing more pixels with the internal display too, and I've never once turned the speakers up to drown out fan noise.

    • @casdragon_5939 · a year ago · +2

      I use mine for 3D modeling/rendering and it's amazing.

  • @BenKlassen1 · a year ago · +53

    Thank you Anthony! Great review! I'm in a cold climate so I'll stick with a PC for the heat and repairability.

  • @ChristianStout · a year ago · +824

    I'd also like to see how it compares to a fully spec'd-out 2019 Mac Pro.

    • @leorawesome9518 · a year ago · +75

      The 2019 isn't gonna do well 😂

    • @emma70707 · a year ago · +25

      Much, much better for most things... the Intel chips ran super hot and would throttle.

    • @rossharper1983 · a year ago · +3

      You can extrapolate that with the information already available to you

    • @chudchadanstud · a year ago · +14

      Literally became obsolete the year it came out. Can't even upgrade the chip.

    • @flamingkillermc2806 · a year ago · +11

      @@leorawesome9518 Not true, a fully maxed-out $50k Mac Pro will beat it. Watch Max Tech's video; he has already done this

  • @devnull5109 · a year ago · +295

    The artifacting mentioned at 4:16 also occurs exactly the same way on my Ryzen + RTX 3080 machine running Windows. Anecdotal, but it may not be exclusive to the M1

    • @fynale2049 · a year ago · +22

      It's actually a bug in Halondrus' fight, due to the fight using a phased version of his arena that goes away once the encounter is done.

    • @josejuanandrade4439 · a year ago · +1

      @@fynale2049 How come i've never seen that bug then?

    • @Doofindork · a year ago · +2

      @@josejuanandrade4439 The bug simply doesn't apply to everyone. Works fine on some computers but glitches out on others.

    • @fynale2049 · a year ago · +2

      @@josejuanandrade4439 seems to be a DX12 issue. My guild had to pause a glory run because it made some of the orbs needed for the Halondrus achievement disappear.

  • @TheTargetedScapegoat · a year ago · +3

    Always awesome, Anthony.
    I'm glad to see you tested Macs in environments with software that they are used for, or at least have been known to be used for. Intensive audio processing in multitrack studio applications or just high-resolution Wacom Cintiq digital painting is where my semi-pro interests lie.

  • @SmooviesTV · a year ago · +169

    Essentially the main thing bogging down Apple silicon is the lack of third-party support. Really hoping that Apple starts incentivizing devs to expedite M1 apps, because Rosetta isn't nearly as fast as Apple claims

    • @Teluric2 · a year ago

      So all the claims that the M1 would make a revolution were a bluff. Many companies are not developing for M1. No Autodesk, no CATIA, no NX

    • @grn1 · a year ago · +7

      Apple's plan to incentivize app devs isn't likely to change from what it's more or less been since iPhone first released (possibly sooner). Push their stuff onto as many consumers as possible and sign exclusivity deals so app devs have to support it if they want to reach the largest possible user base. The reason iPhones are as common as they are is because of the way Apple pushed them onto the masses by making it more affordable to get their fancy phone (by paying monthly) when other manufacturers still expected people to buy their phones out right (which I always do). Of course now everyone does it this way but that is, from my understanding, a large part of how they got such a huge market share (they also used other forms of psychological warfare like making SMS an ugly green and encouraging that exclusivity (there's gotta be a better word) type of mindset).

    • @TinyBearTim · a year ago

      I don't even think 3ds Max runs on a Mac, so game devs won't buy them even if they are good, unless they load Windows onto them

    • @idoltrash2353 · a year ago · +11

      The problem is that every time devs get used to a particular framework, Apple go and change everything on a whim. They have absolutely 0 regard for backward compatibility and it's nearly impossible to keep up. This is how they lost the scientific computing market: 10-15 years back you'd go to a conference and everyone would have MacBooks, but over time, with Ubuntu becoming more user-friendly and WSL2 being genuinely very good, people just got fed up with accidentally letting the OS update and finding all their open source software broken.
      Apple have been steadily painting themselves into a corner, saved only by their brand image and an almost militant approach to exclusivity, and that doesn't mix well with genuinely productive third-party relationships.

    • @TheStopwatchGod · a year ago

      They only claimed it's faster than the Intel Mac the Apple silicon one replaces

  • @jtrevathan33 · a year ago · +392

    I would love to see how engineering workloads like SPICE, MATLAB, NumPy, and CAD perform on the Mac.

    • @kobold2376 · a year ago · +32

      I think so as well - all the ML stuff. Recently PyTorch added support for the GPU on M1, which is really nice because of all that unified memory; you could prototype some deep learning really nicely on the M1
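
      A minimal sketch of that GPU path, via PyTorch's "mps" backend (available since PyTorch 1.12; the matrix size and loop count are arbitrary, not a calibrated benchmark):

      ```python
      import time
      import torch

      # Fall back to CPU where the Metal backend isn't available.
      device = "mps" if torch.backends.mps.is_available() else "cpu"
      x = torch.randn(4096, 4096, device=device)

      start = time.perf_counter()
      for _ in range(10):
          x = (x @ x) / x.shape[0]   # normalize so values stay finite
      if device == "mps":
          torch.mps.synchronize()    # wait for queued GPU work before timing
      print(device, f"{time.perf_counter() - start:.3f}s")
      ```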

    • @dzjuben2794 · a year ago · +12

      Probably not too well, unfortunately

    • @hishnash · a year ago · +10

      @@dzjuben2794 depends a lot on the workload. If you're doing tasks that are very bandwidth-heavy, the massive bandwidth that the M1 Max/Ultra have for the GPU alone is very impressive, and if you're using a build of NumPy/SciPy that uses the AMX units then you also have a LOT of extra compute: up to 8 TFLOPS of FP32 and 4 TFLOPS of FP64 (and I think 1 TFLOP of FP80) for matrix ops.
      The massive memory bandwidth and rather large on-die cache can make a massive difference in data-sci workflows where one is very commonly matching hashes, joining and filtering - things where the raw compute cycles are a tiny fraction of the work compared to pumping data through the pipe.
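
      One rough way to check whether a NumPy build routes matmuls through Accelerate (and so potentially those AMX units), plus a crude FP32 throughput figure; illustrative only, sizes arbitrary:

      ```python
      import time
      import numpy as np

      np.show_config()   # look for Accelerate/vecLib in the BLAS section

      n = 4096
      a = np.random.rand(n, n).astype(np.float32)
      b = np.random.rand(n, n).astype(np.float32)
      start = time.perf_counter()
      a @ b
      elapsed = time.perf_counter() - start
      # One n x n matmul costs roughly 2*n^3 floating point operations.
      print(f"{2 * n**3 / elapsed / 1e12:.2f} TFLOPS FP32")
      ```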

    • @alexstubbings_ · a year ago · +15

      Right now MATLAB and Simulink run very poorly, especially Simulink. MATLAB should be getting an Apple silicon update in 2023a, but Simulink probably won't get updated for quite some time. SolidWorks runs fine on my M1 Pro via Parallels, but not noticeably differently from the M1 MacBook Air. Fusion should be getting an AS update soon, so that should be interesting to see.

    • @hishnash · a year ago · +4

      @@alexstubbings_ For sure, apps that have not been updated in this space, like MATLAB etc., are not good. But the Python data-sci space is now finally a fully native stack and is looking very good.

  • @NexuJin · a year ago · +230

    I'm really missing any testing of DPC latency on these systems. It's such a hidden spec that only comes into the picture for most people when they start making/playing music or doing anything that relies on real-time processing of data. And these Macs aren't just targeted at video editors, but also music producers. 19 ms round-trip latency is just enough to notice with audio.
    I know LTT isn't into making music and only does video stuff. But it's somewhat important as more people start using their computers for multimedia purposes, including streaming. Often wireless (because they can), and that's where the problems come into play: high DPC latency caused by either GPU or WiFi drivers.
    So please, start considering testing laptop and desktop systems on their DPC latencies!
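
    For context on that 19 ms figure: buffer latency alone is buffer_size / sample_rate per direction, and converter/driver overhead (the part DPC latency spikes inflate) comes on top. A quick worked example with common settings:

    ```python
    SAMPLE_RATE = 48_000   # Hz

    for frames in (64, 128, 256, 512):
        one_way_ms = frames / SAMPLE_RATE * 1000
        # Round trip is roughly input buffer + output buffer; real interfaces
        # add a few ms of converter/driver overhead on top of this.
        print(f"{frames:4d} frames: {one_way_ms:5.2f} ms one-way, "
              f"~{2 * one_way_ms:5.2f} ms round trip minimum")
    ```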

    • @D-One · a year ago · +13

      I would also like to see some audio tests, even if very simple ones, but for that you have to go to specialized channels; LTT generally only cares about gaming and a bit of video.
      Isn't that dependent on what audio interface you use? If you have a $4k computer I assume you will have something like an RME, UAD Apollo, Apogee, etc. - aka a decent interface with very optimized drivers.

    • @coolinmac · a year ago · +5

      No one cares

    • @jamaharon · a year ago · +5

      DPC latency is more a function of OS optimization and drivers than of the hardware. In the context of DAWs it's used more to resolve issues than to assess performance. About your 19ms remark: DPC latency numbers are related to audio buffer size but they are not the same thing; any modern system today can go below 5ms round-trip latency.

    • @drumphil00 · a year ago · +1

      I'd certainly like to hear more about DPC latency with apple silicon. I imagine that the link between audio glitches in systems with the T2 security chip and USB2 audio devices would have shown itself in DPC latency results.

    • @welchomestudio · a year ago · +14

      @@coolinmac Millions of people throughout the world make music. There are more people making music and owning a home studio than people editing videos. So... no one cares? Really?

  • @Frankthegravelrider · a year ago · +8

    Would be great to see TensorFlow benchmarks: CPU mode, GPU mode and CUDA. A lot of us deep learning engineers like to do small runs on our Macs before we push things onto the cloud.
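
    A sketch of that kind of small pre-cloud run; on Apple silicon the GPU device assumes the tensorflow-metal plugin is installed, and the sizes and loop counts are arbitrary:

    ```python
    import time
    import tensorflow as tf

    print(tf.config.list_physical_devices())   # is a GPU visible at all?

    x = tf.random.normal((2048, 2048))
    for device in ("/CPU:0", "/GPU:0"):
        try:
            with tf.device(device):
                start = time.perf_counter()
                for _ in range(20):
                    y = tf.matmul(x, x)
                _ = y.numpy()                  # force pending work to finish
            print(device, f"{time.perf_counter() - start:.3f}s")
        except (RuntimeError, tf.errors.InvalidArgumentError):
            print(device, "not available")
    ```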

  • @solidreactor · a year ago · +22

    For that price, would it make sense to compare it to a Threadripper 5000 series? Would be interesting to see how ARM compares to both x86 vendors.
    Also, x265 is quite performant on Radeon cards, so it would also be interesting to compare both GPU vendors

    • @jimmybayconn · a year ago · +1

      yeah, configuring something similar with a Threadripper is clearly the fairer test. BUT Apple claims to be faster than the 12900K, so they've got to test it against that

  • @kllrnohj · a year ago · +197

    Developer here with the angry (not really) comment. My primary complaint/issue would be comparing the M1 Ultra against the 12900K instead of the 5950X. Which is actually kinda the complaint I'd have in general. Per LTT's own review, the 5950X was the stronger overall "productivity" CPU. Given the focus of the M1 Max & Ultra, it then seems odd to use the 12900K instead of the 5950X.

    • @mastroitek · a year ago · +54

      I think it is because Apple used the 12900K as a comparison for the M1 Ultra. But yeah, here it should be "fastest $4k PC vs M1 Ultra", so a 5950X would make sense.

    • @cup_and_cone · a year ago · +11

      It's not called AMD extreme makeover.

    • @andrewfrey6960 · a year ago · +1

      I want a full AMD build comparison. While Intel's CPU might be slightly faster, AMD's is much more efficient. Power usage would show its true potential. Also, I have yet to see benchmarks for AMD's HIP support in Blender.

    • @llothar68 · a year ago · +6

      Don't agree: with its lower idle power and being better in almost everything compared to the 5950X (performance, boards, PCIe), the 12900K is for sure the better value for developers. But in the end both are in the same league. Just as the Ultra is, except for the price league, where it is an unbelievable 300% more expensive; 2500 against 7000 euros hurts.

    • @randombloke10 · a year ago

      I imagine despite the audience of the M1 Max/Ultra equipped machines, the viewers of the channel are predominantly consumers

  • @stever1514 · a year ago · +41

    So wonderfully thorough... wtg. Loved that you figured out the one benchmark was faster because of the drive speed difference. I wish there was more coverage of SSD speed differences between current Mac models, and of the size differences too, since Apple only talks about the 8TB models and most reviewers spec out the base model only.

    • @justonefra · a year ago · +2

      Not only that, but I think compile time should have been measured on a Linux distro rather than on Windows. The Windows kernel is not really good for I/O performance, even more so with a lot of small files

  • @cultOfApple · a year ago

    Lots of tech details in here. Very informative... thx

  • @Nightcaat · a year ago · +12

    Regarding the CPU bottleneck in Dolphin: that’s normal. In Dolphin, the only demanding tasks that can run well in parallel are the CPU, GPU and DSP. Breaking up any of these into smaller tasks just to run it on more cores is likely to just make it run slower, so they don’t.

  • @5urg3x · a year ago · +43

    9:10 There's another tech channel that talked about this. The M1 Ultra supposedly gives you double the media encoder blocks, but they can't all be used at once due to some problem. I forgot the details.

    • @thebuddercweeper · a year ago · +1

      I think it's a software limitation... hopefully

    • @hishnash · a year ago · +5

      @@thebuddercweeper it is. You can use them at once currently, but only if you're encoding multiple streams, such as exporting more than one resolution at once.
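
      That multi-stream pattern, sketched with ffmpeg (one decode feeding two VideoToolbox encodes at different resolutions; file names are placeholders):

      ```python
      import subprocess

      # One input, two simultaneous hardware encodes - the shape of workload
      # described above that can occupy more than one encoder block at once.
      subprocess.run([
          "ffmpeg", "-y", "-i", "in.mov",
          "-map", "0:v", "-vf", "scale=3840:2160",
          "-c:v", "hevc_videotoolbox", "out_4k.mp4",
          "-map", "0:v", "-vf", "scale=1920:1080",
          "-c:v", "hevc_videotoolbox", "out_1080p.mp4",
      ], check=True)
      ```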

    • @thebuddercweeper · a year ago

      @@hishnash Ah yes I remember hearing that now

  • @szaszm_ · a year ago · +225

    7:15 I think platform differences can be a big factor, but I've never measured it directly. C++ compilation reads and writes many small source, header and object files, so I think filesystem performance can have a big impact. I can't say for sure, since I never benchmarked the same compiler on two different systems, but compiling on Windows/MSVC regularly takes 2-4x as much time as Linux/GCC or LLVM Clang (which take roughly the same time) on the project I'm working on, and I usually blame NTFS for this difference. On macOS, you could try Homebrew GCC and compare it to Linux GCC, since they are basically the same. Xcode Clang (aka AppleClang) and LLVM Clang are different, though.
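
    A crude harness for that kind of comparison: time the identical invocation on each OS/filesystem, so the compiler and the I/O layer are measured together (the compiler name and source file are placeholders):

    ```python
    import subprocess
    import time

    def time_compile(compiler: str, runs: int = 5) -> float:
        best = float("inf")
        for _ in range(runs):
            start = time.perf_counter()
            subprocess.run([compiler, "-O2", "-c", "main.cpp", "-o", "main.o"],
                           check=True)
            best = min(best, time.perf_counter() - start)
        return best   # best-of-N damps filesystem cache warmup noise

    print(f"clang++ best of 5: {time_compile('clang++'):.2f}s")
    ```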

    • @flamewingsonic · a year ago · +7

      I agree with this. In my experience, using GCC or clang on Windows through MinGW/msys2 is often around 2x-4x as slow as compiling with the same version GCC/clang on the exact same computer whether on Ubuntu directly or with Ubuntu through WSL2 for larger projects, where file I/O is a factor. But I wonder if the difference they have seen is not due to compiling with clang on OSX and GCC on the other system, given that OSX symlinks GCC to a version of Apple Clang.

    • @Razermantis7649 · a year ago · +2

      Platform differences are KING, I'm guessing

    • @benjaminschaaf · a year ago · +3

      I have the same experience. Of the 3, Windows is usually the slowest due to expensive process spawning and a horridly slow filesystem, whereas Linux is in my experience by far the fastest. Compiler differences are of course also huge. To properly compare the hardware you'd have to run the same OS on both.

    • @trickypr · a year ago · +3

      Another factor is the way the OS handles in-memory caching. Mozilla Build (and I imagine this is similar for chromium) takes advantage of OS memory caching to work around slower I/O layers, especially when linking. From my experience, windows is worse at managing this kind of cache, which will result in slower build times.

    • @hishnash · a year ago · +2

      People have done compilation comparisons on Apple silicon (on Linux, which is missing a load of boost modes on the CPU) and these chips outperform Intel/AMD workstations running Linux.
      Much of this is likely down to the large in-house LLVM talent Apple have; I'm sure they were consulted during the design of the underlying hardware.

  • @katchou1337 · a year ago · +1

    so much information :o gj (; enjoyed listening to you ^^

  • @Havirgem · a year ago · +1

    That's a review!
    I'm anxious to see LTT Labs' new material

  • @Satelitko · a year ago · +77

    4:14 - That's an issue with the game, not the system. At certain camera positions large props like that tower can disappear. I can only assume why it happens, but I can tell you how to reliably reproduce it if you want. It happens on my system all the time (3600X, 2070S, B450 MB).

    • @dylangarcia9468 · a year ago · +2

      he didn’t say it was an issue with the system

    • @VectorGaming4080 · a year ago · +1

      @@dylangarcia9468 He said he only experienced it on the M1 Ultra variant.

    • @markjacobs1086 · a year ago

      Looks like wonky object-culling code to me. Well, what can you expect from the relic of a codebase this is likely still running on. 😅

    • @dylangarcia9468 · a year ago · +1

      @@VectorGaming4080 he said it was the developers' fault though, for not having planned it out

  • @rwncop · a year ago · +28

    Would love to see M1 benchmarks for industry-level post-production software such as Avid Media Composer and Pro Tools. Don't think a single channel on YouTube has done this yet.

    • @VMYeahVN · a year ago · +8

      They haven't, and none will, unfortunately. It's realistically because no one who does YouTube will ever touch Avid Media Composer (I can't speak to Pro Tools though). Media Composer is pretty strictly only ever used at the industry level, like you said. No one on YouTube is operating at unscripted TV/scripted TV/feature film level, and any who would are young enough that they came up on FCPX, Premiere, and Resolve. I've used Media Composer on my M1 Pro MacBook Pro though, and as long as you have enough RAM (it still runs through Rosetta 2; Avid hasn't coded it natively for Apple silicon yet) it runs almost perfectly fine. You very easily forget it's running through emulation. Avid did a big rewrite of Media Composer almost from the ground up with Media Composer 2020, so it's more streamlined to be able to run through emulation. Avid themselves say that if your system only has 16 GB of RAM you should turn off certain features like the background phonetic indexer, since running through Rosetta 2 has a RAM utilization cost, but otherwise it runs great, especially if you have 32 GB+ of RAM.

    • @noblesse4728 · a year ago · +2

      Usually, for industry-level software, you go through corporate-level or enterprise customer service to ask for those numbers and benchmarks on the hardware.

  • @hardlyb · a year ago · +4

    I bought the M1-ultra Mac Studio today because I needed to upgrade my 14 year old Mac trashcan. I only run a few things, but often use all 64G of memory, and all 8 cores on the trashcan (really, all 16 hyperthreads), so I'm hoping this new machine is faster for those workloads. I'm expecting to have to (after more than 15 years!) recompile a bunch of utilities I wrote myself, and reinstall lots and lots and lots of software, so it may be a while before I actually try to move over to the new box.

    • @beeldbuijs1003 · a year ago · +2

      The 'trash can' Mac Pro is from 2013. That is 9 years ago, not 14. I type this on a 14-year-old Mac Pro: the 2008 model, aka the Mac Pro 3,1. Is that what you meant?

  • @BrianDavids · a year ago · +5

    Great deep dive. Thank you Anthony!

  • @officaldjflo · a year ago · +24

    They brushed Anthony's eyebrows, hit him with some blush and warmed his face. Some rosiness on his lips might be a little much, but I like his hair a lot better now. Overall 8/10, I'd smash.

    • @KXKKX · a year ago

      Anthony is so freaking talented but I can’t stop worrying about his health when I see his videos. He’s on his way to an early grave and that would be a great loss.

    • @YuProducciones · a year ago · +1

      @@KXKKX yeah man, hope he is losing weight... at least bit by bit.

  • @vonpotatostein · a year ago · +259

    As said before, Apple's silicon is a great breakthrough without a doubt. However, what makes them great is also their weakness: having a computer that I use for work with a single point of failure for RAM, CPU and GPU is a no-go for my needs (other people's mileage may vary). Same for the Mac-only SSD.

    • @skyhigheagleer6 · a year ago · +3

      Irrelevant argument when you remember laptops exist

    • @Marc-zi4vg · a year ago · +30

      @@skyhigheagleer6 So do you want it to be the status quo now?

    • @excarnator · a year ago · +17

      But doesn't the fact that it's an SoC mostly prevent such elements (which are no longer separate components) from failing? Like, when did your phone or tablet's CPU, GPU or RAM last fail you?

    • @Marin3r101 · a year ago · +19

      @@excarnator you are comparing parts that don't deal with massive heat. ULP parts can't compare to desktop components. Nice terrible analogy.

    • @MadsonOnTheWeb · a year ago · +5

      Not only that. The silicon itself is kinda too specific, even if changes are minimal. And it depends on its CPU cores, which are kinda weak for general purposes

  • @Viaexplore · a year ago

    Try an overclock (if possible) or a better heatsink on the M1 Ultra Mac Studio: does it perform better or the same?

  • @clangsison · a year ago · +3

    anthony, i love and appreciate the detail in your reviews. thank you! especially that excellent point about ProRes and the storage speed.
    but next time, please settle for an eyebrows max instead of an eyebrows ultra ;p

    • @K0ALA. · a year ago · +2

      I thought I was the only one who noticed this 🤣

  • @cromefire_ · a year ago · +63

    For video encode you probably want to test Intel's QuickSync too, because it should handle weird stuff like 2:3 a lot better than the notoriously constrained NVENC.
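
    A hedged sketch of that comparison using ffmpeg's hardware encoders (h264_qsv for QuickSync, h264_nvenc for NVENC, h264_videotoolbox on Macs); it assumes an ffmpeg build with each encoder and a source clip in.mov, and a real test would tune per-encoder quality flags:

    ```python
    import subprocess

    for encoder in ("h264_qsv", "h264_nvenc", "h264_videotoolbox"):
        # A flat bitrate keeps the comparison crude but roughly like-for-like.
        result = subprocess.run(
            ["ffmpeg", "-y", "-i", "in.mov", "-c:v", encoder,
             "-b:v", "8M", f"out_{encoder}.mp4"],
            capture_output=True)
        print(encoder, "ok" if result.returncode == 0 else "unavailable")
    ```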

    • @cromefire_ · a year ago · +6

      @@Prophes0r Nope, a lot of businesses and people use them for VoDs - for example YouTube. It's all hardware. Yes, CPU encoding is better (by far less than the 95% you quoted, more like 20-30%; also you are comparing H265 and H264 at the same QP values, which are different for every encoder, which is bullsh*t) and should be preferred, but that only holds true as long as you have the time and the resources (more like a Netflix use case, where you don't have that much content but everything is watched a lot). If you are in a hurry or have terabytes of content, it's basically all you've got. Hell, Intel just announced a GPU for CDNs, for example, called Arctic Sound M, and Google builds similar hardware themselves for YouTube. Similarly, many content creators, especially on the go, use QuickSync to not wait for ages. (Remember, especially with QuickSync there are parameters and presets you can tweak to make it look better than the default settings.)

    • @markjacobs1086 · a year ago · +2

      I don't think NVENC was really ever meant to be more than a decent way to stream content consumed on the same PC at a relatively low cost. For offline rendering it doesn't really seem worth it to sacrifice the image quality (unless that's somehow different, I never really used NVENC for that purpose).

    • @cromefire_ · a year ago

      @@markjacobs1086 Well, that is only a fair point as long as you don't have any time pressure (certain delivery dates, for example); in that case you just crank up the bitrate by a few percent and you're fine (for offline usage the tradeoff usually isn't quality but just size).

    • @markjacobs1086 · a year ago

      @@cromefire_ It's not like CPU encoding takes enormous amounts of time either (unless you really go ham on both the quality preset & bitrate).
      Maybe relevant if you own a dated CPU though.

    • @cromefire_ · a year ago

      @@markjacobs1086 Well, if you go for x264 fast or so (when it'll be fast), hardware might actually be better quality, and if you go for slower x264 you can also use hardware H265. Of course you could also use x265 then, but x265 is way slower (and everything is amplified on mobile). And you can apply the same logic to VP9 and (soon, when Arc is out one day) AV1, which will get you even better quality at even lower speed; software AV1 is < 1 fps with 16 cores and a bunch of RAM.

  • @mfnbpwnz · a year ago · +32

    Really unfair to ask Firaxis to port their game over when there would be no one to play it. I'm sure Civ VII will have native support.

    • @edwardritcheson1608 · a year ago · +1

      Hope Civ 7 will be out soon! Really looking forward to it

    • @ekinteko · a year ago · +1

      Imagine something like GTA 6 getting ported to the Apple ecosystem.
      They've got some serious performance with those CPU-flash-GPU combos, and their semi-native software in iOS/macOS (Swift and the Metal API) is snazzy.
      With someone like Rockstar, who spend years and do many optimisations, you could have XB1-level on an A11/iPhone Max, XB1X-level on an A12Z/iPad Pro, XsS-level on an M1/MacBook Air, XsX-level on an M1 Pro/MacBook Pro, and lastly gaming-PC-level with the M1 Max and M1 Ultra chipsets on the Mac Studio.
      Just imagine taking all your next-gen gaming with you wherever you are, without needing to cloud stream it, and it scaling its graphical fidelity to the thermal profile of your device. That would be so cool.

  • @exodous02 · a year ago

    I wanted a Mac when these were out, and the more time goes by, the more I'm glad I went with the Mac mini. I'm not a professional though; longer compile and render times are annoying but don't cost me $$.

  • @jusch5937 · a year ago · +3

    Man

    I love having Anthony as a host. He‘s just so calming to listen to.

  • @bobsykes · a year ago · +16

    Such a useful review/comparison. I came to the same conclusion that you did, a really loaded 14” M1 Max MacBook Pro makes the most sense between all the “pro” M1 machines, and the super low power draw pays off the most in that application, too. An astonishing display makes it even better.

    • @Skullet · a year ago · +1

      I think it depends on whether you need the mobility of a laptop; if you only ever use it at a desk then I personally wouldn't buy a laptop for that, but other people's mileage may vary here.

    • @smellyghost · a year ago · +1

      Yeah I love my 14” M1 Max MacBook Pro. Best combination of power and portability.

  • @Luke-A · a year ago · +148

    Apple: "We don't cherry-pick our data, we just ignore everything that we don't put our badge on"

    • @ultraL2 · a year ago · +6

      In fairness, they never said they don't cherry-pick

    • @kalmenbarkin5708 · a year ago · +7

      @@ultraL2 in fact they literally say they do. They call it “selected industry metrics”

    • @ultraL2 · a year ago

      @@kalmenbarkin5708 re-read what I wrote

    • @kalmenbarkin5708 · a year ago · +5

      @@ultraL2 I read it right the first time. I'm saying not only do they not say they don't, they literally say they do

    • @pirojfmifhghek566 · a year ago · +3

      "But... but... look at all of our synthetic benchmarks! You can fit so many synthetic benchmarks into this bad boy." *slaps cube*

  • @bizjakboris · a year ago

    Great video. What about a DaVinci Resolve noise reduction test at the 100% setting? I'm interested in how it compares to the 3090.

  • @flokisgiggle6912 · a year ago · +3

    Anthony’s skin is glowing. He looks great :)

  • @AJ-zl8bz · a year ago · +70

    LTT, I have said it before: I love me some Anthony performance breakdown vids. It is fun, calming (Linus), and informative. Can't wait for the new vids from the new department!

  • @marisakirisame8543 · a year ago · +54

    Something few people have mentioned: the M1 Ultra has 114 billion transistors, while the RTX 3090 has 28.3 billion and a Ryzen 5950X has 19.2 billion (the transistor count of the 12900K is not public). What Apple's doing is kinda trading transistor count for power efficiency, especially in targeted workloads such as encoding.
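
    Worked out with the figures quoted above:

    ```python
    m1_ultra = 114e9
    rtx_3090, ryzen_5950x = 28.3e9, 19.2e9   # counts quoted in the comment
    ratio = m1_ultra / (rtx_3090 + ryzen_5950x)
    print(f"{ratio:.1f}x the combined GPU+CPU transistor count")   # ~2.4x
    ```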

    • @MyrKnof · a year ago · +7

      It's an absolutely HUGH MONGUS chip. When you get a reduced instruction set, some fixed-function HW, and are 100% in control of both HW and OS, you can do these kinds of things on low power budgets.

    • @utubekullanicisi · a year ago · +3

      Apple prefers to increase the TDP of their chips for higher-TDP products (products with bigger thermal envelopes) by going wider with silicon, instead of increasing frequency on the same die (the M1 Ultra's total die surface area is ~920mm2). That is what affords them that efficiency. For the past 10 years they've always chosen to keep frequency low and silicon size big. The M1 Ultra's performance cores boost to a mere 3.23GHz, and yet their performance is roughly in line with Zen 3 cores clocked at ~5GHz. The GPU cores boost to only 1.3GHz. Their IPC on all fronts is off the charts. Granted, as Anthony says, the prices match what they're offering.
      And yet, it seems like they're not stopping with the Ultra, as a chip that has exactly the hardware of 4 M2 Max dies (codenamed Rhodes-4C) is rumored for the Apple silicon Mac Pro, and if we assume linear transistor scaling from M2 -> M2 Pro -> Max, that would have *270 billion* transistors. At this point it seems like Apple will be the first chip maker to hit the 1 trillion transistor mark.

    • @marisakirisame8543 · a year ago · +6

      @@utubekullanicisi IDK which approach is better - more transistors (cost) and lower wattage, or fewer transistors and more wattage. Some of my friends in electrical and computer engineering think the Apple M1 is "wasting sand" LOL

    • @utubekullanicisi · a year ago · +4

      @@marisakirisame8543 With Apple's approach you will get more efficiency but higher price, with most others' approach you will get lower efficiency but lower price. Apple's motto has always been "don't care about the BoM (bill of materials), but build the best thing that the current technology allows", so at least they're doing something that match their motto. I'd say neither approach is "the best", but depending on what you care about the most, one of them can be better for you.
      In any case, I would try to be open minded about both approaches.

    • @utubekullanicisi · a year ago · +4

      @@SpartanArmy117How is their approach "build the most average thing in the market and charge the most you can" when the newest MacBook Pros have class leading miniLED displays that have more dimming zones, brightness, contrast ratio, etc. than any other laptop display, class leading speakers that have more dynamic range than any other laptop speaker, one of the best if not the best webcam hardware with the best image processing thanks to the M-series chips deriving the class leading ISP from the A-series chips for the iPhone, and a class leading trackpad? I could go on and on about the hardware advantages that Apple has in various product platforms that don't necessarily show up in spec sheets. They might have higher profit margins on their products than anybody else (though Apple's reported profit margins that are supposedly around 30% seemingly suggest otherwise), but I definitely disagree that their approach is "build the most average thing and charge as much as you can", it's more like "build the most over the top thing you can with unnecessarily expensive components and charge as much as you can for it".

  • @DanielLavedoniodeLima_DLL · a year ago · +29

    I'm glad you mentioned the suitability for hot climates, because this is often glossed over in reviews. Living in Brazil, an Intel notebook in a small room with a second display and intensive usage means I need the AC on almost all the time if I want to be comfortable in the room, while using an M1 machine is as if it was nothing. This was one of the main reasons why I switched from a Dell XPS 13 to a MacBook Air M1

    • @Cat-kp7rl · a year ago · +4

      Same. Here in Phoenix, AZ, where it has already hit 114F (44.5C) this summer, my gaming PC will heat up an entire room noticeably hotter than the rest of the house, and besides the PC's own power consumption, the AC runs harder, further raising the power bill.
      Yet with my M1 Max I can literally work on it outside*. My heavier workloads consist of running Altium (EDA software for PCB design) or PathWave ADS/EMPro for EM field solving, in Windows 11 (Parallels VM).
      *Why work outside? When my kids are swimming/playing in the yard.

    • @ViniSDL · a year ago

      The hard part is buying from Apple with the dollar at 5 reais and Apple selling for almost double the direct-conversion value

  • @Isamu27298 · a year ago

    I would love to see some audio production tests in the future. When I think of Mac users, the first two groups that come to mind are designers and audio producers

  • @hi_tech_reptiles · a year ago · +33

    Would have loved to see an R9 system for comparison, including power draw. Mostly to satisfy my love of graphs and data, but still, other reasons too.

  • @chrisricetopher21 · a year ago · +172

    great work… you're the GOATs of all things tech. Thanks for all the ridiculously hard work you put in.

    • @atharvtyagi3435 · a year ago · +4

      These people run three tech channels, posting quality content constantly on all of them. They are amazing.

    • @Dionyzos · a year ago · +3

      This video was pretty good, but many other LTT reviews need work to be technically on par with some other channels. I hope the lab fixes that.

    • @leosalonen1564 · a year ago

      @@Dionyzos they are an entertainment tech channel rather than a technical tech channel like GN.

  • @alexsch5583 · a year ago

    Looking forward to your videos on the M2s in the Mac mini!

  • @astruxium · a year ago

    Always enjoy episodes with Anthony as the host; he gives some great insights in a very personable and understandable way. Really interesting video 👍

  • @AshtonZee · a year ago · +33

    Not gonna lie Anthony, you just educated me on a bunch of different things: the difference between SSD speed and CPU speed, thermal modules, names of different platforms to test your own PC, copper cooling, heat output, and power supply testing.
    Thanks again.
    Once again, your calm voice has made me understand benchmarks I wouldn't be able to understand by myself.
    Thanks.

  • @bragemogstad7124 · a year ago · +16

    Thanks for a great review with attention to detail and depth of knowledge. A fair and square battle between the titans, PC & Mac. Really enjoyed it. I expected to see faster read/write speeds for the SSDs on the 12th gen Intel PC - approx. 20% faster. Perhaps the SSD class was a bit low for this config.

  • @brendonv101 · a year ago · +1

    Anthony's reviews are always top notch, definitely the most detailed of the entire crew. Please never leave. 😂

  • @cosmic3689 · a year ago · +2

    By the way, the WoW visual artifacting on that fight occurs on Windows too. It's a pretty major bug somehow introduced in a recent patch, and it kills you a decent amount on that raid fight!

  • @TheJackiMonster · a year ago · +19

    Wait, couldn't you also run the compile benchmarks compiling for the same architecture? Otherwise this lead could be architecture-specific, coming from different optimization flags or even from parts of the code being skipped because of architecture-specific macros.
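
    One way to control for that: build the same sources for both targets from one machine, so architecture-specific macros and flags are pinned per target. A sketch using clang's --target flag (SDK setup and flags are illustrative):

    ```python
    import subprocess

    for target in ("arm64-apple-macos12", "x86_64-apple-macos12"):
        arch = target.split("-")[0]
        # Same source, same flags; only the target triple changes.
        subprocess.run(["clang", f"--target={target}", "-O2",
                        "-c", "main.c", "-o", f"main_{arch}.o"],
                       check=True)
    ```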

    • @yahgent · a year ago · +2

      I have no idea. I don’t use Apple products, play no video games, and I am very unfamiliar with this kind of technology.

    • @yoted · a year ago · +1

      I’m curious what their compiler flags were

    • @jimemmonstein847 · a year ago

      They're running Intel-native code on the Mac and the Intel machine is still getting thrashed at 10% of the power draw.
      In 3 more M-chip iterations, it'll be worth it.

    • @alexandrebelair4360 · a year ago

      @@jimemmonstein847 In some applications, sure.

  • @NaviUpgrade · a year ago · +59

    Thanks for taking on this difficult task! I was super nervous when the Ultra came out that my fully built 5950X and 3080 Ti was obsolete. This makes me feel much better about my render machine's solid performance... Just wish Adobe would optimize for Zen 3.

    • @joshuareveles · a year ago · +3

      Just use Davinci and never go back lol

    • @kravvormagagor9595 · a year ago · +6

      The reasons are entirely self evident and obvious. He literally said "render machine". Plus it's a joke

    • @bilalsadain · a year ago · +5

      Just because a newer model comes out doesn't mean your current model suddenly becomes bad.

    • @starrims · a year ago

      @@joshuareveles DaVinci doesn't have good support for Ryzen

    • @joshuareveles · a year ago · +1

      @@starrims says who? I've seen countless high-end editing rigs run flawlessly with DaVinci

  • @abrarwasi1371 · a year ago · +1

    Mr. Sebastian, can we get a vid where some of the engineers at LMG are tasked with taking a console/NUC and improving its dimensions, cooling and performance? Just saw a vid of DIY Perks' slim PlayStation 5, and I think you guys could do the same thing! Great vid btw, props to Anthony for being great as always

  • @initialb123 · a year ago

    Thanks for wrapping that all up for me, lots of info well presented, ticks every box. Have yourself an interaction comment and what not.

  • @dearestdennis · a year ago · +46

    Them brows on fleek Anthony! ♥

  • @jamo8154 · a year ago · +15

    Not looking great for the M1 Ultra when next-gen GPUs are soon to be released 👀

    • @BrucyJuicy
      @BrucyJuicy Pƙed rokem

      Nvidia has a 4 year cycle so every 4 years they drop a new generation card same will happen again we are now at 2 so 2years left for release

    • @TalesOfWar
      @TalesOfWar Pƙed rokem +5

      So are next-gen M-series chips. Remember you won't need a 1000W PSU to run it either lol.

    • @i_grok_u2902
      @i_grok_u2902 Pƙed rokem +2

      @@BrucyJuicy They're releasing the new 4000-series cards this fall. Plus Nvidia releases every 2 years, so your statement was totally wrong.

    • @sean8102
      @sean8102 Pƙed rokem

      They don't really compete with each other. Plus, of course, Apple is already working on their next-gen products.

  • @northlyte
    @northlyte Pƙed rokem +2

    I missed Anthony, glad to see him back! One of my favourite hosts!

  • @JakeMetz503
    @JakeMetz503 Pƙed rokem

    A recent basic test showed AVB functionality via the on-board ethernet of an M1 Studio. I'll be doing more extensive testing next week.

  • @TheMotlias
    @TheMotlias Pƙed rokem +38

    I'd personally put the M1 against the 5950X for the compile test; that thing is consumer and a beast for compiling in my experience. Then again, if you're talking about prosumer or professional workstations, why not test it against a Threadripper?

    • @TheMotlias
      @TheMotlias Pƙed rokem +13

      @@nobodylmportant It's just that if they claim it's the fastest computer for $4,000, then you have to test it against the very best you can get for $4K.

    • @imranzero
      @imranzero Pƙed rokem +3

      @@TheMotlias A half-decent Threadripper is gonna cost you $2,000+.

    • @pirojfmifhghek566
      @pirojfmifhghek566 Pƙed rokem +1

      Yeah... and the M1 Ultra they used in these tests is already $5800.

    • @Teluric2
      @Teluric2 Pƙed 5 dny

      Because it would make M1 look like trash.

  • @Greerere
    @Greerere Pƙed rokem +49

    I'd be curious to see an underclocked PC vs. Mac on power draw. If Intel have shown us anything, it's that half of your power is going into the last few percentage points of performance.
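    For anyone who wants to test that themselves, here's a minimal sketch (assuming a Linux box exposing the standard intel-rapl powercap interface and run as root; the 125 W cap is just an illustrative PL1 value):

        # cap the CPU package power via the Linux intel-rapl powercap
        # interface, then measure average package power over a window
        import time
        from pathlib import Path

        RAPL = Path("/sys/class/powercap/intel-rapl:0")  # package domain

        def set_long_term_limit_watts(watts: int) -> None:
            # constraint_0 is the long-term (PL1) limit, in microwatts
            (RAPL / "constraint_0_power_limit_uw").write_text(str(watts * 1_000_000))

        def package_energy_joules() -> float:
            # energy_uj is a monotonically increasing counter in microjoules
            return int((RAPL / "energy_uj").read_text()) / 1_000_000

        set_long_term_limit_watts(125)
        start = package_energy_joules()
        time.sleep(60)  # run the benchmark of interest here instead
        print(f"avg package power: {(package_energy_joules() - start) / 60:.1f} W")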

    • @robertfullard5646
      @robertfullard5646 Pƙed rokem +8

      This, all day, every day. The 12900K is definitely a quick chip, but its power budget and money budget are way in excess of what most would need. A lower-priced, lower-power chip could yield some deeply interesting stuff: halve the price of the PC, run it stock, and see where you stand. Probably 85% of the performance for half the cost. It's all well and good, and makes good videos, when you test the crazy expensive against the crazy expensive, but most people can't come close to affording that. The priority of entertainment over information needs to change. I'd be very happy to see more realistic stuff with a conclusion saying: this is twice the price of that, but you'd be mental to buy the more expensive one, since you can get most of the performance and buy a second one for giggles for the same price.

    • @giornikitop5373
      @giornikitop5373 Pƙed rokem +3

      True. A 12700K would be more than enough: still 8 P-cores, just half the E-cores (4), and I'm not sure how useful those 8 E-cores are anyway. Most of the power budget goes to turbo boost, from ~4.3 up to 5 and 5.2 GHz; with careful settings in the BIOS you can keep most if not all of the performance at way lower power consumption. But again, they compare CPUs as they come from the factory, so, meh. The M1 would still win on power draw, but the difference wouldn't be so dramatic. For the GPU, though, there is not much they can do...

    • @danieloberhofer9035
      @danieloberhofer9035 Pƙed rokem +8

      If Intel have shown us anything, it's go get a Ryzen ;)

    • @ericbauer4559
      @ericbauer4559 Pƙed rokem

      Well, just go look at PC laptops then.

  • @shrimperlincs3395
    @shrimperlincs3395 Pƙed rokem +9

    It's great to see more chip makers. The more the better.

  • @datainsight1724
    @datainsight1724 Pƙed rokem +2

    No TensorFlow testing? The unified memory and 128 GB would've been interesting to compare against the 24 GB 3090, especially with GANs, which effectively scale linearly with VRAM.
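    Back-of-the-envelope for why that matters; a rough sketch (assuming fp32 training with Adam, and the 1B parameter count is just illustrative):

        # each fp32 parameter trained with Adam costs roughly
        # weight + gradient + two optimizer moments = 16 bytes,
        # before activations are even counted
        params = 1_000_000_000                 # illustrative 1B-param GAN
        bytes_per_param = 4 * 4                # weight, grad, m, v (fp32)
        print(f"~{params * bytes_per_param / 2**30:.0f} GiB")  # ~15 GiB
        # activations and batch size push that well past a 24 GB 3090,
        # while 128 GB of unified memory still has headroom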

  • @Blackened
    @Blackened Pƙed rokem +24

    I'd love to see Linux data points for the Intel platform, especially for the dev-related tests.

    • @briansrensen6603
      @briansrensen6603 Pƙed rokem +1

      That would be nice, and a cleaner comparison. Would also like to see a complex program with both memory and CPU bottlenecks at different parts of the simulation (e.g. a weather model like WRF or an ocean model like GETM). That could pinpoint some limitations and/or strengths of the platform.

    • @bubblodetechmore5512
      @bubblodetechmore5512 Pƙed rokem +1

      Costly experiment, lol.

    • @zekicay
      @zekicay Pƙed rokem

      Yes, especially since Windows is much slower when closing active processes.

    • @mbsfaridi
      @mbsfaridi Pƙed rokem

      Yeah, using Intel’s Clear Linux distro.

  • @burntalive
    @burntalive Pƙed rokem +51

    Would like to see the comparisons to AMD CPUs for productivity.

    • @Neopulse00
      @Neopulse00 Pƙed rokem +2

      I second this. APUs in my case though.

    • @directlinkrexx4409
      @directlinkrexx4409 Pƙed rokem

      Why??

    • @MethmalDhananjaya
      @MethmalDhananjaya Pƙed rokem +6

      @@directlinkrexx4409 Because AMD fans always want to see AMD being superior to everything else.

    • @57thStIncident
      @57thStIncident Pƙed rokem +6

      Sure. For some of these loads, and at this high price point, I wonder how Threadripper/Threadripper Pro compares.

    • @Tyrim
      @Tyrim Pƙed rokem +1

      Yeah, I would love to compare this to a Threadripper Pro + Quadro cards, which are (a lot) more power efficient; it would be totally different results.

  • @zachcygan
    @zachcygan Pƙed rokem

    The blinking that Anthony talks about around 4:25 is a graphics driver issue. I have played WoW for over a decade and have only experienced it recently, after GPU driver updates. It has gotten better over time, but it still happens from time to time.

  • @ngroy8636
    @ngroy8636 Pƙed rokem +1

    Haha, I want to see TensorFlow though (maybe you want to train on the M1 instead of a 6-card A100 setup :p?). Though RAM/IO matters a lot to utilize the full computational capacity, and cuDNN optimization is also magical! I dunno what's a fair test, but perhaps MKL + cuDNN vs tensorflow-metal + tensorflow-macos on some DNN model with ~1e9 parameters?
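    Something like this could be a starting point; a minimal sketch (model size, batch, and epoch counts are made-up illustrative numbers; it assumes TensorFlow picks up CUDA/cuDNN on the PC and the tensorflow-metal plugin on the Mac):

        # time a few training epochs of a dense model on whatever
        # accelerator TensorFlow finds on each machine
        import time
        import tensorflow as tf

        model = tf.keras.Sequential(
            [tf.keras.layers.Dense(4096, activation="relu", input_shape=(4096,))]
            + [tf.keras.layers.Dense(4096, activation="relu") for _ in range(6)]
            + [tf.keras.layers.Dense(10, activation="softmax")]
        )  # ~120M parameters; scale the layer count up toward 1e9
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

        x = tf.random.normal((8192, 4096))
        y = tf.random.uniform((8192,), maxval=10, dtype=tf.int32)

        model.fit(x, y, batch_size=256, epochs=1, verbose=0)  # warm-up
        start = time.perf_counter()
        model.fit(x, y, batch_size=256, epochs=3, verbose=0)
        print(f"3 epochs: {time.perf_counter() - start:.1f}s")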

  • @anishgupta7463
    @anishgupta7463 Pƙed rokem +8

    I was waiting for this content 😍😍

  • @JuanPablo-ho7fg
    @JuanPablo-ho7fg Pƙed rokem +11

    Great review. Thanks Anthony.
    Question: Will there be any mention of Asahi Linux in the future? It's in a usable state right now, and they're developing their own GPU driver, which should be ready sooner rather than later.

    • @Haskellerz
      @Haskellerz Pƙed rokem +2

      Ubuntu on Nvidia Orin (12 ARM cores + 2048 CUDA cores) is pretty stable because Nvidia specifically wrote custom drivers for it.

  • @Proxima_X
    @Proxima_X Pƙed rokem

    I ran the Gathering Storm benchmark on an i5-7200U with HD Graphics 620 (8 GB single-channel 2133 DDR4). It took my old laptop around 2:40 to finish, and it cost $400 back in 2017. So I guess neither is a gaming machine.

  • @JupiterRexMusic
    @JupiterRexMusic Pƙed rokem

    Was tempted to get one because I work in music, so I use Logic, Ableton, and Dorico extensively, but the one thing that holds me back is the non-upgradability. Sure, I'll start off strong, but three or four years down the line I might have to be shelling out for something with more RAM.
    I'm holding out for the Mac Pro, which is fully upgradable and is reportedly going to have an M2 Extreme chip.

  • @yuryzhuravlev2312
    @yuryzhuravlev2312 Pƙed rokem +39

    Anthony! You can't compare compilation for x86/AMD64 and ARM, because the compiler is doing completely different work. x86 has a much more complicated instruction set, so the compiler has to do much more, and many more optimizations exist for x86! ARM machine code is also a different size than x86.
    To compare, you must build x86 Chrome by cross-compiling on the M1 chip.
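    To make that concrete, a minimal sketch of timing same-target compiles on both machines (assuming clang on PATH; the source file is a self-contained stand-in, since a full Chrome cross-build would also need an x86 sysroot):

        # time compilation of the same file to the SAME target on both
        # machines, so the M1 cross-compiles to x86-64 instead of arm64
        import subprocess, time
        from pathlib import Path

        Path("unit.c").write_text("int add(int a, int b) { return a + b; }\n")
        cmd = ["clang", "-O2", "--target=x86_64-linux-gnu", "-c", "unit.c"]
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        print(f"x86-64 compile took {time.perf_counter() - start:.3f}s")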

    • @szaszm_
      @szaszm_ Pƙed rokem +6

      According to replies on my top level comment, even cross-compilation adds some overhead. The only comparable numbers seem to be those produced on the same architecture, targeting the same architecture, using the same OS and toolchain versions.

    • @Teluric2
      @Teluric2 Pƙed 5 dny

      Just speculation. Can you compile both and run a real-life test?

  • @VMYeahVN
    @VMYeahVN Pƙed rokem +30

    The hardware itself seems really nice; we're just still waiting on software support/optimization for a lot of things. Hopefully the developers of all that software pick up the pace.

    • @alexandrebelair4360
      @alexandrebelair4360 Pƙed rokem +3

      Devs won't do shit lol.

    • @sean8102
      @sean8102 Pƙed rokem +4

      @@marcogenovesi8570 Unreal Engine and Unity already fully support Apple silicon, but the market for Mac gamers is so tiny most devs still don't bother, especially if they use an in-house engine. From what I can tell, there are literally two games that are both macOS-native and Apple-silicon-native.

    • @VMYeahVN
      @VMYeahVN Pƙed rokem

      @@marcogenovesi8570 If all you're focused on is gaming, then sure. But for pretty much any other software, support is either already there, coming soon, or being worked on.

    • @VMYeahVN
      @VMYeahVN Pƙed rokem

      @Zack Smith You sound super defensive. We weren't making the comparison, but go off though.

    • @lalnuntluangachhakchhuak5767
      @lalnuntluangachhakchhuak5767 Pƙed rokem

      Apple has a media engine which does well in video editing. Since most YouTubers are video editors, it's no surprise there's hype on YouTube. But video editing is just a fraction of the productivity tasks in the actual computing world. In graphical computing tasks they will never match the ray-tracing cores embedded in modern graphics cards.

  • @drdelewded
    @drdelewded Pƙed rokem

    I want an M2 Studio, so I'll wait till it exists and all my programs are compatible. Also a rackmount kit for it; I'm sure Sonnet is working on one.

  • @ericumforgotsorry5423
    @ericumforgotsorry5423 Pƙed rokem

    It's nice to see you, Anthony. I enjoy your videos as much as a Linus video. Love you too, Alex!

  • @contournut5726
    @contournut5726 Pƙed rokem +3

    Dev here. I'd have to know about the Chromium build system and what compiler it's using on each platform.
    Compiling is pure CPU. If you really want to get at raw compilation performance, load Linux on both machines and take the times; it won't matter that there's no GPU support in Asahi yet. Although even then you'll either be cross-compiling on one, or you'll be compiling different outputs, and it could easily be the case that the optimizer for ARM is more or less complex :/.
    But mostly, thanks for doing a compilation benchmark :). And double thanks for not taking the "but it's macOS so you don't actually have to compare it to modern machines" cop-out. I'm so sick of that. Some of us just want to know if Apple silicon is genuinely amazing.
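    One cheap way to keep such runs comparable is to record the toolchain alongside the timings; a tiny sketch (again assuming clang on both machines):

        # record compiler version and default target next to any compile
        # timing, so numbers from the two machines can be compared honestly
        import subprocess

        for args in (["clang", "--version"], ["clang", "-print-target-triple"]):
            print(subprocess.run(args, capture_output=True, text=True).stdout.strip())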

  • @RalphonzoMcgoo
    @RalphonzoMcgoo Pƙed rokem +12

    That WoW clip of the flickering building happens on PC too; I think it's more of a WoW problem than a PC/Mac thing.

  • @kareltimotheeheritier495

    Regarding the WoW bench, we had the problem with the disappearing assets on PC too. I game on both an M1 16-inch and a 9700K with a 3080 Ti. It was a specific problem in Sepulcher this last week on the Halondrus fight. Not specific to the M1.

  • @yuxuanhuang3523
    @yuxuanhuang3523 Pƙed rokem

    The only place I would think of using a Mac Studio is in video editing and encoding. Many studios I know have got a few MacBooks for their timeline editors and a Studio for final color grading and export. Their VFX teams still work on x86 because of compatibility issues and performance. For example, dropping a denoise on footage in DaVinci Resolve on M1 would make the playback fall to 3 fps, while a mid-tier Windows machine can still manage around 10.

  • @berndkemmereit8252
    @berndkemmereit8252 Pƙed rokem +3

    Anthony, the walking knowledge base... just incredible. I love his reviews; I never miss an Anthony review.

  • @laraava
    @laraava Pƙed rokem +18

    We love Anthony!
    Always more Anthony!
    “We only have one Anthony” -Linus
    “Anthony tech tips!!!”

  • @DevonElmore
    @DevonElmore Pƙed rokem +1

    At about 4:22, that's not the Mac's fault - that graphical error on that encounter is a known bug and exists on Windows/Linux as well.

  • @Owy.
    @Owy. Pƙed rokem

    What type of monitor does the M1 Ultra use? I thought all iMacs are just all-in-ones, so you can't, like, hook one up to a desktop / normal monitor screen.

  • @DuyNguyen-yx2vd
    @DuyNguyen-yx2vd Pƙed rokem +13

    I wish there was an easy way to differentiate the "Higher is better" vs "Lower is better" graphs. By the time I look down to check which one I'm looking at, we're already onto the next bullet point.

    • @jadamsnz
      @jadamsnz Pƙed rokem +1

      Maybe an arrowhead overlay on the graph's bars pointing in the direction of better...?
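      Something like an explicit "better" arrow is easy to add; a minimal sketch (assuming matplotlib, with made-up numbers):

          # annotate a "lower is better" bar chart with an arrow pointing
          # toward "better", so the direction is readable at a glance
          import matplotlib.pyplot as plt

          fig, ax = plt.subplots()
          ax.barh(["M1 Ultra", "12900K + 3090"], [41.2, 36.8])
          ax.set_xlabel("seconds to export (lower is better)")
          ax.annotate("better", xy=(0.05, 1.04), xytext=(0.30, 1.04),
                      xycoords="axes fraction",
                      arrowprops=dict(arrowstyle="->"), va="center")
          plt.show()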

    • @reappermen
      @reappermen Pƙed rokem

      Hm, I wonder if you could make them left/right-bound respectively. Very obviously different, and you could read them the same at a quick glance. I.e., no matter whether higher or lower is better, the 'better' bar is the one with its endpoint further to the left (or right, depending on which one you make left/right-bound).

  • @memadmax69
    @memadmax69 Pƙed rokem +17

    I've seen WoW do the flicker thing on Windows occasionally as well.
    Apple silicon has big potential, but as the owner of an M1, I'm thinking M3/M4 and better software support before I pull the trigger on Apple silicon again.

    • @Crowski
      @Crowski Pƙed rokem

      I'm glad I read this. Been playing WoW since vanilla. Been wanting a MacBook, but so far it's been like getting into an ice bath lol.
      I'm waiting to see what the M3 has to offer.

    • @memadmax69
      @memadmax69 Pƙed rokem +1

      @@Crowski I've been playing WoW since vanilla too lol. Anyway, WoW runs awesome on the M1 because Blizzard decided to support macOS/ARM fully; it's buttery smooth on max settings. Unfortunately, the vast majority of games out there don't, or don't work at all, and you have to work around it with Parallels or something similar. Hence the second part of my post, "better software support". I'm only really considering M3/M4 because M2 is really just an improvement of M1, sort of like what Intel was doing with its processors (tick-tock). Still, enormous potential if Apple would just pull its head out of its ass with the platform and open it up a bit.

  • @lgf30022
    @lgf30022 Pƙed rokem

    I came to the same conclusion about the Mac Studio with Ultra.
    It was overkill for anything but the most intensive 3D modelling/rendering workloads, which is why I went with the Mac Studio with M1 Max. Saved a bunch of money, and added the OWC Gemini, which gave me 24 TB of RAID 0 and more Thunderbolt/USB and DisplayPort expansion. Didn't buy this for games but for content creation, YouTube and Zoom.
    The software does have to catch up, though. I find many cores not being utilized, so we sadly have unused resources when I'm running Resolve, Photoshop and Lightroom all at once (with email, browser, etc. running as well). But I love this little thing. Simple, elegant, and does the job. Forget gaming; it's a Mac, not a gaming rig. For that, gimme a good PC with NVIDIA graphics and I'm a happy camper.
    Good review, and I agree on many points here!

  • @pixels_per_inch
    @pixels_per_inch Pƙed rokem +1

    I wonder how much of the idle power usage is from the RGB

  • @tehbeard
    @tehbeard Pƙed rokem +63

    Curious to see what that Chrome compile would be like on a high-end Ryzen with more cores than the i9.

    • @MrIzzy5466
      @MrIzzy5466 Pƙed rokem +15

      Oh yeah. That's one benefit of the PC they didn't show: other brands of hardware. An R9 5950X would be interesting to see in place of the i9.

    • @saricubra2867
      @saricubra2867 Pƙed rokem +6

      Code compilation likes memory bandwidth more than core count (source: Hardware Unboxed). Intel currently wins against AMD and loses against Apple. If Zen 4 supports quad-channel memory on all platforms, it will beat Intel (if they keep dual-channel for the Core series).
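      A crude way to see the bandwidth side of that claim; a minimal sketch (assuming numpy; nothing like a proper STREAM run, just a ballpark):

          # copy a large array and derive an effective read+write bandwidth
          import time
          import numpy as np

          a = np.ones(2**28, dtype=np.float32)       # 1 GiB of data
          start = time.perf_counter()
          b = a.copy()
          dt = time.perf_counter() - start
          print(f"~{2 * a.nbytes / dt / 1e9:.1f} GB/s effective (read+write)")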

    • @imranzero
      @imranzero Pƙed rokem

      The "Ryzen more coresℱ" meme is dead.

    • @ryanthompson3737
      @ryanthompson3737 Pƙed rokem +1

      I don't know why people claim Ryzen has more cores than an i9 when all they really did was merge high-end Ryzen with their Epyc chips. Threadripper is not a gaming CPU, and has WAY more capability than ANY workstation user would need... it literally outperforms their own Epyc server chips in pure performance, and only lacks some minimal features such as cache and allowable RAM (though 2 TB of RAM seems plenty for any workstation user).
      Beyond that, there are rumors that next-gen Threadrippers will support dual-CPU setups, meaning the workstation user, 99% of whom are NOT simulating the physics of our solar system, will have the abilities of a small supercomputer.
      TL;DR: stop comparing Threadripper, a server CPU they purposely put in the Ryzen family for shits and gigs, to an i9, a properly named CPU for the categories of work it's able to do.

    • @ffwast
      @ffwast Pƙed rokem +10

      @@ryanthompson3737 cope harder intel

  • @swayamkrishnan8273
    @swayamkrishnan8273 Pƙed rokem +105

    My teacher once asked me to draw a graph. I copied the graph Apple showed in their presentation of the M1 Max beating the RTX 3090. I got a 0 on the test.

  • @daveh6356
    @daveh6356 Pƙed rokem

    Well spotted with Embree. Intel has hobbled the revered Cinebench for long enough, but no longer: around the time this video came out, a new Embree build was released opening up native SIMD on the M1. Looking forward to seeing a boost of 8-13% in Cinebench scores.

  • @thekwoka4707
    @thekwoka4707 Pƙed rokem

    I'm not sure about a benchmark for EVE, but I can say that even on a base M1 Pro it runs beautifully smoothly with all graphics maxed out.