128-core MONSTER Arm PC: faster than a Mac Pro!


Comments • 809

  • @fujinshu
    @fujinshu 6 months ago +549

    Also, about the "Ti" in the GPU name: NVIDIA pronounces it both ways. Jensen, the CEO, pronounces it as T-I (Tee-eye), while Jeff Fisher pronounces it as Ti (Tie/Ty).

    • @JeffGeerling
      @JeffGeerling  6 months ago +149

      The plot thickens!
      Internally I say T-I, but when I pronounce it out loud it comes out "Tie", so who knows lol

    • @zblurth855
      @zblurth855 6 months ago +44

      @@JeffGeerling I guess you need to send Red Shirt Jeff to Nvidia HQ so we may know the answer. Better not have another GIF situation.

    • @mbe102
      @mbe102 6 months ago +27

      @@JeffGeerling Well, it's originally Ti-tanium, isn't it? So it makes sense. But I've only ever heard Tee-Eye.

    • @JamesGillean
      @JamesGillean 6 months ago +7

      @@JeffGeerling I don't know if I can handle your "Tie" pronunciation, Jeff. It's like a punch to the ol' squeedily spooch.

    • @nathanielhill8156
      @nathanielhill8156 6 months ago +9

      @@JeffGeerling It used to be T-I, but they retconned it into "Tie". My personal belief is a Texas Instruments trademark got involved.

  • @QuentinStephens
    @QuentinStephens 6 months ago +255

    One thing on your RAM vs. core discussion: L3 cache requirements scale non-linearly with core count thanks to the increased incidence of L2 cache misses.

    • @JeffGeerling
      @JeffGeerling  6 months ago +61

      That's why the chip architecture is critical with more and more cores. AMD, Intel, and Ampere all seem to take slightly different approaches. I've enjoyed some of the Chips and Cheese articles on these new architectures!

    • @shanent5793
      @shanent5793 6 months ago +8

      Do increased L2 misses increase or decrease pressure on L3? If it's non-linear, then is it logarithmic, exponential, or polynomial?

    • @QuentinStephens
      @QuentinStephens 6 months ago +14

      @@shanent5793 It's non-linear and there's an exact formula. Let's say you have a 5% chance of a cache miss per core, so a 95% chance of a cache hit. The percentage chance of a cache miss with N cores is (1 - 0.95^N) * 100. Obviously the chance of a miss - that 5% - is dependent upon the workload. The more misses you have, the greater the pressure. And the fewer RAM channels you have, the greater the effect of L3 cache misses.

    • @shanent5793
      @shanent5793 6 months ago +6

      @@QuentinStephens That's just the chance of at least one miss. Multiple misses follow binomial probabilities, so their sum grows linearly: 128 cores are expected to have twice as many misses in total as 64 cores. Either way, more cores cause more L3 pressure, so why does the Ampere only have 16 MB, which is less than desktop CPUs with only 6 cores / 12 threads?

    • @QuentinStephens
      @QuentinStephens 6 months ago +14

      @@shanent5793 I'm not sure you're correct about the binomial part, but yes, I do agree that the 16 MB cache does seem rather low, especially when we have Epyc CPUs with 1 GB of cache for similar numbers of cores.
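
The two effects debated in this thread are easy to confirm numerically: the chance of *at least one* miss saturates as cores are added, while the *expected total* number of misses grows linearly. A minimal Python sketch, using the commenter's illustrative 5% per-access miss rate (not a measured figure for any real CPU):

```python
# Probability model from the thread above (the 5% per-access miss
# rate is illustrative, not a measurement).
P_MISS = 0.05  # chance a single core misses its local cache on one access

def chance_of_any_miss(n_cores: int, p: float = P_MISS) -> float:
    """Chance that at least one of n_cores misses: 1 - (1 - p)^N."""
    return 1 - (1 - p) ** n_cores

def expected_total_misses(n_cores: int, p: float = P_MISS) -> float:
    """Expected number of misses across all cores: scales linearly."""
    return n_cores * p

# "At least one miss" saturates near 100% long before 128 cores...
print(round(chance_of_any_miss(64), 4))   # ~0.9625
print(round(chance_of_any_miss(128), 4))  # ~0.9986
# ...but the expected total keeps doubling with core count.
print(expected_total_misses(64), expected_total_misses(128))
```

So both commenters are describing real curves; they are just different statistics of the same model, and it is the linear expected total that drives sustained L3 pressure.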

  • @IamTheHolypumpkin
    @IamTheHolypumpkin 6 months ago +130

    Honestly, I wouldn't be at all surprised if Valve told us tomorrow that they're releasing a fork of Box86 and Box64 built right into Steam to support all Steam games on ARM and RISC-V.
    Valve would be insane enough to do this, and there's no number 3 involved, so it's allowed.

    • @circuit10
      @circuit10 6 months ago +22

      It would make sense if they're considering using ARM for a Steam Deck successor, like maybe that new Qualcomm one that's meant to be really good?

    • @AlwaysBolttheBird
      @AlwaysBolttheBird 6 months ago +3

      It's one of the reasons I love my Steam Deck so much. Issue? Not in 2 hours haha

    • @KingVulpes
      @KingVulpes 6 months ago

      I don't know. CodeWeavers contacted Valve about built-in support for CrossOver in macOS Steam, and they still haven't done anything about it. (Source: I contacted CodeWeavers myself about it, and they said they did pitch the idea and that it's up to Valve.)

    • @mgord9518
      @mgord9518 6 months ago

      @@KingVulpes Because Valve's primary focus is on Linux, not macOS.
      Another thing is that CrossOver is a paid product. I find it highly unlikely that CodeWeavers was interested in just providing it to Valve for free without getting a cut; that's probably why Valve wasn't interested.
      Providing x86 emulation for ARM, however, could directly benefit Valve, as it would allow for future low-power-draw devices, although I'm not holding my breath.

    • @nempk1817
      @nempk1817 3 months ago

      The problem is not Steam; the problem is that you'll be using it to play only the simplest games, for the simple reason that it's ARM.

  • @BlackPanthaa
    @BlackPanthaa 6 months ago +154

    Threadripper 399x user here; they will never fix >64-thread usage in Windows. I've tried it all.

    • @WartimeFriction
      @WartimeFriction 6 months ago +66

      Sounds like it's time for you to do the free upgrade to a superior Linux-based OS

    • @ciprianrobo
      @ciprianrobo 6 months ago +101

      @@WartimeFriction You forgot the "I use Arch btw" as part of your comment

    • @mindrage92
      @mindrage92 6 months ago

      I think the culprit is the Win32 API function GetLogicalProcessorInformation, which only supports up to 64 processing units because it uses a 64-bit flag value (one bit per CPU). GetLogicalProcessorInformationEx is the more modern one.

    • @leonpano
      @leonpano 6 months ago +1

      Proton will have a maximum of 64 threads

    • @vgernyc
      @vgernyc 5 months ago +5

      Even Windows Pro for Workstations?

  • @SaltCollecta
    @SaltCollecta 6 months ago +96

    3 grand for a 128-core CPU. I remember when Intel used to charge 5 grand for a quad-core server. Lol, what an exciting time to be alive. I will buy one in a few years when it's stable and on the used market for a reasonable price.

    • @dzello
      @dzello 6 months ago +7

      The issue is the lack of support from software.
      Not enough stuff makes use of all the cores.

    • @SaltCollecta
      @SaltCollecta 6 months ago +5

      @@dzello I have a feeling that golang with a huge workload would do pretty well.

    • @DeltaSierra426
      @DeltaSierra426 6 months ago +4

      Yeah, lol. Can't get to 128 x86 cores at $3K even on Threadripper, either, unless it's used.

    • @dzello
      @dzello 6 months ago +1

      @@DeltaSierra426 Those limitations are definitely unfortunate.
      Making a powerful CPU by making it bigger with a bigger socket? Easy.
      Making a powerful GPU by making it bigger with a bigger socket? Easy.
      Even if we don't improve the technology, we can add more and make it bigger.
      But then...
      Games: I'll use 1/128 of your CPU and 1/3 of your GPU.

    • @kepler_22b83
      @kepler_22b83 4 months ago

      @@dzello I think making a program able to use the potential of this hardware isn't that hard; it's just that people don't usually do it. With time, and more and more complex software, this extra horsepower might be needed... though there's indeed a limit for consumer-grade applications, and crossing that limit is just being inefficient or lazy with your code.

  • @someguy9175
    @someguy9175 6 months ago +79

    I really want to see these in a consumer-level platform while keeping it upgradeable.

    • @iikatinggangsengii2471
      @iikatinggangsengii2471 6 months ago +5

      most people will be pleased even with half the quality; they kind of work well together

  • @KG4JYS
    @KG4JYS 6 months ago +100

    We're finally returning to the RAM situation we had a decade ago, when workstation motherboards had lots of RAM slots. My (now very old) Supermicro X8DAH+-F board has 18 (9 per CPU). IMO, the biggest problem with modern processors is the extremely limited number of PCIe lanes available. Look at chip specs over the years, and it's something that has steadily decreased. With Thunderbolt and NVMe, PCIe lanes are the most limiting feature on all my computers - even laptops.

    • @JeffGeerling
      @JeffGeerling  6 months ago +39

      Yeah; I have run into that on my Ryzen 7000 series desktop. There are few motherboards that even expose the lanes in a way I can fully utilize :(
      The nice thing with this Ampere chip is it has 128 lanes, and almost all are usable on this motherboard! Still always want more, for more IO :)

    • @arof7605
      @arof7605 6 months ago +5

      Still running a 4790K on my seedbox due to this. I haven't found a non-server mobo with 10 onboard SATA ports for spinning drives since that generation, for any other CPU I've bought.

    • @shanent5793
      @shanent5793 6 months ago +9

      128 PCIe 4.0 lanes is plenty; that's 512 GB/s full duplex, more than enough to saturate 6 channels of DDR4-3200 with only 154 GB/s half-duplex bandwidth. It's up to the motherboard or backplane designer to allocate them.

    • @NicolaiSyvertsen
      @NicolaiSyvertsen 6 months ago +7

      The issue is that PCIe lanes are used for M.2 slots and other onboard functions that didn't exist on boards 12 years ago. Back then those PCIe lanes mostly went to actual PCIe slots.

    • @GSBarlev
      @GSBarlev 6 months ago +2

      @@arof7605 My 4790K was a beast. Even though I was never able to overclock it, it ran my main computer for over half a decade, and its core performance was *never* the bottleneck.
      But I'm surprised you're still using it - how do you live with a mere 32 GB of RAM? (asked half-jokingly)

  • @danagoyette7932
    @danagoyette7932 6 months ago +46

    Something to note about NVIDIA's ARM binary drivers: they have driver library files for x86-64 and aarch64, but they don't have armhf driver libs for software running under Box86. That is, Box86 converts 32-bit Intel into 32-bit ARM, not into 64-bit ARM. For i386 games, you'd likely need to use an AMD GPU -- Polaris (RX 5xx) or older.
    One game I find very useful for checking the performance of GPUs on ARM is Veloren. It uses Metal on macOS, Vulkan on Linux, and Vulkan or DX12 on Windows (though there's no ARM Windows build).

    • @leonpano
      @leonpano 6 months ago

      But why do all Source games crash on Linux?
      I have an RTX A5000 and my platform is amd64.
      They crashed the same way as in this video, but on amd64, not arm64.

    • @frankmoras63
      @frankmoras63 6 months ago

      Quite a few of the games that failed were shooters with anti-cheat; might that be the common denominator?

    • @lucasrem
      @lucasrem 6 months ago +1

      @danagoyette7932
      What titles run well for you on ARM? All old DOS titles?

  • @Gaming_with_Martin
    @Gaming_with_Martin 6 months ago +65

    ARM is really making huge moves. I'm convinced that very soon they will have 6-core, 8-core, and 16-core lineups for consumers.

    • @adamschackart6859
      @adamschackart6859 6 months ago +5

      The Odroid N2+ is 6-core and the Orange Pi 5 is 8-core, both of which can be purchased today for relatively dirt cheap!

    • @GustinJohnson
      @GustinJohnson 6 months ago +19

      RISC-V is jumping into the fray. I am looking forward to getting my 64-core dev board in December. I am so happy to have this level of competition in the market again.

    • @Pasi123
      @Pasi123 6 months ago +7

      @@adamschackart6859 But they aren't something you'd put in a tower case, and they don't have a socketed CPU, socketed memory, or PCIe slots.

    • @DavidTMSN
      @DavidTMSN 6 months ago +2

      That's great for those using them for production, but are they going to be able to be clocked at the kinds of speeds we're seeing currently?

    • @ultimatedarkkiller7215
      @ultimatedarkkiller7215 6 months ago +1

      Actually, smartphone processors are ARM, and they are usually 6-8 cores, so yes, that already happened years ago lol

  • @23lkjdfjsdlfj
    @23lkjdfjsdlfj 6 months ago +9

    I appreciate the effort you make to provide lots of details.

  • @Insightfill
    @Insightfill 6 months ago

    1:47 LOVE the "18 minute pickup" at Micro Center. I've built both of my kids' gaming towers by picking out the parts, hitting "buy", and driving right over. Even picked up Dell XPS 13s for each of them the same way.

  • @orlie_dev
    @orlie_dev 6 months ago +16

    but can it run Crysis

  • @Daggenthal
    @Daggenthal 6 months ago +30

    This is so fucking sick, man. I love the development that ARM desktop / server cores have been making! I know we have other architectures as well (RISC-V) and it's awesome that they're all making strides, but to see this amount of progress now? Fuck yeah!
    I remember watching your older videos where you literally couldn't detect the GPU or even push anything out to the framebuffer, but now look at it :D

  • @digitalsparky
    @digitalsparky 6 months ago +1

    Exciting to see ARM gaining! Fantastic for servers (specifically high thread/process count web servers), etc.

  • @denvera1g1
    @denvera1g1 6 months ago +12

    11:00 I think this has been a problem in Cinebench since its inception. Originally it was only an issue for very niche 4- and 8-socket systems, but with EPYC, Threadripper, and Xeon Platinum (Cascade Lake) offering up to 56-64 cores per socket and 2-4 sockets, many cores started going unused in and after 2019.

    • @lucasrem
      @lucasrem 6 months ago

      @denvera1g1
      Intel is NOT at ARM's level!
      You need Apple, UNIX!

    • @denvera1g1
      @denvera1g1 6 months ago

      @@lucasrem We're not talking about performance, only core count. IIRC, when I built my 12-core dual-processor Xeon X5690 desktop, the current-at-the-time version of Cinebench only supported 16 threads, not 24.

  • @juleast
    @juleast 1 day ago

    Finally, someone who pre-spreads their thermal compound! 😃
    I've always seen so many people just leave it to squish itself out, but I learned from my dad, who has worked with computers for 20+ years, that pre-spreading is better.

  • @BaiFangLu
    @BaiFangLu 6 months ago +3

    Great video on components and benchmarks. Looks like you also have a lot of data on DIMMs; waiting for a new video on those too.

    • @JeffGeerling
      @JeffGeerling  6 months ago +1

      We'll see; right now most of the data is spread across some GitHub issues. I may do at least a blog post on it at some point.

  • @MattStevens9824
    @MattStevens9824 6 months ago +1

    This is soooo cool! I can definitely use this for my MS Excel worksheets!

  • @fxrisxmxli
    @fxrisxmxli 6 months ago +2

    I wish we had something like Micro Centre where I'm from. Tech heaven

  • @alexanderulyev4651
    @alexanderulyev4651 6 months ago

    Great stuff, thank you, Jeff!

  • @bryanteger
    @bryanteger 6 months ago +20

    Really cool. BTW Jeff, I found a much easier way to connect LTE modems, via ModemManager and NetworkManager. No need to install the QMI libraries; they're already in Debian 12.

  • @Flargenyargen
    @Flargenyargen 6 months ago +16

    I admire it so much that you are able to work around such unusual circumstances. I can't even get a Linux graphics driver fully working in an ideal setup.

    • @robkam643400
      @robkam643400 6 months ago +3

      Just buy hardware for Linux, instead of the other way around.
      Buy all AMD. It'll all work out of the box if it's over a year or so old.

    • @leonbishop7404
      @leonbishop7404 6 months ago

      @@robkam643400 I understand why you would want to buy an AMD GPU for Linux, but what's the point of swapping an Intel CPU for an AMD one? (Unless you mean the Intel ME, but that works the same with Windows.)

  • @Karthig1987
    @Karthig1987 6 months ago +1

    Good video. Easy-to-understand information.

  • @AwareOCE
    @AwareOCE 6 months ago +2

    Awesome video! ARM is a fascinating architecture; I can't wait to see where it goes in the near future!

  • @pgriggs2112
    @pgriggs2112 6 months ago +2

    I wish I had a local Micro Center…

  • @garciajero
    @garciajero 6 months ago +1

    HELL of a video, Jeff!

  • @NicoDsSBCs
    @NicoDsSBCs 6 months ago +7

    Nice. They really make amazing stuff. Too bad I can't afford it. I would love an Ampere workstation so much. But I'm happy with my RK3588 and my PC when I need it.

  • @aliyuabba4575
    @aliyuabba4575 6 months ago +5

    Keep them videos coming please. This will greatly help Windows on ARM development going forward, before the X Elite drops.

    • @JoeSpeed
      @JoeSpeed 6 months ago

      … and after; Ampere multi-core performance is in another league

    • @lucasrem
      @lucasrem 6 months ago

      @aliyuabba4575
      Xcode, Apple. Do it better????

    • @Teluric2
      @Teluric2 1 month ago

      @@JoeSpeed Ampere is crap for video and CFD.

  • @spinthma
    @spinthma 6 months ago +1

    Really amazing; I did not know there were already Ampere CPU workstations in the field!

  • @LollosoSiTV
    @LollosoSiTV 6 months ago +8

    Hey Jeff, running the Bedrock edition, especially the mobile version as you did, is far too easy a challenge for your rig.
    I suggest running and comparing the latest Java version and a specific modded version: Fabulously Optimized.
    To get any architecture incompatibilities out of the way, consider using a launcher that comes as a JAR file, such as the Technic launcher.
    Make sure to use the latest JRE (20-21) and set the proper JVM flags.
    Additional bonuses: shaders, a resource pack with parallax mapping + Physics Mod Pro (then grab a fire extinguisher).
    Looking forward to hearing from you!

  • @STEELFOX2000
    @STEELFOX2000 6 months ago +2

    What I learned here... it's amazing, but it doesn't all work yet!!!! Great job BTW! I loved this video!!

  • @Stealthmachines
    @Stealthmachines 3 months ago

    Very informative, thanks!

  • @lavavex
    @lavavex 6 months ago +1

    Just went to Micro Center yesterday to pick up the EVA ASUS parts. Love seeing my hometown Micro Center here. STL rep!

  • @pendragonscode
    @pendragonscode 6 months ago +1

    Awesome content as always!!!!

  • @Chris_Cable
    @Chris_Cable 6 months ago +52

    Could you try spinning up hundreds or (thousands?!?) of Docker containers with Kubernetes? With all those CPU cores it's gotta be really fast to ramp up the instances.

    • @Megabean
      @Megabean 6 months ago +6

      That's part of what it's for. However, like Jeff was saying, you do run into memory bandwidth limitations, meaning you can't expect a linear performance curve based on the number of cores you have. I expect that a lot of customers running off-the-shelf applications will probably benefit more from the lower-core SKU, but if you design your application around the server, the 128-core will probably be worth it.

    • @thewiirocks
      @thewiirocks 6 months ago +3

      @@Megabean I used to have a workload that was shared-nothing, buffered data by thread, was computationally heavy, and had fairly small unit sizes of data (commonly

    • @Megabean
      @Megabean 6 months ago +1

      @@thewiirocks That's cool; sadly I don't have enough background to fully understand. I do 3D rendering, though. I use a Java application called Chunky; it's a voxel-type renderer (might be using the wrong term) that does photorealistic rendering. I've been able to saturate my server with it, with 64 cores and 128 threads. Idk how much memory plays into it, outside of it using every bit of memory you allocate to it.

    • @thewiirocks
      @thewiirocks 6 months ago +1

      @@Megabean The biggest thing you need to consider for memory is how long you're keeping data in the L1 and L2 caches, and whether or not you're unnecessarily evicting data and then asking for it back.
      A very common pattern in modern software is to perform one operation at a time (e.g. an addition of two values) across a large collection of data, looping over the data separately for each operation. This is _terrible_ for the cache, as the CPU is forced to evict each record to make room for the next record in the collection, thereby reducing your throughput to the memory bandwidth and making your caches useless.
      This can be hard to detect, as test data sets tend to be small enough to fit within L3 and therefore exceed memory bandwidth. It's only once the data sizes are scaled that the true limits of the memory bandwidth are hit. Worse yet, the CPU will look busy to the operating system even though it's spending most of its time doing nothing.
      What you really want to do is to bring in a record of data, perform all operations you possibly can on it, then be done with it for that computational cycle. That maximizes the amount of time a record can be held in the CPU caches. If done correctly you may be able to operate entirely out of the L1 cache, which can easily provide an order of magnitude performance improvement.
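
The two access patterns described in that last reply can be sketched structurally. Python itself won't expose the cache effect (it only shows up on large data in a compiled language), but the loop shapes are the point: the functions and the three-operation pipeline here are hypothetical examples, not code from the video.

```python
# Structural sketch of the two patterns described above: a hypothetical
# pipeline of three arithmetic operations over a large collection.

def separate_passes(records):
    """One loop per operation: each pass walks the whole collection,
    so every record is evicted from cache before it is needed again."""
    records = [r + 1 for r in records]   # pass 1: add
    records = [r * 2 for r in records]   # pass 2: multiply
    records = [r - 3 for r in records]   # pass 3: subtract
    return records

def fused_pass(records):
    """All operations per record in one loop: each record is processed
    while it is still hot in L1/L2, then never revisited."""
    return [((r + 1) * 2) - 3 for r in records]

# Identical results; only the memory-access pattern differs.
assert separate_passes([1, 2, 3]) == fused_pass([1, 2, 3]) == [1, 3, 5]
```

In a cache-sensitive language, the fused form is the one that can stay in L1 per record; optimizing compilers sometimes perform this "loop fusion" automatically, but hot loops in interpreted or dynamically dispatched code usually do not get it for free.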

  • @DukeBoy82
    @DukeBoy82 6 months ago +1

    I would love to see a video/more information on the open-source LLM you used in this build. That looks super interesting.

  • @user-pu7mv2tu3m
    @user-pu7mv2tu3m 6 months ago +1

    First-time watcher, awesome video so far! Subbed, of course. There was this guy who printed custom ducting for his fans directly over the CPU, etc. Got good results. Oh, and a guy who cooled with dry ice and got 0 °C. For someone who doesn't know about computers: does that mean you can run it infinitely fast lol?

  • @olavaaf2218
    @olavaaf2218 6 months ago +4

    Nice video!
    Almost getting one myself! Is it the 2.8 GHz version of the CPU that Ampere will ship?
    Regarding the Mac, let's not forget the M2 Max and M3 Max have tremendous memory bandwidth, 400 GB/s - quite a bit more than a DDR4 system, I believe. That may make them faster on memory-bandwidth-limited problems, such as several types of simulations with a low flops-per-byte ratio.
    AmpereOne has DDR5 memory support. However, I have not seen it as easily available as this CPU is.
    With only 3 out of 4 memory channels connected, maybe the 96-core version is a "better fit", as the bandwidth per core will be quite a bit better - for anything bandwidth-sensitive, that is.

  • @jbucata
    @jbucata 6 months ago +1

    For at least one of those games, the text console had an error message about being "out of thread IDs". Presumably it's trying to spin up one thread per core or per SMT thread. If you can artificially limit the number of cores that the OS sees, or that it shows to userspace programs, you might have a shot at getting these to work...
    Does ARM have SMT? Turning that off would be interesting too.

  • @ianperkins8812
    @ianperkins8812 6 months ago +1

    Dang. Getting a Type 1 hypervisor on that thing would be SWEEEEET

  • @idtyu
    @idtyu 6 months ago +2

    I would try it on Fedora, which has a vanilla and almost bleeding-edge Linux kernel; plus they have proper NVIDIA support with Wayland now, and their special RAM config (which requires no swap now). Things might run better. And I always use the Flatpak version of Steam; it runs quite well.

  • @kevinm3751
    @kevinm3751 6 months ago

    O yaaaa! Micro Center... inconveniently located for 90% of ALL OF US!

  • @user-um9sl1kj6u
    @user-um9sl1kj6u 6 months ago +1

    I tried Box86 and Box64 a long time ago :-/
    It's nice to see someone else having better luck

  • @jrshaul
    @jrshaul 6 months ago +9

    It sounds like the real bottleneck here is DDR5 support - which the upcoming Ampere revision has, and which is even faster.
    This is a surprisingly effective workstation for a development board, and further software support should improve it even further. I could see Blackmagic integrating one with a pile of their PCIe cards to build a behemoth video-switching workstation capable of real-time effects - and driver support is a lot easier when you make the cards!

    • @lucasrem
      @lucasrem 6 months ago

      @jrshaul
      What DDR did you use, Ampere?
      ARM doesn't need more than DDR5-6000!

  • @PrinceWesterburg
    @PrinceWesterburg 6 months ago +1

    Thermal paste: forget what LTT says; it's a physical junction that transfers heat, and the larger the contact area, the more heat can move across it.
    So you are completely right to spread the thermal paste out. Physics!

  • @JCtheMusicMan_
    @JCtheMusicMan_ 6 months ago +3

    The machine's specs give me the same feeling as when I saw and heard a monster truck performing in person for the first time! 🔥🤯

  • @PurpleKnightmare
    @PurpleKnightmare 4 months ago

    Yeah, I wish there was a Micro Center near Seattle.

  • @totem168
    @totem168 6 months ago

    Damn, one of my dream servers. Thank you for reviewing it, Jeff!

  • @TechnoTim
    @TechnoTim 6 months ago +3

    I'm ready for (another) ARM desktop!

  • @robbin4022
    @robbin4022 6 months ago +1

    Upgraded my laptop's monitor to 4K, and with 100% scaling I can read the text on your screen at 0:55.
    With any other scaling the text becomes more blurry, and if I right-click on the video and click "stats for nerds", the resolution of the viewport changes with the scaling.
    Also, on my Win 10 laptop I can't just hover over the speaker icon in the taskbar and scroll to change the volume, which Win 11 does.
    Sorry for the unrelated comment, but hey, good to see you are doing well and are in good health!

  • @dzltron
    @dzltron 6 months ago +2

    I really miss living near a Micro Center. It really is the best PC store I've ever been to. Please come to the PNW!

  • @kudu9
    @kudu9 6 months ago +4

    I thought this was gonna cost a kidney and a heart, but the price is actually really good

  • @boolightningstudios
    @boolightningstudios 6 months ago

    Wish we had a Micro Center

  • @110gotrek
    @110gotrek 6 months ago +1

    Please, more Ampere content

  • @sethbessinger2025
    @sethbessinger2025 6 months ago +6

    That's really cool! Imagine if we could get a RISC-V CPU to game on Linux!

  • @ProjectPhysX
    @ProjectPhysX 6 months ago +1

    2:49 RAM not being able to keep up is not only the case for server CPUs. Even for these super-fast data-center GPUs, 2 TB/s of VRAM bandwidth cannot keep up, because compute Tflops is still so much larger. They could cut the GPU die size in half and the software would still perform the same. Nearly all compute software is bandwidth-bound nowadays.

  • @FranciscoMonteiro25
    @FranciscoMonteiro25 6 months ago

    Sounds good; it should be able to run open-source 7B & 13B LLMs locally. I need to check if it's available in Austria.

  • @SilentCtrl_
    @SilentCtrl_ 6 months ago +1

    I'm running a VM on Hetzner that uses Ampere, and it performs great for the price.

  • @theyoutubes4249
    @theyoutubes4249 6 months ago +1

    Would be great to know if you eventually manage to get Llama to use the GPU on the ARM system.

  • @kxuydhj
    @kxuydhj 6 months ago +2

    9:30 This makes me disproportionately happy as a Linux fanboy. Finally the tables have turned.

  • @PodcastUbuntuPortugal
    @PodcastUbuntuPortugal 6 months ago +2

    We approve of your usage of SuperTuxKart!

  • @Rostol
    @Rostol 6 months ago +1

    They are not out yet, but those large new Threadripper Pros at 5 GHz look SWEET too (though for sure over $5K per CPU)

  • @Jeditilt
    @Jeditilt 6 months ago

    Awesome video. Thanks!

  • @eDoc2020
    @eDoc2020 6 months ago +2

    Bandwidth not keeping up with compute power has long been an issue. One amusing statistic is that standard floppy disks are faster than a typical NVMe drive (relative to capacity). You can read a 1440 KB floppy in about 45 seconds, but a Samsung 990 PRO 2TB will take over four minutes. Even the IOPS per megabyte is a bit higher on the floppy: with a slow step rate of 8 ms you'd have a worst-case access time of 840 ms, or 0.82 IOPS/MB. The 990 is 1.4 million IOPS best-case, which comes out to 0.7 IOPS/MB.
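
That arithmetic is easy to re-run. A quick Python check, where the drive figures are the rated/quoted specs from the comment (the 990 PRO's ~7,450 MB/s sequential read is an assumption from its spec sheet, not a measurement):

```python
# Recomputing the floppy-vs-NVMe comparison above. All drive figures
# are rated/quoted specs, not measurements.

floppy_mb = 1.44              # 1440 KB floppy capacity
floppy_read_s = 45.0          # time to read the whole disk
floppy_worst_access_s = 0.84  # worst case with an 8 ms step rate

nvme_mb = 2_000_000.0         # Samsung 990 PRO 2TB capacity
nvme_seq_mb_s = 7_450.0       # rated sequential read speed (assumed spec)
nvme_iops = 1_400_000.0       # rated random-read IOPS, best case

# Full-capacity read time: the floppy "wins" relative to its size.
nvme_full_read_s = nvme_mb / nvme_seq_mb_s             # ~268 s, over 4 minutes

# IOPS per megabyte of capacity.
floppy_iops_per_mb = (1 / floppy_worst_access_s) / floppy_mb  # ~0.83
nvme_iops_per_mb = nvme_iops / nvme_mb                        # 0.7

print(round(nvme_full_read_s), round(floppy_iops_per_mb, 2), nvme_iops_per_mb)
```

The point of the comparison survives the recalculation: per megabyte of capacity, the floppy's full read is hundreds of times faster, and even its worst-case seek yields slightly more IOPS/MB than the NVMe drive's best case.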

  • @garyhuntress6871
    @garyhuntress6871 6 months ago +4

    I'm really interested in the LLM and machine learning aspect of this. I'm about to upgrade my old dual 24-core Xeon (w/ 512GB of ECC) to a modern high-core-count setup plus a high-end GPU. This is absolutely on my radar now. Do you have specific motherboard recommendations?

    • @JeffGeerling
      @JeffGeerling  6 months ago +3

      If you're serious about the LLM aspects, the best options would be some of the server builds from Gigabyte, Supermicro, Asus, or one of those vendors. ServeTheHome has some interesting reviews of GPU-heavy Ampere machines used for that purpose.

  • @kovacspis
    @kovacspis 6 months ago

    Did you try manually setting the number of threads inside Cinebench? File menu, settings, put the tick in, set the desired number of threads.

  • @blisphul8084
    @blisphul8084 6 months ago +2

    One large advantage of the Apple RAM is that it's unified, meaning you essentially have 192 GB of VRAM, which is useful for machine learning tasks.

    • @darinrosse1621
      @darinrosse1621 6 months ago

      Exacto

    • @Teluric2
      @Teluric2 1 month ago

      If 192 GB of RAM goes to the GPU, will the CPU run on air?

  • @DarrylAdams
    @DarrylAdams 6 months ago +9

    Could you run virtual machines on this hardware? Could QEMU/KVM emulate a Raspberry Pi, macOS, or even an x86 OS? Imagine running a virtual cluster of Pis! And while Quickemu can run macOS, running the latest Apple silicon version could be very useful.

  • @zachzimmermann5209
    @zachzimmermann5209 6 months ago +1

    Thanks for sharing this with us, Jeff! I wasn't really aware just how compatible things were with ARM on Linux. I have to admit, though, the LLM performance was actually rather poor. A used RTX 3090 (maybe $750?) could run that Llama 2 13B model at 10x the inference speed. I'll be very interested once the GPU support with ARM is worked out, as that seems like the main issue.

    • @JeffGeerling
      @JeffGeerling  6 months ago

      Yeah; honestly I think it just needs a little more twiddling and you could get GPUs to do the inference a bit faster.

  • @spurdo6747
    @spurdo6747 6 months ago +1

    Just curious, did you try x265 CPU encoding? It gives nice quality for the bitrate, and you have the cores.

  • @augustinolarian
    @augustinolarian 6 months ago +2

    Hello. How about using it as a web server and for virtualization (ESXi and Windows Server with Hyper-V)?
    Can you do some tests for these? And maybe compare with some Xeon processors? How fast is MySQL on those CPUs?
    I really look at these ARM CPUs and I see they might change the server world, and I'm really thinking of getting an ARM server.

  • @geekaholic88
    @geekaholic88 6 months ago

    This video is so fscking awesome!

  • @montecorbit8280
    @montecorbit8280 6 months ago +2

    At 2:43
    Ubuntu and Windows for ARM....
    Did you try any other Linux distro? Just curious about that....
    I came down here to suggest ChimeraOS because it runs Steam very well, but then I remembered it may not have an ARM flavor... if it does, that might be a good way to go!! Manjaro apparently has the ability to act like SteamOS, since both of them are based on Arch Linux....
    Hope you have an excellent day!!

  • @aloysiushettiarachchi4523
    @aloysiushettiarachchi4523 6 months ago +1

    Hello, how does it compare with a CISC machine in matrix handling? This is most important in scientific work. The M1, M2, etc. are for simple arithmetic in ray tracing, I believe.

  • @StarcoreLabs
    @StarcoreLabs 6 months ago

    Micro Center is the best! Great video.

  • @utfigyii5987
    @utfigyii5987 6 months ago

    The time of the pc2 is coming!!

  • @PremierPrep
    @PremierPrep 6 months ago +1

    Linux is killing it on ARM! Great video!!

  • @ricardocontente
    @ricardocontente 6 months ago +1

    I would love to see Proxmox with a Hackintosh VM on it.

  • @camjohnson2004
    @camjohnson2004 Před 6 měsíci +2

    Just a quick question, Jeff: does the board have the ability to disable the ASPEED GPU? Just asking because you said that the BIOS goes through the ASPEED iGPU. With server boards that I own with an ASPEED BMC/GPU, if you disable the GPU portion, then the BIOS can be shown/accessed through a discrete GPU. Just a thought, as I am unfamiliar with the Ampere boards and how they work compared to x86.

    • @JeffGeerling
      @JeffGeerling  Před 6 měsíci +1

      Right now it seems like no. Not sure if that will change.

    • @camjohnson2004
      @camjohnson2004 Před 6 měsíci +1

      Roger. I'm hoping to get a similar setup to try Ampere.

  • @Space_Reptile
    @Space_Reptile Před 6 měsíci +8

    To put your Cinebench 24 score into x86 context: an Intel i9-13900KS at 5.6 GHz scores 2379 multi and 142 single, while an AMD 7950X3D at 4.5 GHz scores 1829 multi and 111 single.
    Single-core is obviously lacking on that 128-core, but the multi-core for sure ain't.

    • @JeffGeerling
      @JeffGeerling  Před 6 měsíci +4

      It's one of those "well one ain't good enough, let's just throw ALL THE CORES in there" problems :)
      I really want to see the single core specs on AmpereOne. Or see Apple create a 128 core monster M2 Ultra Ultra Supreme :D

    • @jrshaul
      @jrshaul Před 6 měsíci +1

      @@JeffGeerling The upcoming revision supports DDR5. Assuming twice the memory bandwidth and adequate driver support, perhaps a 4K+ Cinebench score is in the cards?
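(The multi-to-single-core ratios in the scores quoted above are easy to check; a quick sketch using only the figures from this thread:)

```python
# Multi-core score divided by single-core score, using the Cinebench 24
# numbers quoted in the comment above.
chips = {
    "Intel i9-13900KS": (2379, 142),
    "AMD Ryzen 9 7950X3D": (1829, 111),
}

for name, (multi, single) in chips.items():
    print(f"{name}: {multi / single:.1f}x multi/single ratio")
```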

  • @michaelsopunov
    @michaelsopunov Před 6 měsíci +1

    Hello Jeff, why not video production? Is DaVinci Resolve not yet compatible with Ampere?

  • @davidspagnolo4870
    @davidspagnolo4870 Před 6 měsíci +1

    There are a lot of low-end ARM options like the various *Pi boards, and it seems like there are more and more high-end ARM options with Ampere and the like, but I'm really waiting for some mid-range options.

    • @cambrown5777
      @cambrown5777 Před 6 měsíci

      NVIDIA has you covered; they announced PC ARM chips set for a 2025 release.

    • @TheBacktimer
      @TheBacktimer Před 6 měsíci +2

      Apple? 😂

    • @Dave102693
      @Dave102693 Před 6 měsíci

      @@TheBacktimer for PC users

  • @mori7423
    @mori7423 Před 3 měsíci

    I'm really waiting for ARM to finally come to the desktop scene, especially for gaming and some light office work or programming. I switched from an x86 laptop to a base model M1 MBA and I'm so impressed with the power efficiency and battery life; it got me through a huge machine learning project. Too bad Apple hates their users, so we'll have to wait for someone with more sense to come to the desktop market with ARM computers that can compete with Apple Silicon.

  • @hannescampidell
    @hannescampidell Před 6 měsíci +4

    Minecraft Java Edition for Linux (through an unofficial launcher) should run well on this beast (on the Nintendo Switch with Linux installed, it is playable).

  • @JSON_bourne
    @JSON_bourne Před 6 měsíci +1

    God I wish there was a microcenter near me

  • @user78405
    @user78405 Před 5 měsíci

    I'm pretty confident NVIDIA's ARM CPU is going to have a built-in x86 emulation hardware layer to run a customized vkd3d-proton.

  • @stumblinguponbliss
    @stumblinguponbliss Před 6 měsíci

    Hi Jeff, awesome video, but I wasn't able to tell whether this build would be better than a Mac M2 for video editing. That was the main reason I was weighing the Mac versus the PC, which is what I mostly use.
    I am also evaluating a build for Docker; I was thinking of a VMware solution.

    • @JeffGeerling
      @JeffGeerling  Před 6 měsíci +1

      For video, I'd still stick with Mac (or Windows in some circumstances). There are no editors on Linux that can really match the workflow for serious editing.
      Though you can do a lot of the basics with Kdenlive and other OSS editors.

    • @secondskins-nl
      @secondskins-nl Před 6 měsíci

      @@JeffGeerling To each their own, but you mention how much is accelerated using the GPU these days, like NVIDIA NVENC; that's also true for a lot of video editing, effects, and such. A bit weird to stick with Mac for video, other than being used to its pace. If you get paid by the hour it's perfectly OK, though.

    • @JeffGeerling
      @JeffGeerling  Před 6 měsíci

      @@secondskins-nl Apple still has top-class video workflow support, from Adobe, Blackmagic, and Apple (along with practically all the production/cinema vendors), and being able to edit on a Mac running dozens of 4K streams with processing on top, in full preview res, with no fan noise is a blissful thing.
      I have a PC running Windows 11 with a 4090 and Ryzen 9 7900X. It does the same thing and can chew through tons of 4K or a few 8K with real-time processing at full preview res, it just burns 6x more watts and sounds like a hurricane doing it ;)
      To each their own!

  • @denvera1g1
    @denvera1g1 Před 6 měsíci +4

    12:19 One of the problems you've likely run into with Valve and Halo games is anti-cheat; both VAC and Easy Anti-Cheat DO NOT like emulation. However, WINE seems to have been made compatible lately, I guess to support the Steam Deck, which doesn't use emulation, just the Proton translation layer; its predecessor used to get you banned from CS:GO, if I recall.

  • @tito_me_doe676
    @tito_me_doe676 Před 4 měsíci

    I just saw this, but Minecraft Java runs on ARM, with shader support: just use Prism Launcher, and you'll need to install a specific ARM JDK version for the version of Minecraft you intend to run. The entire process is identical to using Prism Launcher on any OS on any CPU, and there are many guides on how to do it.
    I highly recommend, when you create an instance, rather than choosing a vanilla game, going straight to the mods page and searching for and selecting "Fabulously Optimized" from the mod search menu. Then you'll want to install Iris, and then go to the resource packs tab and search for shaders.

  • @vigamortezadventures7972
    @vigamortezadventures7972 Před 6 měsíci +1

    Would be awesome to see this for NVIDIA ARM chips, to continue to support gaming computer enthusiasts.

    • @lucasrem
      @lucasrem Před 6 měsíci

      vigamortezadventures7972
      You have skills? To port titles to ARM? To run Android on it?

  • @TT-it9gg
    @TT-it9gg Před 6 měsíci

    Thanks for the video.
    One question: the glmark score is 10260. Is that from the CPU or the 4070 Ti? The Jetson Nano can do 2000+.

  • @user-ix6ig6jm1f
    @user-ix6ig6jm1f Před 6 měsíci

    Did you install the CUDA libraries? IIRC they don't come with a normal GPU driver install; that might be why Blender could not use them.

  • @dawidmx
    @dawidmx Před 6 měsíci +3

    Looks like all the games that ran using Steam were made in Unity: Superhot, Horizon Chase, and Kerbal Space Program.

  • @insanesicsix6
    @insanesicsix6 Před 4 měsíci +1

    WOW. Incredible project. You could try it with token mining on that computer.

  • @davidrobertsson7640
    @davidrobertsson7640 Před 6 měsíci

    Thank you for this video. I have been looking at that dev kit for a while now, but hesitated to buy, mostly due to the lack of information and the "dead" forum threads.
    The RAM sticks you went with: what specs did you choose?
    Do you have any recommendations or "bewares" when it comes to RAM modules?
    Hopefully this system will rock with FreeBSD! Placed my order today =D

    • @JeffGeerling
      @JeffGeerling  Před 6 měsíci +1

      Almost any ECC DDR4-3200 SO-DIMMs will work okay. I chose Samsung, as I tested both Samsung and Transcend and found the Samsungs to be consistently faster.

    • @davidrobertsson7640
      @davidrobertsson7640 Před 4 měsíci

      @@JeffGeerling I can tell you that a Samsung 128 GB DDR4 LRDIMM (288-pin) doesn't seem to work =D
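(For anyone sizing modules: the theoretical peak bandwidth of DDR4-3200 is simple to compute. The channel count below is an assumption for a fully populated board; check the board manual.)

```python
# Theoretical peak bandwidth = transfer rate x bus width x channels.
transfers_per_s = 3200e6   # DDR4-3200: 3200 MT/s
bytes_per_transfer = 8     # 64-bit channel = 8 bytes per transfer
channels = 8               # assumed fully populated 8-channel board

bw_gb_s = transfers_per_s * bytes_per_transfer * channels / 1e9
print(f"{bw_gb_s:.1f} GB/s theoretical peak")
```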

  • @shanent5793
    @shanent5793 Před 6 měsíci +4

    1.3 TFLOPS is at least double what the RTX 4070 Ti can do. The CPU can access much more memory with lower latency, so there's no comparison.
    "Ti" is an abbreviation of "Titan", so it's pronounced like the first syllable. Titan never made sense anyway, because the Titans lost to the Olympians, so Ti was just a face-saving compromise. The company is still named after one of the seven deadly sins, which shows they can't let go of something that sounds cool.

    • @TheBackyardChemist
      @TheBackyardChemist Před 6 měsíci +1

      Are you sure you are not just citing the effect of the abysmal FP64 performance of the card? A 4070 Ti really ought to be able to do much better than 1.3 in single precision, a.k.a. FP32. The CPU would also be faster, but only by a factor of 2x; I would expect at least ~6 TFLOPS from the 4070 Ti in FP32.

    • @shanent5793
      @shanent5793 Před 6 měsíci +1

      @@TheBackyardChemist Compute TFLOPS is traditionally FP64. Ada has a 1:64 FP64:FP32 throughput ratio, so it's around 40 TFLOPS of FP32 on the GPU. You could emulate 64-bit math and get the ratio down to 1:4, but it's not IEEE compliant. CPUs should be 1:2, but they run into power limits and drop the clock speed if it's too much work. Of course these are all peak theoretical figures; branching code and sparse access won't allow the GPU to reach its maximum performance.
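(The ratio arithmetic in this thread, written out; the 40 TFLOPS FP32 figure and the 1:64 ratio are taken from the comment above, not independently verified.)

```python
# Peak FP64 throughput implied by a given FP32 peak and an FP64:FP32
# throughput ratio (figures as quoted in the thread above).
fp32_tflops = 40.0   # quoted approximate FP32 peak for the GPU
ratio = 1 / 64       # quoted Ada FP64:FP32 throughput ratio

fp64_tflops = fp32_tflops * ratio
print(f"FP64 peak: {fp64_tflops:.3f} TFLOPS")
```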

  • @nayrpc
    @nayrpc Před 5 měsíci +1

    Yay, you tested KSP! Best game ever.

  • @harryragland7840
    @harryragland7840 Před 6 měsíci +3

    When your Micro Center is also Jeff Geerling's Micro Center... hey, did you leave anything for me, Jeff?

  • @xaytana
    @xaytana Před 6 měsíci

    Hey Jeff, I've been curious about something for a while but haven't really seen A/B testing of it anywhere, and I wasn't sure if you had any industry contacts who would know, considering you're easily one of the biggest proponents of ARM desktops on the platform. Do ARM cores actually prefer the tight timings of DDR like x86 CPUs, or do they prefer raw speed and bandwidth like GPUs? I remember seeing some theoryposting forever ago stating that GDDR might be better for the ARM ecosystem, especially as core counts scale higher. Unless one of these companies has decent public research into it, I guess the best A/B testing we'll get is whenever consoles adopt ARM CPUs and their PC board counterparts (with DDR slots) exist, since even the x86 consoles choose GDDR modules as the shared memory. If ARM cores don't care about timings and latency and prefer speed and bandwidth, seeing GDDR tested on ARM cores could be extremely interesting, especially if it pushes for a spec change that brings modular GDDR.

  • @ErikS-
    @ErikS- Před 6 měsíci

    This reminded me of the AMD Opteron CPUs from around 2005.
    Those CPUs also reached similar clock speeds.
    You start to wonder if we really can only move forward in ways other than increasing clock speeds...

    • @Looser_23
      @Looser_23 Před 6 měsíci

      Well, combustion engines have also gotten way more powerful in the last few decades, but not by increasing RPM.