ASRock Rack Made An AMD EPYC Server Like No Other

  • Published 30. 05. 2024
  • ASRock Rack sent us a 1U AMD EPYC server. When we opened it, it was totally different from what we expected and from other servers on the market. In this video, we take a look at the ASRock Rack 1U2N2G-ROME/2T and see what makes this AMD EPYC 7002 "Rome" and EPYC 7003 "Milan" server with GPU support so different.
    STH Main Site Article: www.servethehome.com/asrock-r...
    STH Top 5 Weekly Newsletter: eepurl.com/dryM09
    ----------------------------------------------------------------------
    Become a STH YT Member and Support Us
    ----------------------------------------------------------------------
    Join STH YouTube membership to support the channel: / @servethehomevideo
    STH Merch on Spring: the-sth-merch-shop.myteesprin...
    ----------------------------------------------------------------------
    Where to Find STH
    ----------------------------------------------------------------------
    STH Forums: forums.servethehome.com
    Follow on Twitter: / servethehome
    ----------------------------------------------------------------------
    Where to Find The Unit We Purchased
    Note: we may earn a small commission if you purchase a product through these links.
    ----------------------------------------------------------------------
    - Crucial P5 Plus 2TB NVMe SSD: amzn.to/3xKSpMk
    - NVIDIA A40: amzn.to/3SuZmsP
    - NVIDIA A4500: amzn.to/3DLTp6K
    ----------------------------------------------------------------------
    Timestamps
    ----------------------------------------------------------------------
    00:00 Introduction
    01:12 ASRock Rack 1U2N2G-ROME/2T Hardware Overview
    04:59 ASRock Rack ROMED4ID-2T mITX Nodes
    12:50 IPMI
    13:21 4-channel EPYC Performance
    16:21 Power Consumption
    17:55 Key Lessons Learned
    20:45 Wrap-up
    ----------------------------------------------------------------------
    Other STH Content Mentioned in this Video
    ----------------------------------------------------------------------
    - AMD EPYC 4-channel memory optimization: • AMD EPYC 7002 Rome CPU...
    - ASRock Rack ROMED4ID-2T motherboard review: www.servethehome.com/asrock-r...
    - AMD EPYC 7003X Milan-X review: • Crazy! AMD's Milan-X D...
    - AMD EPYC 7003 Milan review: • AMD EPYC 7003 Milan Pe...
    - AMD EPYC 7002 Rome review:
    - NVIDIA A4500 review: www.servethehome.com/pny-nvid...
    - NVIDIA A40 review: www.servethehome.com/nvidia-a...
  • Science & Technology

Comments • 153

  • @JeffGeerling
    @JeffGeerling 1 year ago +113

    I always love the "hang from the ceiling" server demos on this channel.
    It makes me want to run a few servers in that orientation... I mean hot air rises, it would be great for cooling!

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +24

      Except it is the wrong direction for airflow hung like that :-)

    • @JeffGeerling
      @JeffGeerling 1 year ago +20

      Well *this one* at least

    • @tokiomitohsaka7770
      @tokiomitohsaka7770 1 year ago +17

      @@ServeTheHomeVideo But at least it will be easier to lift it when it is turned on with the THRUST from the fans.

    • @JeffGeerling
      @JeffGeerling 1 year ago +32

      @@tokiomitohsaka7770 Haha vendors would have to give weight in two metrics: "weight at rest" and "weight at 100% fan load"

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +13

      Immersion-cooled servers are usually hung vertically, but without fans :-/

  • @tommihommi1
    @tommihommi1 1 year ago +76

    ASRock Rack are total memers and I'm all for it. Nobody makes motherboards as crazy as theirs.

    • @DLain
      @DLain 1 year ago +10

      We know Intel: I was once talking to a rep in Taiwan, and the topic was how Intel tries to keep regular users from using their Xeon parts.
      What did ASRock do? Release the Fatal1ty E3V5 Performance Gaming/OC. They are mad lads. Sadly, they had to pull those motherboards from the shelves.
      They also released an X299 Mini-ITX (the only X299 mini-ITX), just for the lulz.
      I really, really like ASRock and I think they are underrated AF. My wife games on an X299 Taichi XE, I've had several ASRock boards, and I always like to see them selling more products.

    • @post-leftluddite
      @post-leftluddite 1 year ago +1

      I've liked ASRock since they were willing to do things differently with consumer mobos; glad they do the same with enterprise.

  • @jeremybarber2837
    @jeremybarber2837 1 year ago +11

    This server is super cool. I immediately thought of using this as dedicated racked development boxes for software engineers that need either a GPU or an FPGA. A quick search of the interwebs shows that there are PCIe risers that will fit a single M.2 in a 1U box. Sabrent has one, but that might put the M.2 on the wrong side… It’s a really niche and neat server, thanks for the review/overview!

  • @R1chiGGard
    @R1chiGGard 1 year ago +6

    Greetings from ASRock Rack Taiwan! Such a detailed and objective review, so glad you liked the uniqueness of our product!

  • @morosis82
    @morosis82 1 year ago +7

    Versus a dual socket, this makes total sense in dedicated hosting if you were to have two different people using those servers bare metal. As the provider you get some of the advantage of a shared resource, but from a customer's point of view they have a totally dedicated physical machine... sort of.
    Maintenance would be a pain there, but otherwise it makes a lot of sense.

  • @gsuberland
    @gsuberland 1 year ago +6

    I notice that each board has a lot more PCIe breakouts than the chassis supports, but a lot of dead space in the center where the power cabling runs. Would be cool to see someone 3D print a bracket for an M.2 NVMe breakout there, especially if folks are using these for ML stuff where you'd want fast storage for the training corpus.

    • @btudrus
      @btudrus 1 year ago

      I thought the same thing.
      Also, it may be possible to get those PCIe lanes (or maybe some SATA lanes) out of the chassis if only a single-slot-width passive GPU were used and the second slot were used for the connection out. Or even as a holder for a U.2 SSD...

  • @DrivingWithJake
    @DrivingWithJake 1 year ago +6

    There are a few problems with this type of setup.
    1. Network ports on the front of the unit make it waste 1U of space to run network cables into the front of the rack.
    2. Networks these days are 10G+ fiber.
    3. Lack of drive bays.
    That makes it quite limited in usage. Still kind of cool to see something cheap.

    • @aztracker1
      @aztracker1 1 year ago

      My first thought is GPU-optimized distributed workloads: render or AI farms. Very high GPU count per rack this way, and better optimization of RAM and CPU power per GPU.
      This work is meant to be easy to route to many nodes, and if one goes down, the rest keep rolling.

  • @I4get42
    @I4get42 1 year ago +9

    Hi Patrick! I can totally see this for VDI where the company doesn't want to send a workstation home with somebody. A challenge that stands out to me is that with only two NICs per system, and forced external storage, you'll have to decide between one NIC for storage and one for user access, or redundancy where you are competing for storage and user access on the same switch ports/NICs and hoping the LACP gods favor you.

    • @MazeFrame
      @MazeFrame 1 year ago

      Haven't looked at the NIC specs, but maybe you can blast trunks over them.

    • @annieshedden1245
      @annieshedden1245 1 year ago

      Any sane OS can trunk those links, and use the M.2 for cache.

  • @blackmennewstyle
    @blackmennewstyle 1 year ago +19

    This dude's energy is out of this world lol
    Have a great weekend and keep on sharing all this interesting hardware with us

  • @excitedbox5705
    @excitedbox5705 1 year ago +4

    This is perfect for dedicated/web hosting companies. Our servers are all 32 cores with 128 GB or 256 GB of RAM, and our storage is all networked, with local storage for the OS and applications, 1 Gbit outbound and a 10 Gbit local network. A customer is not going to notice if their email loads a few ms slower, and every other feature that is offered in the dedicated server market is available. You can save several thousand per rack and 50% on your DC rent with these. I can see hosting companies like OVH and GoDaddy ordering these by the boatload.
    Anything load-balanced could benefit as well, since the storage may not be in the server you land on and you can add more networking instead of a GPU. You could have a few racks of these with several racks of storage.
    The more I think about it, the more I like it. I see use cases across the entire industry where you could save a ton of money using this.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago

      Great feedback

    • @noname-gp6hk
      @noname-gp6hk 1 year ago

      What happens when the customer who is renting the node next to yours needs service and the whole chassis has to be powered off and removed? Seems like you're twice as likely to be impacted by hardware issues with this kind of design. The whole chassis has to come out to service one node.

    • @aztracker1
      @aztracker1 1 year ago

      @@noname-gp6hk True... Of course there's always risk when going for lower cost. If the applications are stored on a SAN, then you assign the IP + storage to another system and see a moment of downtime when the 1U gets pulled.
      Slightly easier if there's a thin hypervisor.

  • @sirjoot
    @sirjoot 1 year ago +2

    I think this would be super fun as a his & hers gaming system to go in a home lab. Super compact and efficient, and it takes away the bulkiness of having a dedicated chassis each. The only limitation would be the janky way you'd have to connect it up to peripherals.

  • @user-uw7st6vn1z
    @user-uw7st6vn1z 1 year ago +3

    Always love to see a new Patrick upload

  • @trockid
    @trockid 1 year ago +1

    These would make great compute cluster nodes.

  • @christopherjackson2157
    @christopherjackson2157 1 year ago +5

    The CPU is basically just I/O for the GPU, like a giant PCH. You're not gonna throw a bunch of storage and memory in there. You don't need much more memory bandwidth than PCIe bandwidth to the GPU (rough numbers are sketched at the end of this thread). The P SKUs are exactly what I imagine in this thing.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +1

      Yea. A really cool concept, but I have been told people do 64C in these too.

    • @christopherjackson2157
      @christopherjackson2157 1 year ago +1

      @@ServeTheHomeVideo Interesting. I'd think 64 EPYC cores would be pretty memory constrained. I wonder what they're doing with them? I guess VMs?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +1

      No idea. Just something I heard: there is a customer deploying them like that.
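
    A back-of-the-envelope sketch of the bandwidth argument in this thread (a rough illustration only; the DDR4-3200 and PCIe 4.0 figures below are standard theoretical peaks, not measurements from this server):

    ```python
    # Back-of-the-envelope peak-bandwidth comparison for the thread above.
    # Numbers are standard theoretical maxima, not measurements of this server.

    # DDR4-3200: 3200 MT/s x 8 bytes per transfer, per channel (~25.6 GB/s).
    per_channel_gb_s = 3200e6 * 8 / 1e9

    four_channel_gb_s = 4 * per_channel_gb_s   # this board's 4 DIMM slots/channels
    eight_channel_gb_s = 8 * per_channel_gb_s  # a fully populated EPYC 7002/7003 socket

    # PCIe 4.0: ~1.97 GB/s usable per lane per direction (16 GT/s, 128b/130b encoding).
    pcie4_x16_gb_s = 16 * 1.97

    print(f"4-channel DDR4-3200 : {four_channel_gb_s:6.1f} GB/s")
    print(f"8-channel DDR4-3200 : {eight_channel_gb_s:6.1f} GB/s")
    print(f"PCIe 4.0 x16 to GPU : {pcie4_x16_gb_s:6.1f} GB/s per direction")

    # Even with only half the memory channels populated, local memory bandwidth
    # is still roughly 3x the PCIe 4.0 x16 feed to the GPU.
    ```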

  • @juanignacioaschura9437
    @juanignacioaschura9437 1 year ago +5

    I don't work in a physical data center environment (only remote monitoring), but my gosh, I'd be lying if I said I wouldn't want something like this to tinker with. Two 8-core P-SKU Milan EPYCs, 64GB of RAM per node, two 12GB A2000s, and two 2TB Samsung SSDs, and I'm done.

  • @realandrewhatfield
    @realandrewhatfield 1 year ago +1

    Patrick, today's video made me nostalgic for the old days back when you held the server during the whole episode! ;)

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago

      Ha! Using steel cables on F34 trussing is safer

    • @realandrewhatfield
      @realandrewhatfield 1 year ago +2

      @@ServeTheHomeVideo I guess we have Linus if we want to watch people drop stuff... thank you for the great content!

  • @autarchprinceps
    @autarchprinceps 1 year ago +6

    Well, the networking would be on the GPU with a converged DPU + GPU bundle like the NVIDIA H100 CNX. Technically that would also be able to replace the management controller and provide the hypervisor. If you really wanted to commit to this concept, you could get rid of networking, storage, and management up front completely. Eventually, we are going to see a design like this, where it is basically just Grace Hopper with memory on one small board and networking going out of it. You could probably get away with half the depth of this then, or a 4-node system.
    What bugs me a bit about this ASRock one is maintainability. It really is just two nodes put in a box. Usually, these systems have separate sleds or the like to maintain or replace the nodes individually.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +3

      Great point

    • @xud6405
      @xud6405 1 year ago +1

      It is clearly for hyperscalers who want to provide instances with a dedicated GPU and CPU. You just replace the entire server on-site.

    • @autarchprinceps
      @autarchprinceps 1 year ago +1

      @@xud6405 Hyperscalers typically use much bigger machines and divide them with VMs for flexibility. They also make a lot of their server designs themselves. Heck, AWS uses DC power only in the server and has its own DPU and even CPU and accelerator chips, though those two not exclusively, of course.
      As he said in the video, this is for as-cheap-as-possible root servers, as you find at more commodity hosters, where yes, you can still fix it on-site, but bringing down one customer's system to fix another's wouldn't be my choice. Clearly you need to weigh that against the cost of more metal and the engineering of separate sleds.

    • @aztracker1
      @aztracker1 1 year ago

      Fair enough. But as mentioned, cost optimized. And if it's a render or AI farm, taking down two nodes for maintenance vs. a 50% or more expensive system... it's not a bad deal.

  • @vk5ztv
    @vk5ztv 1 year ago

    I can see this as being ideal for video wall applications. With 2x 4-output GPUs we could replace 6x Dell 7920 2RU Precision Rack workstations with 3 of these and save significantly on power connections and rack space.

  • @birdpump
    @birdpump 1 year ago +2

    those are some floating power supplies

  • @--JYM-Rescuing-SS-Minnow

    Nice! Great demo!

  • @Jeppelelle
    @Jeppelelle 1 year ago +4

    There is also a Ryzen version of this server; will you take a look at that too? That one should be even more cost optimized, paving the way for 1U2N in a homelab/selfhosting for cheap :D

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +4

      We looked at those motherboards already but have not gotten the server yet

    • @aztracker1
      @aztracker1 1 year ago

      Yeah... Looking at the LTT house stuff, that was one of my initial thoughts as well.

  • @GameCode64
    @GameCode64 1 year ago +2

    This would be totally nice for creating a cloud gaming server, maybe even without NVMe, and just PXE/iSCSI boot and run over the NICs :D

  • @VTOLfreak
    @VTOLfreak 1 year ago +3

    I've been looking for something like this for a while. I need two nodes in a single 1U chassis to put in a cheap colocation, but I'd rather stay away from proprietary motherboard sizes. This can take standard ITX. Just not a fan of those motherboards, however, with half the memory channels missing.

  • @majstealth
    @majstealth 1 year ago +3

    I could see this as a VDI server for construction workstations.

  • @SB-qm5wg
    @SB-qm5wg 1 year ago +1

    The main reason I hate 1Us for home labs is the damn sound of those high RPM fans.

  • @matthewguerra5410
    @matthewguerra5410 1 year ago +2

    The networking is what holds this server back. It would be better if they had SFP ports.

    • @farmeunit
      @farmeunit 1 year ago

      10GbE is easy over short distances with copper. To me, this is for high-density compute; not much network throughput is needed.

    • @matthewguerra5410
      @matthewguerra5410 1 year ago

      @@farmeunit It's less about throughput and more about connectivity options. In order for the nodes to work with large compute sets, they often need to connect to a SAN, etc. As an example, I have 10G, 25G, 40G, and 100G in my data center, and none of it is RJ45 copper.

    • @farmeunit
      @farmeunit 1 year ago

      @@matthewguerra5410 Then don't buy it. It's not made for you. He gave a perfect example for web hosts: high-density VPS. There are already options out there for you.

  • @cameramaker
    @cameramaker 1 year ago

    I can even imagine a special comb-like 4x4 bifurcated adaptor packing four more M.2 drives per node, if there is a need :)

  • @annieshedden1245
    @annieshedden1245 1 year ago

    Perfect for smart HPC, which is normally stateless these days and can often live with 20 Gbps.
    The competition is probably multi-node 2U chassis, which can amortize PSUs even better and have the advantage of bigger, more efficient fans. But they ain't cheap...

  • @skaltura
    @skaltura 1 year ago +1

    ASRock Rack makes some cool stuff :) My favorite server parts vendor; got quite a few EPYC and Ryzen servers on their platforms. Downside, though: no memory XMP profiles on the Ryzen mobos.

    • @skaltura
      @skaltura 1 year ago

      Give it to ASRock Rack to push things forward :) Really like their engineering mindset: min-max, bring everything possible out. Much better motherboards than Supermicro, for example.

  • @concinnus
    @concinnus 1 year ago +1

    I realize it's cost-conscious, but at least an option for an M.2 on the PCIe stick would make sense, and probably only add what, $20 (incl. cable)?

  • @Ro-Bucks
    @Ro-Bucks 1 year ago

    Would be a cool streaming setup.

  • @zactodd3144
    @zactodd3144 1 year ago +1

    Is there anything you guys know of that can link these nodes, like via those PCIe SlimLine connectors?
    Would be great for HA without having to pass extra data across Ethernet.

    • @noname-gp6hk
      @noname-gp6hk 1 year ago

      I don't think you can tie two motherboards together with a PCIe cable unless the PCIe ports are configured for non-transparent bridging. I can't remember if EPYC even supports PCIe NTB, but even if it did, it would have to be enabled at the BIOS level. That isn't a feature that would be designed into this; NTB is very rare and only really implemented in systems designed for HA workloads.

  • @cracklingice
    @cracklingice 1 year ago

    My first thought was: hmm, interesting, I wonder if you could throw in a couple of SAS cards... oh, it's only a single 16-lane slot. I suppose having redundant storage servers in a single physical chassis probably wouldn't be the right way to go about it anyway.

  • @atomycal
    @atomycal 1 year ago

    Hi Patrick, could you please make a video showcasing the real-world performance gains of going SAS vs. HDD, and SSD vs. SAS, in a storage server/NAS? The literature is everywhere but there are no real-world examples/tests/explanations. Thanks!

  • @marcfruchtman9473
    @marcfruchtman9473 1 year ago +2

    Had some interesting cool-factor items, but the lack of expansion sort of ruined it for me.

  • @abdulmuhaimin5274
    @abdulmuhaimin5274 1 year ago +1

    It looks like a transcoding server, like a transcoder for Adobe Media Encoder.

  • @pkt1213
    @pkt1213 1 year ago +1

    I have an ASRock Rack X570D4U-2L2T. Thankfully my phone has memorized all the letters and numbers at this point.

  • @Good_Boy_Red
    @Good_Boy_Red 1 year ago +2

    So what would be the use cases for this server? Just curious.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago

      Typically GPU accelerated services like transcoding, AI, and some kinds of HPC (but a very narrow HPC workload band)

  • @QuentinStephens
    @QuentinStephens 1 year ago +1

    I'm after an EPYC3251D4I-2T and have been told that ASRock is having major trouble sourcing the 10Gb NICs. Has that been resolved?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +1

      That is an industry-wide problem :-/

    • @pavelsorejs4923
      @pavelsorejs4923 1 year ago

      The X550 is the problem. I needed 50 pcs of the X570D4I-2T and was told by my distributor that they are not available, and ASRock can't even tell when they will be. That is the reason the X570D4I-NL is a thing. Considering that they have now announced the X570D4U-2L2T/BCM with Broadcom instead of Intel, this is not going to get better anytime soon. So now we are considering the ROMED8HM or even the ROME2D16HM3 (I can fit those into our cases), but it is quite $$$ and I still don't have availability info on OCP NICs.

  • @skyline8121
    @skyline8121 1 year ago +1

    Thank you, Patrick, excellent content 👍 Could this be a cloud gaming server?

  • @MasonPollock
    @MasonPollock 1 year ago +3

    I feel like this server could be the best render server for low-budget ballers.

    • @aztracker1
      @aztracker1 1 year ago

      Even if you have more budget... You'd get way more bang for the buck if you're filling a few racks with these.

  • @estudiom142
    @estudiom142 1 year ago

    Love it!

  • @madmadmal
    @madmadmal 1 year ago +1

    I wish there was something mentioned about the price.

  • @aztracker1
    @aztracker1 1 year ago +1

    I can totally see this for render farms and distributed AI. You're probably getting a much better resource distribution per rack. Typical GPU servers are 4 GPUs and 1-2 CPUs in 4U... this is 8 CPUs and 8 GPUs in 4U... and again, better optimized and lower-cost CPU/RAM to keep GPU workloads going. A bit more labor heavy, but if you can just swap out 1U on a rack and work on it out of band, it's not so bad.

    • @amp888
      @amp888 1 year ago

      Also worth mentioning that Supermicro has made a few dual-socket 1U GPU servers which support three/four dual-slot GPUs for a while now, from the 1028GR-TR/1028GR-TRT (three dual-slot GPUs, dual E5-2600 v3/v4) to now the SYS-120GQ-TNRT (4 dual-slot GPUs, dual 3rd-gen Xeon Scalable, PCIe 4.0, support for Optane PMem).

  • @BunBun420
    @BunBun420 1 year ago +3

    First! :D
    I've never been able to comment this before.

  • @gg-gn3re
    @gg-gn3re 1 year ago +1

    Can you buy those PCBs for the PSU redundancy anywhere? Separately, I mean...

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +1

      I think there are power distribution boards you can buy, but I am not sure if ASRock Rack sells them

    • @gg-gn3re
      @gg-gn3re 1 year ago

      @@ServeTheHomeVideo I found some breakout boards on some crypto mining sites; none seem to have redundancy though =[ thx

  • @jfkastner
    @jfkastner 1 year ago +2

    Interesting server, BUT if the memory of CPU 1 is 'full' you cannot just draw some from CPU 2, since there is no connection like in dual-socket boards.

    • @aztracker1
      @aztracker1 1 year ago

      True. But a huge cost advantage if you're doing a lot of GPU-heavy distributed workloads that may not need more than 256GB of RAM to keep full.

  • @FrancescoCarucci
    @FrancescoCarucci 1 year ago +1

    Is it already being sold?

  • @a.j.haverkamp4023
    @a.j.haverkamp4023 1 year ago +1

    A big nightmare to cable this server in your rack, with cables coming out of the front. Cables blocking airflow for other servers. It’s a weird one.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago

      I generally agree, but in this server, you can cable to either side or in the middle and be fine. Many places use racks where this is not a big deal.

  • @computersales
    @computersales 1 year ago +1

    I've kind of wondered why somebody hasn't made a solution for systems that have single NVMe slots. Since NVMe is just PCI Express, you would think you could split it out to multiple drives. Make something where you could stack them on top of each other on an assembly that converts multiple drives to a single NVMe slot.

    • @aztracker1
      @aztracker1 1 year ago

      They have those... 1-4 NVMe in a PCIe x8/x16 physical slot. Using one in my desktop, as it's easier to access than the onboard NVMe covered by the video card.

    • @computersales
      @computersales 1 year ago

      @@aztracker1 I meant one that works in an NVMe slot though.

    • @computersales
      @computersales 1 year ago

      Looks like something similar to what I am thinking about exists: the Amfeltec AngelShark carrier board. It would be nice to see one that stacks the drives vertically, though.

    • @aztracker1
      @aztracker1 1 year ago +1

      @@computersales Gotcha... Not sure. NVMe drive interfaces are pretty dumb, 4-lane PCIe; not sure if they'd run in x1 mode or not. I know I couldn't run an x4 PCIe 3.0 drive in my early M.2 motherboard with an x2 M.2 slot a few years ago, so I would expect compatibility issues.

  • @bridgetrobertson7134
    @bridgetrobertson7134 1 year ago

    Oh, each node is limited to 500W on 120V power. So most people would have to call an electrician out to prep for use at home. I don't even have 100-amp circuits in my house, never mind 240V.

  • @leyasep5919
    @leyasep5919 1 year ago

    Nice! Where/how do I get one?

  • @matt-ui5bz
    @matt-ui5bz 1 year ago

    That is cool

  • @samithaqi2379
    @samithaqi2379 1 year ago +1

    I can imagine using this server for a Proxmox cluster.

  • @MatthewGP
    @MatthewGP 1 year ago +1

    Great video, Patrick! Also, really appreciate the Q3 2022 Client M.2 NVMe SSD Buyers Guide on the site! Do used enterprise SATA SSDs count as client drives??? 🙂

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago

      We are hopefully going to have more segments in different Q4 guides. Will actually started that one six months ago, but we wanted to get the first version out. Stay tuned, Matthew. Will has like 10 M.2 drives benchmarked and we have done like 5+ 4-8 M.2 solutions in the last week as well!

  • @AvengeTheTECH
    @AvengeTheTECH 1 year ago

    Where can I buy it?

  • @stephenreaves3205
    @stephenreaves3205 1 year ago +1

    If they made this without the GPU section I'd buy it for my homelab lol

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +1

      Yea. I would love this with 1-2 more M.2 and dual half-length slots for NICs but in a shorter chassis

  • @kwinzman
    @kwinzman 1 year ago

    I was assuming all servers these days have at least 25 Gbit/s, maybe 100.

  • @PyromancerRift
    @PyromancerRift 1 year ago

    600W to dissipate in a 1U form factor. You'd better put on some good ear protection. KEK

  • @stevenmerlock9971
    @stevenmerlock9971 1 year ago +1

    Cost?

  • @DenUil
    @DenUil 1 year ago +1

    Too bad they didn't make a mirrored-Y version; then the front would be symmetrical and the PCIe lanes for the GPU would also be the same :-)

    • @noname-gp6hk
      @noname-gp6hk 1 year ago

      It doesn't make sense to have unique left and right motherboards. You'd split your manufacturing volume in half for each side, you'd need to double your RMA inventory, and as a customer you would need to stock twice as many spares. You would consume twice as much R&D for two separate motherboard projects and have twice as many bugs to fix during development. Development cost would be twice as high, and that would be baked into the final server price.

  • @MarekKnapek
    @MarekKnapek 1 year ago +2

    Not using the LTT screwdriver?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +1

      I was chatting with Jake the other day. I need to see if he can get me one when I see him next month.

  • @liamhotspur9182
    @liamhotspur9182 1 year ago

    Did I miss hearing it, or maybe someone can tell me how much this machine costs?

  • @foxfoxfoxfoxfoxfoxfoxfoxfoxfox

    Use it for a redundant firewall.

  • @eleventy-seven
    @eleventy-seven 1 year ago

    I want one.

  • @ogxboxdev
    @ogxboxdev 1 year ago +1

    Will it work with an RTX 3070 FE?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +1

      For consumer cards you need a blower and a dual-slot design that is a standard full-length card size. Probably you would get an aftermarket card for this.

  • @ibelieveinliberty5226

    Price?

  • @iszotope
    @iszotope 1 year ago +1

    Try 4x A100s and 2x EPYCs in a 2U...

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +1

      Yes. In April 2021 we did czcams.com/video/1Mva2Qd5LSQ/video.html

  • @flisboac
    @flisboac 1 year ago +1

    I find it interesting how you hang that server like a butcher would hang a cow's carcass.

  • @JimtheITguy
    @JimtheITguy 1 year ago +1

    A solution-in-search-of-a-problem server.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +1

      The history is that this was a server that customers had ASRock Rack build, but now it is being sold more broadly

  • @justinjja2
    @justinjja2 1 year ago +1

    Cloud gaming server?

  • @wmopp9100
    @wmopp9100 1 year ago +1

    Add a 400G NIC instead of the GPU and you have a big-ass external DPU.

  • @idtyu
    @idtyu 1 year ago +1

    ASRock makes all sorts of weird stuff; they even have an ITX AMD server board and an ITX Threadripper board. There's nothing they don't do.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago

      Totally! I think Will reviewed those AM4 boards for us. So many cool platforms. I love visiting them in Taipei

  • @scudsturm1
    @scudsturm1 1 year ago

    Can you game on this?

  • @scudsturm1
    @scudsturm1 1 year ago +1

    It's a well-hung server, not a switch.

  • @gork42
    @gork42 1 year ago

    Fucking finally

  • @BusAlexey
    @BusAlexey 1 year ago +6

    "asrock" and "unique servers" are not unique 😆

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +1

      Yes, so exciting that there are even servers like this available.

  • @timconnors
    @timconnors 1 year ago +1

    Kinda looks like a Cray blade. When we need to service a node in a Cray blade, we're draining workload from, and shutting off, 4 nodes in the blade. Meh, that's only 1/256th of our cluster, so no matter. Don't care about hot-swap anything when the entire blade can be hot-swapped out of the workload. Just work on it offline, or replace the whole blade.

  • @redtails
    @redtails 1 year ago

    Almost seems like these are not production servers... way too many screws everywhere, no hot-swap. Seems more like something a developer would use. Or maybe the "redundancy" lies in the fact that you save money and just buy every server x2 lol. Though, if you're really scraping the low end of the costs, I wonder why not just go for consumer parts. Some low-end server farms will literally just use consumer platforms in standard cases with standard consumer cooling, thereby probably saving a ton of money on enterprise HW and all the power wasted on 1U cooling screaming all day and night.

  • @MaxPower-11
    @MaxPower-11 1 year ago

    I guess if you wanted more storage you could get a PCIe NVMe SSD expansion card and attach it to the sideplane.

  • @Google_Does_Evil_Now
    @Google_Does_Evil_Now 1 year ago

    I tried to watch your video but there were 2 adverts right at the start. Goodbye.

  • @samiraperi467
    @samiraperi467 1 year ago +3

    AsRack.

  • @danielcubillos1325
    @danielcubillos1325 1 year ago

    I've been working with servers for the last 20 years, and let me tell you, pal, it is just crazy and useless. You must have a very good reason to go full bare metal in order to acquire one of these. First, each board has only four memory slots, so in theory you would only be able to accommodate 512GB of RAM for each one. Servers ought to be designed to accommodate the maximum amount of memory that the processor can handle; in this case an EPYC processor can handle up to 4TB of RAM (the latest Xeon processors can handle up to 6TB), so being able to accommodate only 512GB is just not acceptable by today's standards. If I were to acquire a server with EPYC processors, I would go for a ProLiant server with a dual-socket board where I could accommodate all the memory that could be supported by the processors, in this case 8TB of RAM. And then I could use virtualization to manage all the server's resources wisely. So I do not see why I should get one of those dual servers from ASRock.

  • @marcin_karwinski
    @marcin_karwinski 1 year ago

    This makes sense only in the eyes of the ASRock Rack PR team... If it were truly as cost optimised as you try to paint it, they'd just drop the extra PCIe OCuLink ports, cutting board space down to about mITX and consequently costs, for a dual-node setup like this. It's just a marketing ploy to upsell their deep mITX-y solution without taking into account that the board is kind of limited to begin with: one x16 slot + an x4 M.2 slot + six x8 OCuLinks + maybe x4 to Intel's dual 10GbE... that's still far off the 128 lanes available on EPYCs. To make it more cost efficient, one would need, say, 6x U.2 or more of local storage availability through said OCuLinks, especially if the drives supported 2x 4x pathing... heck, to pack it tight, the 6x8 could be split/combined into 3x 16x lanes and then, through slim 4x M.2 adapters, into 3x4 storage devices, and then the 4th x16 slot could be populated by, e.g., some thin single-slot accelerator. Or just use the 4x 16x lanes from the ports to drive 4 accelerator cards, e.g. paired in 2 sets... maybe a transcoding one, or a GPU for VDI or compute infra... Then this might make sense. Still, the connection to the storage would hamper this as just a compute node/blade for data loading/storing.
    Heck, this board makes far more sense in something more akin to an edge or home server or NAS: where you can start with 16c but grow to 32c or more depending on possible sales or server depot disassemblies down the line; similarly, where you'd start with, say, 2x or 4x 16GB DIMMs for the lowest yet still usable memory cost but have the chance to grow to 4x 64GB if/when those RDIMMs become available for less; a platform/system making good use of 2 OCuLinks that could be spread/converted to 16x SATA connectors and exposed as 3.5" slots of slow(er) storage to populate over time when costs permit or need arises; then add the remaining OCuLinks to 8 U.2s for a hot tier/caching/metadata device, use the M.2 for OS duties, and the x16 slot for an accelerator and/or networking through a splitter/riser. Put this in a neat, condensed package and you get something not too dissimilar to a QNAP TS-h1290FX... In my eyes, companies wanting to offer storage solutions could very well utilise such systems as the main selling force of this mobo/platform...

    • @btudrus
      @btudrus 1 year ago

      It's cost-optimized because they are using existing boards which they sell independently of this chassis...

  • @bilexperten
    @bilexperten 1 year ago

    Every ASRock motherboard I know about has failed prematurely. No ASRock for me, ever!

  • @BryanSeitz
    @BryanSeitz 1 year ago

    I love the content but not when Patrick speaks