Titanic Tyan: Up to 256 Core Server Chassis - 2U/4S Epyc Transport CX TN73B8037

  • Published 4 June 2021
  • **********************************
    Thanks for watching our videos! If you want more, check us out online at the following places:
    + Website: level1techs.com/
    + Forums: forum.level1techs.com/
    + Store: store.level1techs.com/
    + Patreon: / level1
    + KoFi: ko-fi.com/level1techs
    + L1 Twitter: / level1techs
    + L1 Facebook: / level1techs
    + L1/PGP Streaming: / teampgp
    + Wendell Twitter: / tekwendell
    + Ryan Twitter: / pgpryan
    + Krista Twitter: / kreestuh
    + Business Inquiries/Brand Integrations: Queries@level1techs.com
    IMPORTANT: Any email lacking “level1techs.com” should be ignored and immediately reported to Queries@level1techs.com.
    -------------------------------------------------------------------------------------------------------------
    Intro and Outro Music By: Kevin MacLeod (incompetech.com)
    Licensed under Creative Commons: By Attribution 3.0 License
    creativecommons.org/licenses/b...
  • Science & Technology

Comments • 194

  • @InvadersDie • 3 years ago • +71

    Man, looking at Epyc CPUs displayed like a trading card collection, it's something special

  • @TheNefastor • 3 years ago • +6

    "Quad Damage and not from FEDEX !"... as a recurring FEDEX victim I felt that one, Wendell ! -_-

  • @3vil8unny • 3 years ago • +33

    I would love to just spend a day walking thru the level one headquarters

    • @amessman • 3 years ago • +3

      Same, I'd love to see all those servers

    • @charleshein5991 • 3 years ago • +1

      With you there with a meet and greet dinner!!

  • @ethix_ru • 3 years ago • +83

    This video lacks the "Quad damage" sound from Quake.

    • @acubley • 3 years ago • +1

      czcams.com/video/TxzhpEbbnKk/video.html

    • @ericneo2 • 3 years ago • +1

      OVERKILL: Here are the giblets of what once was Intel...Or at least what we could wipe off the floor...

  • @ask_carbon • 3 years ago • +6

    Dyaum Wendell looks so happy with his toys here.

  • @andrekz9138 • 3 years ago • +29

    Wendell may be physically stronger than he looks, hoisting 94lbs like it's a simple desktop, but that's nothing compared to the strength of his pun game. "Appeeeeeeeeeeeaaaling"

  • @ZachFBStudios • 3 years ago • +22

    This man has more Epic CPUs than I have cores and more RAM than I have storage

  • @MrSidiox • 3 years ago • +21

    Kinda sounds like a perfect way to consolidate a full proxmox + ceph cluster to a single chassis. I could easily run my whole virtualization + ceph storage stack on it

  • @Level1Techs • 3 years ago • +33

    *OCP2 PCIe3 somehow became OCP3 in the video. It's OCP2, not OCP3, just FYI. Still, 10G and 25G are no problem

    • @peppybocan • 3 years ago

      ...tell me why, tell me why, Wendell? What are you doing with all these chips?

    • @Level1Techs • 3 years ago • +15

      @@peppybocan the kilothread server is inbound. I'm just the silver surfer heralding the arrival

    • @peppybocan • 3 years ago • +1

      @@Level1Techs I could tell you where I would like to use them ... oooh so many places where to use them. You could build a CI/CD pipeline that compiles and tests Chromium completely end-to-end! That's just a wild example...

    • @InvadersDie • 3 years ago

      @@Level1Techs hahaha and I thought 3:49 was the moneyshot! Kilothread server hahaha!

    • @nathanlowery1141 • 3 years ago

      @@peppybocan or a minecraft server.... for the entire country

  • @spiralout112 • 3 years ago • +17

    256 cores in 2u, sweet jeebus! It's almost hard to wrap your head around.

    • @creker1 • 3 years ago • +6

      That's not even the densest system. Supermicro has a 2U 4-node system with 2 sockets per node: 512 cores in 2U.

    • @Dylan-xc8yz • 3 years ago

      @@creker1 man that would get hot
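
For anyone sanity-checking the density talk in this thread, the arithmetic works out to the following (assuming 64-core Epyc SKUs, as the 256-core figure in the title implies):

```latex
% Tyan TN73-B8037: four single-socket nodes
4 \text{ nodes} \times 1 \text{ socket} \times 64 \text{ cores} = 256 \text{ cores in 2U}
% A dual-socket 2U4N system, as mentioned in the reply above
4 \text{ nodes} \times 2 \text{ sockets} \times 64 \text{ cores} = 512 \text{ cores in 2U}
```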

  • @-FAFO- • 3 years ago • +1

    This is basically computer porn at this point, but I'm glad someone is covering things in this sector. I'm sure this is helping guide people in more important walks of life. Love your enthusiasm and the chance to look at this kind of stuff

  • @iMadrid11 • 3 years ago • +5

    This server chassis form factor is a modern update to blade servers.

  • @madkvideo • 3 years ago • +2

    Man, this is the golden age for local hosted servers. Thanks AMD!

  • @nidiahk • 3 years ago • +2

    Damn this thing is cool! Always fun to see you do cool stuff with exciting hardware :D

  • @knoppix87710 • 3 years ago

    Thank you so much for this review, darn amazing! Reminds me of the Power systems from IBM. Edit: One use case might be a high-density deployment of Citrix or Horizon nodes in smaller DCs at regional centers. Cuts down on latency across large WANs.

  • @fromearth6282 • 3 years ago

    I know just a little about servers/networking, but I have never regretted subscribing! 🤘☺️

  • @FarmerKlein • 3 years ago

    A really cool chassis for sure. It's almost like a DIY Dell FX2 chassis. We got a pair of them at work as Hyper-V clusters.

  • @Squinoogle • 3 years ago • +1

    I feel like this could be the ideal starting point for offering modular... modules... so you can choose to have the 2U chassis and mix-n-match between server modules like these and storage modules or expansion modules. An internal bridge seems pretty straightforward - keep your server(s) on the left and storage on the right, make use of some of those wasted PCIe lanes...

  • @xerox445 • 3 years ago • +5

    Dude those epics in the trays lol INSANE

    • @Level1Techs • 3 years ago • +7

      thats how we roll at LEVEL1TECHS sub bell comment engage WOOOOOOO

    • @Outland9000 • 3 years ago • +1

      @@Level1Techs _Engage..._ 👉

  • @EldaLuna • 3 years ago

    Future tech in a really vintage building - a really wonderful pairing, ahaha. I just love the look of that place; it's so fitting for things like this.

  • @SpuriousECG • 3 years ago • +3

    Bringing 4 times the blades "2U", cool stuff :)

  • @Im_Ninooo • 3 years ago

    looking forward to the next videos!

  • @mtartaro • 3 years ago • +1

    I strongly recommend that you get your hands on a Nutanix block!

  • @AndreKK- • 3 years ago • +1

    Awesome system! Please add the Quad Damage sound on the next videos

  • @rocknrollajohnnyquid876 • 3 years ago • +1

    Wendell needs a mad scientist channel

  • @elvara872 • 3 years ago

    I like watching these server videos so you can see what will come to the consumer market later on.

  • @squeaksallan8195 • 3 years ago • +1

    to protect and to rock "Love it"

  • @Jdmorris143 • 3 years ago

    Loved the pun.

  • @user-yy6ph2lu9o • 3 years ago

    Pretty big blade server;) so cool!!!

  • @sprtn1o69 • 3 years ago

    The FX2 chassis without the overcomplicated fabric and IO modules. And Epyc, of course. I'd like one of these with a couple of HBAs out the back and some storage enclosures to make a 2-node HA makeshift SAN.

  • @wizard-uk1xh • 3 years ago

    Fujitsu has had a CX model running dual-socket Intel Xeon CPUs in each node, with 4 nodes per 2U box, although the Fujitsu model is quite a bit deeper than most servers and can sometimes have space issues if the rack isn't deep enough. Still, it runs up to eight 26-core CPUs in the 2U chassis.

  • @nikolaj5054 • 3 years ago • +1

    That is so cool

  • @michaeltimmerman2130 • 3 years ago

    excited for Tinkerbell video!

  • @TrueThanny • 3 years ago • +11

    08:00 The P variants all have exactly the same specs as their non-P counterparts. Literally the only difference is the lack of dual-socket support.

    • @bernds6587 • 3 years ago

      that's... what he basically said?

    • @Level1Techs • 3 years ago • +3

      What I was trying to say was that you won't find a P variant clocked like the F series, because the point of the P series is to be cheaper for 1S systems, not to be the fastest. Hence the "odd" recommendation that F CPUs sometimes still make sense in 1S servers, even with the cheaper P-series alternatives available.

    • @TrueThanny • 3 years ago

      @@Level1Techs Fair enough.

  • @ScubaSteveTXST • 3 years ago

    Impressive blade

  • @GameCyborgCh • 3 years ago • +1

    I would really like to see more Proxmox content. And since it already supports it out of the box, Ceph.

  • @taiiat0 • 3 years ago

    For my world, that looks like a useful density option for game server companies.
    Or I guess just general datacenter use through and through - web hosting, anything that really just needs CPU/RAM capabilities and an internet connection.

  • @sstrohkorb • 3 years ago

    It would be fun to run some massive spark queries on this cluster, it would process everything so quickly

  • @paulgray1318 • 3 years ago • +1

    What would I use it for? Hmmm - heat a medium-sized office in the winter.
    The thing that struck me was the layout: if you have memory that runs hot, then without any thermal zoning it might affect the CPU, and equally vice versa.
    I'd be interested in seeing some heavy memory/CPU workloads and thermals - can you pull the temps of the individual memory slots? I wonder how hot those sticks next to the CPU will run.
    But damn, that's some fun Lego you have there.

  • @marcusaurelius6607 • 3 years ago • +1

    ordered it for home dev tasks. thanks!

  • @1myfriendjohn • 3 years ago

    I have never wanted something that I do not need this much in my life before...

  • @jannikmeissner • 3 years ago

    I'd love to see Flatcar Linux and Kubernetes on these - actually planning to test any hardware I can get my hands on for Flatcar Container Linux and help build out the HCL

  • @GeoffSeeley • 3 years ago

    @3:52 this shot is epyc!

  • @dermothoyne2393 • 3 years ago

    That pkg is hysterical- Kinda like 4 oversized blades
    Quake in bkgd: *HOLY SH!T ! ?*

  • @Catchgate • 3 years ago

    You are my density...

  • @Phynix72 • 3 years ago

    4:02 voices are coming from IT ops section at Asgard.

  • @marekbarycz4397 • 3 years ago • +1

    Ryzen was an epic CPU for the everyday user, but Epyc is a revolution in servers: high core density at low cost thanks to the chiplet design. Boys and girls, AMD isn't winning by simply being better at raw performance; they're winning because this design is extremely smart and well thought out. It shows how companies can cut costs at insane rates - you can fit more into the same server room than before.

  • @MrVayolence • 3 years ago

    Very appealing indeedd😂😂

  • @KizerKazeATLive • 3 years ago • +1

    Up to the 3rd dad joke
    *OK, ENOUGH!*

  • @declanmcardle • 2 years ago

    Does Lionel Hutz practice in #42 too? "The Lawyers of Madison County".

  • @thatLion01 • 3 years ago • +2

    That's a sweet server. 4 nodes in 2U. I didn't know Tyan was making these again. Can you upgrade to 10GbE?

  • @bullettube9863 • 3 years ago

    If you ever wondered what kind of hardware your company's IT department was using, this behind-the-scenes video should help!

  • @mathyoooo2 • 3 years ago • +2

    Quad damage indeed

  • @denvera1g1 • 3 years ago

    I can't wait for this 2-socket 4-node to hit the $200 mark like that Dell PowerEdge C6100 I ALMOST bought about 3 years ago.
    It was made in 2010, at the time had the best processors you could get with the new 6-core Xeon X5675, and each node could hold, I believe, up to 192GB of RAM.
    Edit: I do like how the Tyan nodes have the drive controllers and cage assemblies as part of the node instead of a backplane, but I wish it was a little more dense. I'd love to see a new standard for NVMe hot swap that uses enclosures for a 110mm NVMe drive and then just uses a USB-C connector.
    I know this specific adapter wouldn't be well suited, but its external design would be great: the SSK "SHE-C325" has edges that can be used to guide it into a rail quite well. There are several internal changes I would make to the design, specifically making it tool-less, putting a thermal pad behind the NVMe drive, and using a door that closes onto the top of the NVMe drive with a thermal pad instead of sliding it inside a tube (which scrapes off the thermal pad most of the time).

  • @neosmith166 • 3 years ago

    The video thumbnail looks as if he is holding the prototype of the BFG gun!

  • @Demodude123 • 3 years ago

    Kubernetes and ceph/rook for sure with 4 nodes

  • @ChuckNorris-lf6vo • 3 years ago • +1

    Yeees let's dooo iiittt !!!!!!!!!

  • @Gowan08 • 3 years ago

    Looking at VxRail and other solutions in that style, this is truly the future. The only thing that concerns me is storage density: this is great for general-purpose VMs, but any monster VM with tons of storage doesn't fit this mold. However, a file system with NFS backing the larger VMs seems like an appropriate way of resolving that issue. So interesting to see the density changes. I am hoping this continues to compete with cloud and helps remove marketecture meetings.

  • @pkt1213 • 3 years ago

    Check out Jeff from Craft Computing. I think the knife he's using to open hard drives would be inspirational for Wendell

  • @DespoBryant • 3 years ago

    Boiler Snake Merch!

  • @MrOne2watch • 3 years ago

    cluster? yes please ! :)

  • @Luscious3174 • 3 years ago • +2

    F@H CPU slots would be a good try for this, since those scale well on multiples of 2, 3 and 5. You could try 30, 60 and even 90 threads if you have a 64-core processor lying around and see what kind of PPD they bring to the table.
    Personally though, I'd stick with a 1U server and shove four A100 cards in there for the highest density. Expensive AF? Sure. But after 12-24 months of mining the costs could be recouped. Most servers last 5 years easily and even go beyond the warranty. The great thing about passively cooled CPUs and GPUs is that fan replacements are easy, and with good air conditioning in the room they'll last beyond that 50,000-hour MTBF.
    All that said, I am curious what your power bills are, LOL. Do you have solar?
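
A quick sketch of the slot-sizing rule of thumb from the comment above: thread counts whose only prime factors are 2, 3 and 5 (which is where 30, 60 and 90 come from). The snippet is illustrative only and not part of any Folding@home tooling.

```python
# Enumerate candidate Folding@home CPU-slot thread counts for a 64-core / 128-thread
# Epyc, following the comment's rule of thumb that counts with no prime factors other
# than 2, 3 and 5 split work units cleanly. Illustrative sketch, not an F@H API.

def only_small_factors(n: int, primes=(2, 3, 5)) -> bool:
    """Return True if n has no prime factors outside `primes`."""
    for p in primes:
        while n % p == 0:
            n //= p
    return n == 1

max_threads = 128  # 64 cores with SMT enabled
print([n for n in range(2, max_threads + 1) if only_small_factors(n)])
# The 30, 60 and 90 from the comment are in the list, along with 120 and 128.
```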

  • @hr31gtr • 3 years ago

    I would love to see this running Nutanix

  • @madnesssoft2012 • 3 years ago

    Raise your hand if you were one of the Tyan K8WE owners 15 years ago for quad core. *raises hand*

  • @jamesunknown6016 • 3 years ago

    I see you’re the TF2 Bot God Running Servers 24/7

  • @charleshein5991 • 3 years ago

    I would love to deploy this as a family VM server, but I want to be able to extend high-end graphics and play LAN-style and wide-area group games like Fortnite across 2 to 4 terminals or more. That leads me to other ideas, like low-overhead, easily deployed tournament "VLAN"-style control where everyone is on exactly the same playing field in terms of hardware, or extremely low-latency virtual hardware - especially if deployed through these more powerful NUC-style micro PCs to literally everything with an HDMI port. Very exciting indeed!!

    • @charleshein5991 • 3 years ago

      Sorry geeked out for a sec. But really very awesome!

  • @Minitomate • 3 years ago

    I would like to see how far you can go in terms of pushing these monstrous machines to their limits.

  • @lasbrujazz • 3 years ago

    Ah, I originally thought this would be 2 systems of 2 sockets.

  • @amateurwizard • 3 years ago

    Casually glosses over the robot-spider

  • @mritunjaymusale • 3 years ago • +3

    This is literally the closest thing to Liqid's dream: just CPUs and RAM in one rack, and at the back it should only have power and PCIe fabric ports - that's it, the rest is all Liqid's fabric sauce.

  • @bw_merlin • 3 years ago

    For me, I would be keen to deploy this as a Microsoft Azure HCI stack. A few extra drive bays would be nice.

  • @itsdeonlol • 3 years ago

    This machine is impressive!!! Imagine just having 4TBs of memory!!!

    • @Teluric2 • 3 years ago

      There are smaller blade servers with 12TB ram.

  • @pcb7377 • 1 year ago

    Hello! Cool videos! I really liked this piece of hardware! It's a pity you didn't show how the power supplies are combined - the power distribution board is very interesting! Would you be able to make a detailed video on how the power distribution board is arranged? 2U/4S Epyc Transport CX TN73B8037 / Transport CX TN73-B8037-X4S / TransPort CX TN73-B8037-X4S /
    2U4n-F/C621-M3/2U4N-F/ROME-M3
    Or something similar to these chassis!

  • @stephenreaves3205 • 3 years ago

    You guys should take a look at Openshift

  • @guydurand6270 • 3 years ago

    What about the Supermicro Twin series? They've been around for a very long while. The AMD G34 socket Twin servers from Supermicro are similar if I'm not mistaken. Could you do a comparison if there are CPU equivalents from both companies?

  • @Veyron640 • 3 years ago

    Level 1T:
    Can you do a review of a server set up for the following?
    - 15 drafters using Revit 2020
    - 2 managers that also need to be on that server.
    By its nature, Autodesk Revit has a "central model" that resides on a separate central computer, and multiple users sync their work up to that system.
    - The 15 drafters are wasting 25-30% of their working time on "syncing"...
    What system would be ideal for a good review on here, covering this type of environment and this issue?
    Let me know.
    Ty

  • @TheKev507 • 3 years ago

    Would love to see Tinkerbell and also K3s

  • @TheGuruStud • 3 years ago • +6

    Intel engineers are crying in the corner when they see multiple AMD socket servers.

  • @SleeperJohns • 3 years ago • +3

    How dare you use Quad Damage without the Quake community's permission!
    Ah, what the heck. It's not like we own it, though we own people with it.

  • @pkt1213 • 3 years ago

    Must be nice. My recent conversation, summarized:
    Me: I need to plan and purchase a new server to replace my one from 2012.
    IT: We're going to the cloud.
    Me: Great. Can I get implementation guidance and pricing so I can budget.
    IT: We don't have that.
    Me: I need a new server.
    IT: We're going to the cloud.
    🤦‍♂️

  • @accrevoke • 3 years ago

    Hmm, I would love to run Nutanix AHV on those instead.

  • @thomasesr • 3 years ago • +1

    How about Supermicro's A+ Server 2124BT-HNTR, with 4 nodes of 2 AMD Epycs each in 2U?
    512 cores / 1024 threads in 2U.

  • @DeeGeeFi • 3 years ago • +3

    How many times did Wendell carry that server in from the hallway? It was filmed from at least 3-4 different directions? :D

    • @lucidnonsense942 • 3 years ago

      It's almost like there were FOUR units?

    • @iMadrid11 • 3 years ago • +1

      Once. You put 3-4 cameras on tripods, and the video editor cuts together the best shots from each camera angle.

  • @HERETIC529 • 3 years ago

    I want to see this used as a multi node mainframe for data scientists

  • @TrueThanny • 3 years ago

    05:10 You can buy 256GB DIMMs right now. They just cost about $3K apiece. That's 2TB per socket.
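
For context, that 2 TB figure assumes one 256 GB DIMM in each of Epyc's eight memory channels, which is the arithmetic the comment implies:

```latex
8 \text{ DIMMs/socket} \times 256 \text{ GB} = 2048 \text{ GB} \approx 2 \text{ TB per socket}
8 \times \$3\text{K} \approx \$24\text{K} \text{ per socket in DIMMs alone}
```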

  • @QuentinStephens • 3 years ago

    How about collaborating with Jeff from Craft Computing and doing a Proxmox cluster with iSCSI Freenas data hosts?

  • @Quarky_ • 3 years ago

    What is the advantage of 4 single-socket boards, as opposed to, say, 2 dual-socket boards? I would imagine the latter would be cheaper overall - fewer duplicated components (e.g. power rails) and more room for expansion slots - without sacrificing density.

    • @mdd1963 • 3 years ago

      Some clusters require at least 3 nodes....
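
One way to read that reply: HA stacks built on majority quorum (Proxmox/Corosync, Ceph monitors and the like) need a strict majority of nodes alive, so four single-socket nodes buy a kind of failure tolerance that two dual-socket boards cannot:

```latex
\text{quorum}(N) = \left\lfloor \frac{N}{2} \right\rfloor + 1
\quad\Rightarrow\quad
\text{quorum}(2) = 2,\;\; \text{quorum}(3) = 2,\;\; \text{quorum}(4) = 3
```

So a 2-node cluster cannot lose a node and keep quorum, while 3- and 4-node clusters can.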

  • @Kurukx • 3 years ago

    RACK EM UP :)

  • @hillppari • 3 years ago

    Did I hear a @Jeff Geerling reference at 10:22?

  • @warren_r • 3 years ago

    I wonder what the price point of this is like compared to two of Tyan's 2-socket 1U Epyc chassis.

  • @andarvidavohits4962 • 3 years ago • +2

    Make into a Proxmox cluster. Kthxbai.

  • @kazriko • 3 years ago • +1

    So, kind of like a 2u Blade style server then, but with a little less shared stuff?

    • @stephen1r2 • 3 years ago • +1

      Yes, but in this case they are only sharing PSUs, so total (mostly) independence

  • @Lukedagama • 3 years ago

    barebones

  • @MrPunkassfuck • 3 years ago

    94 pounds? Did they make it out of rocks instead of sand?
    Looking at those rails makes me wonder how many times Wendell has gotten the skin on his fingers 'uninstalled' on those.

  • @Dirkadin • 3 years ago

    Definitely a Kubernetes cluster or HA database even.

  • @TheEVEInspiration • 3 years ago

    I wonder how 2 x 2000W power supplies can be enough for redundancy when one fails.
    The 4 systems alone, without expansion, will consume close to 2000W, will they not?
    Does it just enter a power-limited mode for the CPUs?
    Will some expansion slots just stop working?
    Or is there sufficient overhead in one power supply for it to carry a combined load of 3000W (assuming the expansion cards across the servers draw close to 1000W combined)?
    It still looks great to me, as many uses will not need heavy power consumption in the slots, just extra IO of some kind.

    • @Level1Techs • 3 years ago • +2

      The fully loaded draw is closer to 1250-1300W +/-, so one PSU has plenty of margin. But modern chassis are smart enough to be aware of the overall power budget, too.
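
A minimal sanity check of the PSU headroom using only the figures quoted in this thread (2 x 2000 W supplies, roughly 1250-1300 W fully loaded); the numbers are the thread's, not a new measurement:

```python
# PSU redundancy sanity check using the numbers quoted in the thread above.
PSU_RATING_W = 2000   # each of the two hot-swap supplies
FULL_LOAD_W = 1300    # upper end of the fully loaded draw quoted by Level1Techs

def survives_single_psu_failure(load_w: float, psu_w: float) -> bool:
    """Return True if one supply alone can carry the whole chassis load."""
    return load_w <= psu_w

print(survives_single_psu_failure(FULL_LOAD_W, PSU_RATING_W))  # True
print(PSU_RATING_W - FULL_LOAD_W)                              # ~700 W of headroom
```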

  • @fbifido2 • 3 years ago • +1

    How about a 4-node Proxmox cluster with Ceph storage, running Portainer for Docker and Kubernetes workloads?

    • @creker1 • 3 years ago

      Ditch Proxmox and Portainer and you've got yourself a nice k8s cluster.

  • @KD_Puvvadi • 3 years ago

    Would love a Kubernetes cluster on the Proxmox hypervisor

  • @Agent.J • 3 years ago

    how do you cool the cpus on these?

  • @toddhetrick615 • 3 years ago

    Proxmox + Ceph + HA + 10Gb networking. Run some loads and test out the HA - what actually happens when you down a node? Most people never take Proxmox this far on YT.