Titanic Tyan: Up to 256 Core Server Chassis - 2U/4S Epyc Transport CX TN73B8037
- Added 4 Jun 2021
- **********************************
Thanks for watching our videos! If you want more, check us out online at the following places:
+ Website: level1techs.com/
+ Forums: forum.level1techs.com/
+ Store: store.level1techs.com/
+ Patreon: / level1
+ KoFi: ko-fi.com/level1techs
+ L1 Twitter: / level1techs
+ L1 Facebook: / level1techs
+ L1/PGP Streaming: / teampgp
+ Wendell Twitter: / tekwendell
+ Ryan Twitter: / pgpryan
+ Krista Twitter: / kreestuh
+ Business Inquiries/Brand Integrations: Queries@level1techs.com
IMPORTANT: Any email lacking “level1techs.com” should be ignored and immediately reported to Queries@level1techs.com.
-------------------------------------------------------------------------------------------------------------
Intro and Outro Music By: Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 3.0 License
creativecommons.org/licenses/b...
Man, looking at Epyc CPUs displayed like a trading card collection, it's something special
"Quad Damage and not from FEDEX !"... as a recurring FEDEX victim I felt that one, Wendell ! -_-
I would love to just spend a day walking through the Level One headquarters
Same, I'd love to see all those servers
With you there with a meet and greet dinner!!
This video lacks the "Quad damage" sound from Quake.
czcams.com/video/TxzhpEbbnKk/video.html
OVERKILL: Here are the giblets of what once was Intel...Or at least what we could wipe off the floor...
Dyaum Wendell looks so happy with his toys here.
Wendell may be physically stronger than he looks, hoisting 94lbs like it's a simple desktop, but that's nothing compared to the strength of his pun game. "Appeeeeeeeeeeeaaaling"
This is the way.
This man has more Epyc CPUs than I have cores and more RAM than I have storage
Kinda sounds like a perfect way to consolidate a full Proxmox + Ceph cluster into a single chassis. I could easily run my whole virtualization + Ceph storage stack on it
*OCP 2.0 PCIe 3.0 somehow became OCP 3.0. It's OCP 2.0, not OCP 3.0, just FYI. Still, 10 and 25G is no problem
...tell me why, tell me why, Wendell? What are you doing with all these chips?
@@peppybocan the kilothread server is inbound. I'm just the silver surfer heralding the arrival
@@Level1Techs I could tell you where I would like to use them ... oooh so many places where to use them. You could build a CI/CD pipeline that compiles and tests Chromium completely end-to-end! That's just a wild example...
@@Level1Techs hahaha and I thought 3:49 was the moneyshot! Kilothread server hahaha!
@@peppybocan or a minecraft server.... for the entire country
256 cores in 2u, sweet jeebus! It's almost hard to wrap your head around.
That's not even the densest system. Supermicro has a 2U 4-node system with 2 sockets each. 512 cores in 2U.
@@creker1 man that would get hot
This qualifies as computer porn at this point, but I'm glad someone is covering things in this sector. I'm sure this is helping guide people in more important walks of life. Love your enthusiasm and the chance to look at this kind of stuff
This server chassis form factor is a modern update to blade servers.
Man, this is the golden age for local hosted servers. Thanks AMD!
Damn this thing is cool! Always fun to see you do cool stuff with exciting hardware :D
Thank you so much for this review, darn amazing! Reminds me of the Power Systems from IBM. Edit: One use case might be a high-density deployment of Citrix or Horizon nodes in smaller DCs at regional centers. Cuts down on latency across large WANs.
I know just a little about servers/networking, but I have never regretted subscribing! 🤘☺️
A really cool chassis for sure. It's almost like a DIY Dell FX2 chassis. We got a pair of them at work as Hyper-V clusters.
I feel like this could be the ideal starting point for offering modular... modules... so you can choose to have the 2U chassis and mix-n-match between server modules like these and storage modules or expansion modules. An internal bridge seems pretty straightforward - keep your server(s) on the left and storage on the right, make use of some of those wasted PCIe lanes...
Dude those Epycs in the trays lol INSANE
thats how we roll at LEVEL1TECHS sub bell comment engage WOOOOOOO
@@Level1Techs _Engage..._ 👉
future tech in a really vintage building vid. really wonderful pairing ahaha. just love the look of that place just so fitting for things like this.
Bringing 4 times the blades "2U", cool stuff :)
looking forward to the next videos!
I strongly recommend that you get your hands on a Nutanix block!
Awesome system! Please add the Quad Damage sound on the next videos
Wendell needs a mad scientist channel
This *is* a mad scientist channel.
I like watching those server videos so you can see what will come to consumer market later on.
to protect and to rock "Love it"
Loved the pun.
Pretty big blade server;) so cool!!!
The FX2 chassis without the overcomplicated fabric and IO modules. And Epyc of course. I'd like one of these with a couple of HBAs out the back and some storage enclosures to make a 2-node HA makeshift SAN
Fujitsu has had a CX model running dual-socket Intel Xeon CPUs in each of the nodes, and 4 nodes per 2U box, although the Fujitsu model is quite a bit deeper than most servers and can sometimes have space issues if the rack system isn't deep enough. However, it still runs up to eight 26-core CPUs in the 2U chassis.
That is so cool
excited for Tinkerbell video!
08:00 The P variants all have exactly the same specs as their non-P counterparts. Literally the only difference is the lack of dual-socket support.
that's... what he basically said?
What I was trying to say was that you won't find a P variant clocked like the F series, because the point of the P series is to be cheaper for 1S systems, not to be the fastest. Hence the "odd" recommendation that sometimes F CPUs in 1S servers still make sense even given the cheaper P-series alternatives.
@@Level1Techs Fair enough.
Impressive blade
I would really like to see more proxmox content. And since it already supports it out the box, Ceph
for my world, that looks like a useful density option for Game Server Companies.
or i guess just general Datacenter through and through - Web Hosting, anything that really just needs CPU/RAM capabilities and an Internet Connection.
It would be fun to run some massive spark queries on this cluster, it would process everything so quickly
What would I use it for, hmmmm - heat a medium sized office in the winter.
The thing that struck me was the layout: if you have memory that runs hot, without any thermal zoning it might affect the CPU, and equally vice versa.
I'd be interested in seeing some heavy memory/CPU workloads and thermals - can you pull the temps of the individual memory slots? I wonder how hot those sticks next to the CPU will run.
But damn, that's some fun Lego you have there.
ordered it for home dev tasks. thanks!
I have never wanted something that I do not need so much in my life before...
I'd love to see Flatcar Linux and Kubernetes on these - actually planning to test any hardware I can get my hands on for Flatcar Container Linux and help build out the HCL
@3:52 this shot is epyc!
That pkg is hysterical- Kinda like 4 oversized blades
Quake in bkgd: *HOLY SH!T ! ?*
You are my density...
4:02 voices are coming from IT ops section at Asgard.
Ryzen was an epic CPU for the everyday user, but Epyc is a revolution in servers. High core density at low cost thanks to the chiplet design. Boys and girls, AMD isn't winning just by being better at raw performance; they're winning because this design is extremely smart and well thought out. It shows how companies can cut costs at insane rates. You can fit more in the same server room than before.
Very appealing indeed 😂😂
Up to the 3rd dad joke
*OK, ENOUGH!*
Does Lionel Hutz practice in #42 too? "The Lawyers of Madison County".
That's a sweet server. 4 nodes in 2U. I didn't know Tyan was making these again. Can you upgrade to 10GbE?
If you ever wondered what kind of hardware your company IT department was using this behind the scenes video should help!
Quad damage indeed
I can't wait for this 2-socket 4-node to hit the $200 mark like that Dell PowerEdge C6100 I ALMOST bought like 3 years ago.
It was made in 2010, and at the time had the best processors you could get with the new 6-core Xeon X5675; each node could hold, I believe, up to 192GB of RAM.
Edit: I do like how the Tyan nodes have the drive controllers and cage assemblies as part of the node instead of a backplane, but I wish it was a little bit more dense. I'd love to see a new standard for NVMe hot swap that uses enclosures for a 110mm NVMe drive and then just uses a USB-C connector.
I know this specific adapter wouldn't be well suited, but its external design would be great: the SSK "SHE-C325" has edges that can be used to guide it into a rail quite well. There are several internal changes I would make to the design, specifically making it tool-less, putting a thermal pad behind the NVMe, and using a door that closes onto the top of the NVMe with a thermal pad instead of sliding it inside of a tube (which scrapes off the thermal pad most of the time).
The video thumbnail looks as if he is holding the prototype of the BFG gun!
Kubernetes and ceph/rook for sure with 4 nodes
Yeees let's dooo iiittt !!!!!!!!!
Looking at VxRail and other similar solutions, this is truly the future. The only thing that concerns me is the density of storage. This is great for general-purpose VMs; any monster VM with tons of storage doesn't fit this mold. However, a file system with NFS backing the larger VMs seems like an appropriate way of resolving that issue. So interesting to see the density changes. I am hoping this continues to compete with cloud and helps remove marketecture meetings.
Check out Jeff from Craft Computing. I think the knife he's using to open hard drives would be inspirational for Wendell.
Boiler Snake Merch!
cluster? yes please ! :)
F@H CPU slots would be a good try for this, since those scale well on multiples of 2, 3 and 5. You could try 30, 60 and even 90 threads if you have a 64-core processor lying around and see what kind of PPD they bring to the table (quick sketch of that rule after this comment).
Personally though I'd stick with a 1U server and shove four A100 cards in there for the highest density. Expensive AF? Sure. But after 12-24 months of mining the costs could be recouped. Most servers last 5 years easy and even go beyond when the warranty expires. The great thing about passively cooled CPUs and GPUs is that fan replacements are easy, and if you've got good air conditioning in the room, they'll last beyond that 50,000-hour MTBF.
All that said, I am curious what your power bills are LOL Do you have solar?
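A hypothetical sketch of that 2/3/5 rule of thumb from the comment above: pick F@H CPU slot sizes whose only prime factors are 2, 3 and 5, since work units tend to split poorly across thread counts with large prime factors. The function names here are purely illustrative, not part of any F@H tooling.

```python
# Hypothetical helper: list F@H CPU slot sizes up to a core count whose only
# prime factors are 2, 3 and 5 (the rule of thumb from the comment above).
def is_smooth(n: int, primes=(2, 3, 5)) -> bool:
    """Return True if n's prime factorization uses only the given primes."""
    for p in primes:
        while n % p == 0:
            n //= p
    return n == 1

def good_slot_sizes(max_threads: int) -> list[int]:
    """All 'safe' CPU slot sizes up to max_threads, smallest to largest."""
    return [n for n in range(1, max_threads + 1) if is_smooth(n)]

if __name__ == "__main__":
    # On a 64-core / 128-thread Epyc node: 30, 60, 90, 120 all qualify,
    # while e.g. 62 (2 x 31) or 112 (2^4 x 7) would not.
    print(good_slot_sizes(128)[-10:])
```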
I would love to see this running Nutanix
Raise your hand if you were one of the Tyan K8WE owners 15 years ago for quad core. *raises hand*
I see you’re the TF2 Bot God Running Servers 24/7
I would love to deploy this as a family VM server, but I want to be able to extend high-level graphics and play local LAN-style and group wide-area games like Fortnite across 2 to 4 terminals or more. That leads me to other ideas, like low-overhead, easily deployed tournament "VLAN"-style control where everyone is on exactly the same playing field as far as hardware, or extremely low-latency virtual hardware. Especially if deployed through these more powerful NUC-style micro PCs to literally everything with an HDMI port. Very exciting indeed!!
Sorry geeked out for a sec. But really very awesome!
I would like to see how far you can go in terms of pushing these monstrous machines to their limits.
Ah, I originally thought this would be 2 systems of 2 sockets.
Casually glosses over the robot-spider
This is literally the closest thing to Liqid's dream: just CPUs and RAM in one rack, and at the back it should only have power and PCIe fabric ports. That's it; the rest is all Liqid's fabric sauce.
For me, I would be keen to deploy this as a Microsoft Azure Stack HCI cluster. A few extra drive bays would be nice.
This machine is impressive!!! Imagine just having 4TBs of memory!!!
There are smaller blade servers with 12TB ram.
Hello! Cool videos! I really liked the hardware! It's a pity you didn't show how the combining of the power supplies works! The power distribution board is very interesting! Would you be able to make a detailed video on how the power distribution board is arranged? 2U/4S Epyc Transport CX TN73B8037 / Transport CX TN73-B8037-X4S / TransPort CX TN73-B8037-X4S /
2U4N-F/C621-M3 / 2U4N-F/ROME-M3
Or something similar to these chassis!
You guys should take a look at OpenShift
What about the Supermicro Twin series? They've been around for a very long while. The AMD G34 socket Twin servers from Supermicro are similar if I'm not mistaken. Could you do a comparison if there are CPU equivalents from both companies?
Level 1T:
Can you do a review of a server set up for the following...
- 15 drafters using Revit 2020
- 2 managers that also need to be on that server.
Autodesk Revit, by its nature, has a "central model" that resides on a separate central computer, and multiple users sync their work up to that system.
- The 15 drafters are wasting 25-30% of their working time on "syncing"...
What system would be ideal for a good review on here that you could cover for this type of environment with this issue?
Let me know.
Ty
Would love to see Tinkerbell and also K3s
Intel engineers are crying in the corner when they see multiple AMD socket servers.
How dare you use Quad Damage without the Quake community's permission!
Ah, what the heck. It's not like we own it, though we own people with it.
Must be nice. My recent conversation, summarized:
Me: I need to plan and purchase a new server to replace my one from 2012.
IT: We're going to the cloud.
Me: Great. Can I get implementation guidance and pricing so I can budget.
IT: We don't have that.
Me: I need a new server.
IT: We're going to the cloud.
🤦♂️
Hmm, I would love to run Nutanix AHV on those instead.
How about Supermicro's A+ Server 2124BT-HNTR?
With 4 nodes with 2 AMD Epycs on each node in 2U.
512 cores / 1024 threads in 2U.
How many times did Wendell carry that server in from the hallway? It was filmed from at least 3-4 different directions? :D
It's almost like there were FOUR units?
Once. You put 3-4 cameras on tripods; the video editor stitches and selects the best footage from each camera angle.
I want to see this used as a multi node mainframe for data scientists
05:10 You can buy 256GB DIMMs right now. They just cost about $3K a piece. That's 2TB per socket.
that's like 15 more chrome tabs
How about collaborating with Jeff from Craft Computing and doing a Proxmox cluster with iSCSI FreeNAS data hosts?
What is the advantage of 4 single socket, as opposed to say 2 dual socket boards? I would imagine the latter would be cheaper overall, fewer duplicated components (e.g. power rails), also more room for expansion slots, without sacrificing on density.
Some clusters require at least 3 nodes....
RACK EM UP :)
did i hear a @Jeff Geerling reference at 10:22
I wonder what the price point of this is like compared to two of Tyan's 2-socket 1U Epyc chassis.
Make into a Proxmox cluster. Kthxbai.
So, kind of like a 2u Blade style server then, but with a little less shared stuff?
Yes, but in this case they are only sharing PSUs, so total (mostly) independence
barebones
94 pounds? Did they make it out of rocks instead of sand?
Looking at those rails makes me wonder how many times Wendell has gotten the skin on his fingers 'uninstalled' on those.
Definitely a Kubernetes cluster or HA database even.
I wonder how redundant 2 x 2000W power supplies really are - would one be enough when the other fails?
The 4 systems alone, without expansion, will consume close to 2000W, will they not?
Does it just enter a power-limited mode for the CPUs?
Will some expansion slots just stop working?
Or is there sufficient overhead in one power supply to have it carry a combined load of 3000W (assuming expansion across both servers is close to 1000W combined)?
It still looks great to me, as many uses will not need heavy power consumption in the slots, just extra IO of some kind.
The fully loaded draw is closer to 1250-1300W +/-, so one PSU has plenty of margin. But modern chassis are smart enough to be aware of the overall power budget, too.
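A rough back-of-the-envelope version of that power budget, using only the numbers from this thread (two 2000W PSUs, ~1250-1300W fully loaded) plus an assumed, purely illustrative expansion-card allowance:

```python
# Back-of-the-envelope PSU redundancy check using the numbers in this thread.
PSU_WATTS = 2000          # each of the two hot-swap supplies
NODE_LOAD_WATTS = 1300    # ~1250-1300W fully loaded across all four nodes (per the reply above)
EXPANSION_WATTS = 4 * 75  # assumption: one 75W PCIe card per node, purely illustrative

total = NODE_LOAD_WATTS + EXPANSION_WATTS
headroom = PSU_WATTS - total
print(f"Load on a single surviving PSU: {total} W "
      f"({'OK' if headroom >= 0 else 'over budget'}, {headroom} W headroom)")
```

With those figures the surviving PSU carries about 1600W, which is why a single supply still has margin; heavier expansion loads would eat into that headroom or trigger the chassis power-budget limiting mentioned above.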
How about a 4-node Proxmox cluster w/ Ceph storage, running Portainer for Docker and Kubernetes workloads?????
Ditch Proxmox and Portainer and you've got yourself a nice k8s cluster.
Would love a k8s cluster with a Proxmox hypervisor
how do you cool the cpus on these?
Proxmox + Ceph + HA + 10Gb net. Run some loads and test out the HA. What actually happens when you down a node? Most people never take Proxmox this far on YT.
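If anyone does try that on camera, here's a small sketch of how you might watch the failover from outside the cluster with the proxmoxer Python client. The host name, credentials, and the 2-second poll interval are placeholders, and it assumes the VMs are already HA-managed; it only reads the standard /nodes and /cluster/resources endpoints.

```python
# Sketch: poll the Proxmox API while a node is downed to watch HA move VMs.
# Assumes the `proxmoxer` package and an existing cluster with HA-managed VMs.
import time
from proxmoxer import ProxmoxAPI

# Placeholder connection details -- substitute your own host, user, password.
pve = ProxmoxAPI("node1.example.lan", user="root@pam",
                 password="secret", verify_ssl=False)

def snapshot():
    """Return {vm name: (node, status)} for every VM the cluster knows about."""
    return {vm["name"]: (vm.get("node"), vm["status"])
            for vm in pve.cluster.resources.get(type="vm")}

before = snapshot()
print("Nodes online:", [n["node"] for n in pve.nodes.get() if n.get("status") == "online"])
print("Now pull the plug on one node...")

while True:
    time.sleep(2)
    after = snapshot()
    moved = {name: (before[name], after[name])
             for name in after if name in before and after[name] != before[name]}
    if moved:
        print("HA moved/restarted:", moved)
        break
```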