100 Gig Networking in your Home Lab

  • Uploaded 19 May 2024
  • Mikrotik 100G Switch - amzn.to/4d5pFkG
    ConnectX-5 - amzn.to/3Wau6om
    40G Upgrade Video - • 40 Gig LAN - Why did I...
    -------------------------------------------------------------------------------------------
    🛒 Amazon Shop - www.amazon.com/shop/raidowl
    👕 Merch - www.raidowlstore.com
    🔥 Check out today's best deals from Newegg: howl.me/clshD8fv8xj
    -------------------------------------------------------------------------------------------
    Join the Discord: / discord
    Become a Channel Member!
    / @raidowl
    Support the channel on:
    Patreon - / raidowl
    Discord - bit.ly/3J53xYs
    Paypal - bit.ly/3Fcrs5V
    My Hardware:
    Intel 13900k - amzn.to/3Z6CGSY
    Samsung 980 2TB - amzn.to/3myEa85
    Logitech G513 - amzn.to/3sPS6yv
    Logitech G703 - shop-links.co/cgVV8GQizYq
    WD Ultrastar 12TB - amzn.to/3EvOPXc
    My Studio Equipment:
    Sony FX3 - shop-links.co/cgVV8HHF3mX / amzn.to/3qq4Jxl
    Sony 24mm 1.4 GM -
    Tascam DR-40x Audio Recorder - shop-links.co/cgVV8G3Xt0e
    Rode NTG4+ Mic - amzn.to/3JuElLs
    Atmos NinjaV - amzn.to/3Hi0ue1
    Godox SL150 Light - amzn.to/3Es0Qg3
    links.hostowl.net/
    0:00 Intro
    0:23 The Mikrotik CRS504-4XQ-IN 100G Switch/Router
    1:02 Why I bought this
    3:48 40/100G is cool
    4:11 RouterOS interface and setup
    6:04 Let's run a speed test!
    6:29 Why isn't it fast?
    7:09 I have a problem...
    7:44 100G networking!
    9:10 Was it worth it?
    10:29 Okay I want better performance...
    11:17 Let's set up SMB Direct and RDMA
    12:22 Conclusion
  • Science & Technology

Comments • 224

  • @JeffGeerling
    @JeffGeerling 27 days ago +154

    8:31 time to upgrade your workstation, then!

    • @RaidOwl
      @RaidOwl 27 days ago +51

      Don’t do this to me, Jeff…

    • @JeffGeerling
      @JeffGeerling 27 days ago +72

      @@RaidOwl Imagine, if you buy a modern Threadripper, you'd have enough PCIe for 400 Gbps...

    • @RaidOwl
      @RaidOwl 27 days ago +42

      @@JeffGeerling no...plz...

    • @bluesquadron593
      @bluesquadron593 27 days ago +17

      @@RaidOwl You know you must do it. You can always say the views will pay for it 😂

    • @shephusted2714
      @shephusted2714 27 days ago +5

      This is what everybody, even red shirt Jeff, needs. Overall a pretty good effort - PCIe lanes seem to be the limiting factor, and the workstation is the weak link. This video is a good one for business too - since they don't use GPUs so much, they could forego the switch and just do point-to-point from the workstation to dual NAS. If you must have a GPU in the equation, then you need a server-level workstation/editing box with more lanes. That's what I got from this video - generally good content, but Raid Owl does have to follow up on this and optimize fully so he can enjoy more speed, save time, and be more productive. He may have to go to NVMe arrays also.

  • @TheRealClutch1010
    @TheRealClutch1010 27 days ago +51

    I now have a way to describe my hobbies: a crippling inability to be content with what I have.

    • @CraigMcIntosh
      @CraigMcIntosh 25 days ago +3

      @TheRealClutch1010 I know what you mean - just starting to build a home lab and home NAS

  • @Bytional
    @Bytional 27 days ago +58

    40G is a bit awkward right now: most homelab users aren't there yet, but enterprise users are already upgrading from it to 100G.

    • @nadtz
      @nadtz 27 days ago +3

      People have been buying second-hand 40Gb enterprise switches for a while now. I first remember reading about that on the ServeTheHome forums several years ago in regard to Brocade and Arista switches. It's 'obsolete' for the enterprise, which just means homelabbers can snap them up cheap second-hand, but you definitely need to do some research to avoid a switch that sounds like a jet taking off, and there's always power consumption to worry about.

    • @rezenclowd3
      @rezenclowd3 27 days ago +12

      Enterprise is already at 400G...

    • @TrTai
      @TrTai 27 days ago

      They're nice if you can use them, but it gets a bit annoying because, yeah, it's 40G, but it's really 4x10G. As long as your tasks are multi-threaded it can push that, but I honestly struggle to push more than 2 saturated links and a bit of extra traffic on the other two. I'm sure some people can, but it's really not as easy as you'd think. It still works nicely for uplinks, though.

    • @Darkk6969
      @Darkk6969 26 days ago +2

      @@nadtz Careful with used enterprise gear, as some of it needs an active license to make use of the features.

    • @kristopherleslie8343
      @kristopherleslie8343 26 days ago

      @@rezenclowd3 even beyond that in labs

  • @torgrimt
    @torgrimt 27 days ago +34

    Looking forward to your review of the Threadripper in a week ;)

    • @russgifford
      @russgifford 27 days ago +1

      Just came here to say that ...

    • @BillLambert
      @BillLambert 27 days ago +6

      Threadripper owner here: you still need RDMA to reach any meaningful file transfer speeds. Network packets are typically 1492 to 1500 bytes, or 9000-ish if using jumbo frames. That's roughly 8 million packets per second at 100GbE with standard frames (about 1.4 million with jumbo frames), which is a massive amount of overhead for the CPU. RDMA instead takes megabyte or even gigabyte-sized chunks of data and "packetizes" them right on the NIC instead of your CPU, and it does so at line speed with dedicated silicon. Well, assuming your RAM and PCIe bus can keep up.
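
      Those packet rates are easy to sanity-check. A minimal sketch in Python, using the frame sizes from the comment above and ignoring Ethernet preamble/inter-frame gap:

      ```python
      # Packets per second needed to saturate a link at a given frame size.
      LINE_RATE_BPS = 100e9  # 100GbE

      def packets_per_second(frame_bytes: int, line_rate_bps: float = LINE_RATE_BPS) -> float:
          """Packets/s at line rate; preamble and inter-frame gap are ignored."""
          return line_rate_bps / (frame_bytes * 8)

      for mtu in (1500, 9000):
          print(f"{mtu}-byte frames: ~{packets_per_second(mtu) / 1e6:.1f} Mpps")
      # 1500-byte frames: ~8.3 Mpps
      # 9000-byte frames: ~1.4 Mpps
      ```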

  • @juniornunes
    @juniornunes 27 days ago +42

    the "see i fixed the saggy servers!!!" gesture was great lol

    • @dagamore
      @dagamore 27 days ago

      By adding a 2x4 or something like that... like I did at my home lab. It worked.

    • @juniornunes
      @juniornunes 23 days ago +1

      @@dagamore Hey, I've done a shelf, and when the shelf didn't work I used zip ties. It ain't stupid if it works, only if it fails.

  • @TrTai
    @TrTai 27 days ago +7

    As a network engineer... just use the switch. The point-to-point bridge setup is possible, but it brings nothing but pain.

  • @codyspradlin1296
    @codyspradlin1296 27 days ago +18

    For the record, FS doesn't just "seem" like a good store; they are pretty much THE store for fiber-optic networking. They make EVERYTHING for fiber, and they're just about *the* go-to for third-party transceivers for the entire market. Their switches are used in enterprise as well, just at a much lower volume than their optics and accessories.

    • @RaidOwl
      @RaidOwl 27 days ago +6

      Cool! I’m not in touch with the enterprise market at all so this is good to know.

  • @TDCIYB77
    @TDCIYB77 27 days ago +23

    iSCSI should be the best option to improve speed for editing, and it's supported by both TrueNAS and Windows.

  • @CassegrainSweden
    @CassegrainSweden 27 days ago +27

    This must fall under the category: do I need it, hell no, do I want it, hell yeah 😂

    • @RaidOwl
      @RaidOwl 27 days ago +9

      That’s 99% of my life

    • @DeNNiiiable
      @DeNNiiiable 27 days ago +2

      Yup. How I end up with most of my stuff

    • @ajpenninga
      @ajpenninga 27 days ago +1

      This is @RaidOwl's YT channel slogan

  • @jeremybarber2837
    @jeremybarber2837 26 days ago

    Good stuff right here. Greatly appreciate the ride along as you learn.

  • @RandomTechWZ
    @RandomTechWZ 27 days ago +9

    10Gb networking seems to be the sweet spot for home labs, and your wife sounds exactly like mine when I tell her about changes I made to my home lab lol.

  • @iroesstrongarm
    @iroesstrongarm 27 days ago +4

    That "oh god" from your wife definitely had undertones of her full awareness that too much money was spent. 😂

  • @gigabit9823
    @gigabit9823 27 days ago +2

    By far the funniest Homelab channel on YT. Never change my Texan neighbor.

  • @cjchico
    @cjchico 27 days ago +3

    I use this switch for vSAN, breaking out to 2x 25GbE for each Dell server. Works great and I haven't had any issues yet. RouterOS is very odd and definitely takes some getting used to.

  • @phillipmartinez583
    @phillipmartinez583 27 days ago +10

    When's the threadripper video coming out?!

    • @RaidOwl
      @RaidOwl 27 days ago +11

      When AMD sponsors me

  • @ajhieb
    @ajhieb 27 days ago +1

    I had a similar upgrade path, where I started with 3 servers connected via 3 dual-port ConnectX-3 cards, but wanted to connect to some other stuff, so I found a Mellanox SX6036 40GbE switch on eBay for about $150 and I've been thrilled with that thing. It's not "sit next to it" quiet, but it's not even close to being the loudest thing in my rack. If you're not averse to used equipment, it's a great option.

  • @AndrewWells527
    @AndrewWells527 27 days ago +5

    9:30 I'm disappointed that the next cut after "that would be stupid, right?" wasn't with those PC parts on the table. Current tech really isn't designed for single transfer speeds at that level. That switch was probably designed to be at the core layer of a network. Still fun to play with though.

    • @RaidOwl
      @RaidOwl 27 days ago +4

      Yeah if I really wanted to build out an expansive RDMA-enabled network that would require a different switch…or a bunch more NICs lol

  • @kristopherleslie8343
    @kristopherleslie8343 26 days ago

    Good video buddy, cheers on the next beer

  • @PrimalNaCl
    @PrimalNaCl 18 days ago

    I have 2 of those Mikrotik switches, an Arista with 48 100G QSFP28 ports, and several Intel E810-CQDA2 cards.
    General desktop machines will be PCIe-lane constrained; I can eke out about 50G. You need something that has the full 16 lanes of Gen4 PCIe to get the "love". Windows has a larger perf hit than Linux, and WSL2 blows all the goats.
    100Gbit is a beautiful thing!

  • @MM-vl8ic
    @MM-vl8ic 15 days ago

    I've been using ConnectX-3 40/56Gbps cards for 5-plus years with a Mellanox SX6036G ("G" = Ethernet)... I found when testing it's best to use a 100GB+ RAM disk on each machine to eliminate storage device issues... also, being a bit retro: Asus X99 workstation MBs with Xeon E5-1660/1680 v3 overclocked to 4.2GHz with 2666 ECC RAM, 40 PCIe lanes or more with the PLX switch chip; other MBs are Supermicro X10 series... Remember, those enterprise cards need airflow... I use retro desktop cases with a side fan to cool NICs, RAID/HBA cards, and NVMe drives...

  • @NightHawkATL
    @NightHawkATL 27 days ago +1

    I am still trying to realize 10G on my servers and you're over here with 100G lol. Great video and breakdown of what was involved.

  • @Jinix64
    @Jinix64 18 days ago

    If you make the filesystem that hosts the SMB share run in async mode on TrueNAS, it will go way faster. Do note, however, that this is not advised for sensitive data, as you can lose data on power failure. But it will fly, network-benchmark-wise.

  • @kjeldschouten-lebbing6260

    It's a super nice switch, as you can have:
    - Redundant 100G uplinks
    - 4x25G for servers and workstations
    - 4x10G for legacy equipment or 1G switches with 10G uplinks

  • @chaosfenix
    @chaosfenix 27 days ago +2

    I love this video. Yes, I know there isn't a ton of use for it now, but I think there really could be if it were more common. I would love to see some sort of direct video output over the network solution. HDMI 2.1 is only 48Gbps and DisplayPort maxes out at 80Gbps. Currently remote video is handled by encoding the video signal with something like H.264, sending it over the network, and then decoding the video before presenting the picture on your display. I would love for there to be an option where we could skip those encoding/decoding steps, as those add extra latency and can introduce compression artifacts, and instead just send the entire video stream over the network with some overhead left over for USB peripherals. Again, I know this is a stupid upgrade now, but I would love to see this tech better utilized in the future. We have been stuck on 1Gbps for so long that the limitation itself has impacted what we can do with the bandwidth.

    • @inkprod
      @inkprod 18 days ago +1

      There kinda is already! In professional broadcasting they use the SMPTE 2110 standard, which is essentially raw video (SDI) over IP. That stuff doesn't exactly come cheap though and you need things like PTP aware switches to support it.

    • @chaosfenix
      @chaosfenix 17 days ago

      @@inkprod That is awesome. I tried finding something, but all I was finding was boxes that looked like they still encode and decode the signal. This looks like exactly what I want. I wonder how open the standard is - are we talking open like AV1, restricted but not bad like H.264, or god-awful like H.265? You don't actually need that much bandwidth either. I said 80Gbps for DisplayPort, but it really depends on your resolution and refresh rate. A 4k120 direct stream is only 26Gbps and a 4k240 is 55Gbps. Sure, there is going to be some overhead for encapsulation, but you still have enough for a full 40Gbps Thunderbolt 4/USB4 connection right alongside it. If you actually had this networked through your home, you could have a single virtualization server in your network closet and simply have cheap endpoints everywhere else. Even if you were remote it wouldn't be worthless. I am a lucky one and have 10Gbps symmetrical home internet; I could use either a 1080p144 or a 1440p85 video stream with that. Sure, you are depending on your remote site also having that kind of connection, but a 1080p60 stream is only 3.2Gbps. Give it a few years and that may actually be doable.
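
      Those stream bitrates follow from resolution x refresh rate x bits per pixel. A rough check in Python, assuming 8-bit RGB (24 bits per pixel) and ignoring blanking/encapsulation overhead, which is why the figures quoted above run slightly higher:

      ```python
      # Uncompressed video bitrate: width * height * fps * bits-per-pixel.
      def raw_gbps(width: int, height: int, fps: int, bpp: int = 24) -> float:
          return width * height * fps * bpp / 1e9

      for name, (w, h, fps) in {"4k120": (3840, 2160, 120),
                                "4k240": (3840, 2160, 240),
                                "1080p60": (1920, 1080, 60)}.items():
          print(f"{name}: ~{raw_gbps(w, h, fps):.1f} Gbps")
      # 4k120:   ~23.9 Gbps  (quoted as ~26 with overhead)
      # 4k240:   ~47.8 Gbps  (quoted as ~55 with overhead)
      # 1080p60: ~3.0 Gbps   (quoted as ~3.2 with overhead)
      ```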

  • @thesussypupper
    @thesussypupper 18 days ago

    Switches are actually a good idea: they allow more granular control, at layer 2 or layer 3. Each installation has different requirements. Not to mention that making changes without a switch brings longer change windows, rollbacks, and other problems as well; the right switch can also do routing to offload work from the main firewall or router, giving better speed and versatility in the end. Switches allow better scaling and future growth with minimal cost and planning as well. In a mission-critical network like 911, for example, you need something complex enough to be secure and work at scale, but simple enough to maintain and have "eyes" on the complete end-to-end system.

  • @phucnguyen0110
    @phucnguyen0110 27 days ago

    Love the content, Brett! I have to give Colten aka Hardware Haven a shoutout, since he's how I discovered your channel and learned about home labs/home servers in general :D

  • @adrian32772
    @adrian32772 26 days ago

    We know it's a PCIe bandwidth issue, but make sure you are also running iperf3 under WSL on Windows, since the Windows-compiled versions still use a translation layer in how they're compiled. @technotim just went over this in his latest Unifi video too.

  • @byronservies4043
    @byronservies4043 27 days ago

    The only reason I am upgrading to 10gb at home is that I am finally terminating the OM1 fiber I installed during a remodel. That was in the year 2000. Also re-terminating the cat 5e while I'm at it.
    Lots of 100g at work, though.

  • @drubizzy
    @drubizzy 27 days ago

    I have to wonder if some of those GPUs we've been seeing with NVMe slots are a preview of more types of IO bundled into the GPU package to take advantage of the over-allocation of PCIe lanes to the top x16 slot.

  • @tommybronze3451
    @tommybronze3451 20 days ago

    If you want better speed than Samba, then maybe try NFS? Another option is iSCSI, but I'm not convinced it would outperform NFS... but hey, try and see.

  • @Twitch_Blade
    @Twitch_Blade 24 days ago

    Would be a cool setup for a Proxmox cluster

  • @Prime0pt
    @Prime0pt 20 days ago

    What's really interesting is whether this Mikrotik will work under full load. We had a few Mikrotik switches and had to wait two years before Mikrotik made SwOS for them; on RouterOS they just didn't work under load.

  • @mikealthomas1
    @mikealthomas1 27 days ago +3

    I wish Mikrotik made an 8-port 100 Gig switch.
    But overall, it's a really good 4-port switch

    • @Darkk6969
      @Darkk6969 26 days ago +2

      They will eventually. I have several of their switches and they're great!!

  • @SB-qm5wg
    @SB-qm5wg 26 days ago

    Mikrotik has been good to me on value for price

  • @PeaceIndustrialComplex

    I've been wanting to get my hands on one of these switches just because 100Gbit sounds so insane

  • @rdsii64
    @rdsii64 24 days ago

    Yes, it's worth building an all-flash server just to get that good RDMA throughput.

  • @coletraintechgames2932

    You crazy....thanks! 😂

  • @mrkdosmil2879
    @mrkdosmil2879 27 days ago

    Not sure if it's just on my end but it seems like you need some sound dampeners. I can hear a bit of room reverb in your audio.

  • @computersales
    @computersales 25 days ago

    I'm disappointed you didn't cut to the new threadripper build after talking about it. 😂 Also yes all flash NAS is totally worth it. As long as you don't do it the dumb way mine is set up for now. 😅

  • @krisclem8290
    @krisclem8290 27 days ago +2

    "That would be stupid, right" seems like forshadowing to me.

  • @acenio654
    @acenio654 27 days ago

    Recently a listing popped up on my country's main used-market website for a 16-port QSFP28 switch for just under $700. Tempting

  • @computerenthusiast402
    @computerenthusiast402 24 days ago

    What network cards and switch do you recommend for a PC network?

  • @seethruhead7119
    @seethruhead7119 27 days ago

    Been coveting this switch for a while. Haven't pulled the trigger but still thinking about it.

  • @lpgsk
    @lpgsk 15 days ago

    The power supplies seem hot-swappable - are they? Cheap networking gear is notorious for not having hot-swappable power.

  • @galvesribeiro
    @galvesribeiro 24 days ago

    I'm curious how you physically set up RDMA mode. I mean, if you were passing through that switch, it would have failed. AFAIK, RDMA support requires all parts involved in the path - network cards, switches, routers, etc. - to be RDMA-compatible and properly configured. The Mellanox cards and the switch you are using are the same as mine, and Mellanox uses RoCE for RDMA, which requires a bunch of features from the switch that Mikrotik hasn't implemented on any of their products. Would love to see more details on that RDMA setup. Thanks!

    • @RaidOwl
      @RaidOwl 24 days ago

      It was direct from each machine. Didn’t go through the switch

  • @JPEaglesandKatz
    @JPEaglesandKatz 27 days ago +4

    LOL... Wish I had the money to spend on a rack or on the stuff you are getting... To just blow it on stuff I don't need.... Nice video... :) Had a good laugh as well (all positive tho)

  • @msvaughan
    @msvaughan 19 days ago

    100G networking is one of those things that I feel is overkill for a homelab. 40G is ideal, considering that you probably won't have media that goes that fast even over SMB connections.
    Personally, I have a 1G connection that is more than adequate for what I have. I've thought about 10G for the future, but I am not using the bandwidth I have at the moment.

  • @SmalltimR
    @SmalltimR 27 days ago +1

    Can't wait to see your Threadripper update :p

  • @cameronfrye5514
    @cameronfrye5514 27 days ago

    Well, considering you found a path through RDMA to achieve full network speeds in your Windows workstation, when does your new motherboard ship?

  • @christianponopp8756
    @christianponopp8756 17 days ago

    Do you use the highest SMB version in Windows, and did you deactivate SMB 1.0? Windows might fall back to SMB 1.0 if it's still active

  • @perjernstrom5178
    @perjernstrom5178 20 days ago

    How about GPU RDMA? Sort of the next level for "cloud" (read: home lab) gaming.

  • @ws_stelzi79
    @ws_stelzi79 27 days ago

    So next there will be a new Threadripper build to get enough PCI-E lanes to have a "proper" 100G networking to your NAS for editing videos! 😉😏

  • @inmy30s
    @inmy30s 25 days ago

    It keeps escalating... you need more sponsors 😂

  • @GregBrantUK
    @GregBrantUK 27 days ago +5

    "If you're still watching you're Infiniband"... I can't tell if that's a compliment or not

  • @davysprocket
    @davysprocket 27 days ago +1

    Give SMB Multichannel a try; you can enable it in the TrueNAS GUI now

    • @RaidOwl
      @RaidOwl 27 days ago +1

      From my understanding, SMB Multichannel only helps by aggregating multiple NICs

    • @ruojautuma1
      @ruojautuma1 20 days ago

      @@RaidOwl It also enables RSS for the single-NIC use case, which adds multithreading to file transfers and can potentially speed things up slightly. Ultimately the best solution would be SMB Direct.

  • @kwith
    @kwith 27 days ago

    Recently upgraded to 10Gb at home and now my bottleneck is the damn hard drives! hahaha

  • @geesharp6637
    @geesharp6637 27 days ago

    What about NFS instead of SMB? I've seen some videos recently saying NFS performance on Windows is great.

  • @guy_autordie
    @guy_autordie 27 days ago +2

    I'm MAD and angry at the PCIe lane limitation. Before, we could have all the PCI goodness we wanted. Now, if you don't get a server-grade motherboard and processor, forget building your cheap homelab from your older gaming stations, in which you could add an add-in card for that stuff you wanted to do.

    • @Darkk6969
      @Darkk6969 26 days ago +1

      One of the reasons why I am keeping an eye out for a decent deal on a used AMD Epyc server CPU paired with a used Supermicro motherboard and RAM. They're getting to be cheap on eBay.

  • @LiLBitsDK
    @LiLBitsDK 27 days ago

    love videos like this but atm just on 1gig... planning to swap to 2.5gig "soon"TM

    • @RaidOwl
      @RaidOwl 27 days ago

      2.5G will be nice 👍🏼

    • @DeNNiiiable
      @DeNNiiiable 27 days ago

      10Gb second-hand gear is real cheap. SFP+ and Ethernet end up similar in price, but 10GBASE-T Ethernet is more flexible; the switch costs more but the cables cost less

  • @ryanamberger
    @ryanamberger 27 days ago

    Should be able to remove ether1 from your default bridge and still use it to power the switch with PoE.
    Just remove it from the default bridge, get a 48V/48W Ubiquiti PoE adapter, and use that PoE output. Should work. Maybe verify the adapter is (4,5)+/(7,8)- to be safe; some of Ubiquiti's use 4 pairs. It may be fine - Mikrotik's datasheet should show exactly what PoE it wants, that is, 2-pair or 4-pair and the watts required. I'm too lazy to look.
    Just PoE port on the adapter to PoE-in on the CRS. Leave the adapter's LAN port unpopulated.

  • @michaelsims7728
    @michaelsims7728 27 days ago

    LOL, what's networking without overkill? ;) Love your video & thanks for sharing!

  • @marcinkrasowski7841
    @marcinkrasowski7841 27 days ago

    Z790 supports PCIe bifurcation in x8/x8, so you can split your main x16 PCIe 5.0 slot into two: one for the GPU and one for the NIC. You would need a passive adapter to go from a single x16 to dual x8 physical connections, but you wouldn't be bottlenecked to 32Gbps without buying a new system

    • @RaidOwl
      @RaidOwl 27 days ago

      Not all Z790 boards. For example, the one I have…lol

  • @DefconUnicorn
    @DefconUnicorn 17 days ago

    It doesn't have to run RouterOS; you can reboot it into SwOS mode. Also try MTU 9000 and research jumbo frames.

  • @TheComputerNerd248
    @TheComputerNerd248 27 days ago

    Are there 2 of the new UDM Pros with the 2 HDD slots in your rack?

  • @canonwright8397
    @canonwright8397 27 days ago +1

    "Hey babe, one hundred Gib a second." 😎
    Slap!
    "Ok? Maybe I should have said a thousand Gib???" 😏
    Keep up the good work, Raid.

  • @Stony5438
    @Stony5438 27 days ago

    I like your style. Gonna need to subscribe for more content

  • @Georgio_TheChef
    @Georgio_TheChef 25 days ago

    I just set up my 1 billion gig network, it's so dope. I can download the universe in 4 minutes

    • @RaidOwl
      @RaidOwl 25 days ago +1

      That’s a lot of recipes Georgio

  • @oscarcharliezulu
    @oscarcharliezulu 27 days ago

    Addiction to tech is not a joke people. Brett needs our compassion, not our derision.

  • @mosquito7450
    @mosquito7450 20 days ago

    Run the GPU on fewer PCIe lanes. Linus tested this; no performance hit on 4 lanes.

  • @Rostol
    @Rostol 19 days ago +1

    The "problem" is the QSFP modules you are using. There are 2 kinds of 40Gb modules: SR4 and BiDi. The first kind uses the 12-fiber connection that 100Gb fiber uses (MTP, for multi-pair); the BiDi one uses "normal" LC-terminated fiber (2 pairs) and carries 2x 20Gb channels, so the maximum per connection is 20Gb - or rather, it can run at 20Gb for 2 simultaneous connections totalling 40Gb.
    Also, those 100Gb cards need server airflow - install a Noctua on them or they will die. I killed my first 40Gb card like that.

  • @bradleystannard7875
    @bradleystannard7875 27 days ago +1

    Mikrotik just went "how many gigs can we cram in under $700" and then this was born

  •  27 days ago

    Can you not put SwOS on them?

  • @bendertube254
    @bendertube254 27 days ago

    I think, to have proper RDMA through RoCE, you need a RoCE-compatible switch.
    With Mikrotik you should probably go with iWARP, but not every network card supports iWARP.

    • @RaidOwl
      @RaidOwl 27 days ago

      Right. I just did a direct connection for testing.

    • @AfroJewelz
      @AfroJewelz 27 days ago

      At the same time, it seems most RDMA/RoCE NICs are only compatible with proprietary OSes and lack open-source fabric driver support. That proprietary shit is expensive as hell

  • @Th3K1ngK00p4
    @Th3K1ngK00p4 27 days ago

    I get SFPs from FS regularly, definitely a reputable store.

  • @studioxxswe
    @studioxxswe 23 days ago

    Well, I work in the broadcast industry; SMB 3.0 and beyond is for sure not limited to 10G by any means. It might be Samba or whatever TrueNAS is using that causes that, not the protocol

    • @FunkyKong
      @FunkyKong 21 days ago

      Yeah, SMB is definitely capable - you should not be seeing performance drop to nearly 1/10th. There is something else wrong. RDMA is only going to work point-to-point in your setup, since the Mikrotik has no support for it.

  • @CyberSquatch007
    @CyberSquatch007 18 days ago

    I'm surprised how many people use their networks for just one purpose... I would absolutely use 100Gbps speeds with all the different things running in my network stack.

  • @alexwhitehouse1958
    @alexwhitehouse1958 27 days ago

    With your CrystalDiskMark benchmark showing exactly 10Gb read and write, that seems a little too coincidental to me. You'd expect reads to be faster than writes. Are you sure there isn't a bottleneck somewhere, like a port still in a 10Gb mode?

    • @RaidOwl
      @RaidOwl 27 days ago +1

      Nah I only had the single interface enabled which is the same one I did the iperf tests with. Could be that my pool isn’t properly optimized

  • @GotWire
    @GotWire 27 days ago

    Man, and I thought 10G for my home lab was overkill lol. Wish Ubiquiti made an affordable 100G switch; I use their 10G aggregation switch right now. Love the video by the way! Your wife is like, IDC lol

    • @RaidOwl
      @RaidOwl 27 days ago +1

      Lol yeah she was not impressed

  • @DeNNiiiable
    @DeNNiiiable 27 days ago

    Are NFS shares faster? SMB is slow, but I think your bottleneck might be the array - not sure. Maybe set up a single NVMe disk and repeat the test. Regarding the lanes, your graphics card is unlikely to be much slower on 8 lanes. Is there enough RAM available for the ARC, and is the NAS CPU the bottleneck?

  • @DarkNightSonata
    @DarkNightSonata 27 days ago

    Awesome video. I really like how chasing performance makes us learn new things. One question: while the switch is 100Gb, how does the router impact the performance of the network? Aren't packets sent to the router and back, or are they communicated directly between hosts by the switch itself? If directly, then how are packets filtered and firewall rules applied? Thanks.

    • @RaidOwl
      @RaidOwl 27 days ago +1

      As long as they're on the same VLAN, the switch forwards by MAC address, so traffic won't need to go back to the router.
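
      That reply is describing plain layer-2 forwarding (MAC learning). A toy sketch of the idea in Python, with hypothetical names, assuming a simple learn-and-flood model:

      ```python
      # Toy layer-2 switch: learn source MACs per port, forward within the VLAN.
      class ToySwitch:
          def __init__(self):
              self.mac_table = {}  # (vlan, mac) -> port

          def frame_in(self, port: int, vlan: int, src: str, dst: str) -> str:
              self.mac_table[(vlan, src)] = port     # learn where src lives
              out = self.mac_table.get((vlan, dst))  # look up destination
              # Known destination: forward out one port. Unknown: flood the VLAN.
              return f"forward to port {out}" if out is not None else "flood VLAN"

      sw = ToySwitch()
      print(sw.frame_in(1, vlan=10, src="aa:aa", dst="bb:bb"))  # flood VLAN
      print(sw.frame_in(2, vlan=10, src="bb:bb", dst="aa:aa"))  # forward to port 1
      ```

      Traffic only needs to hit the router when the destination is on a different VLAN or subnet.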

  • @theiaminu5375
    @theiaminu5375 19 days ago

    Cool - when MS borks your system with an automatic update, it will happen that much faster!!!

  • @BrunodeSouzaLino
    @BrunodeSouzaLino 27 days ago

    For those interested: to convert from bps to B/s, you divide the value by 8 (100 Gbps gives you a theoretical max speed of 12.5 GB/s). To convert from B/s to bps, you multiply by 8.
    8:16 That would be 8GB/s, which would be ideal for running 4 10GbE connections, but you'd need at least x8 for 100GbE. Though you could also go for a PCIe 2.0 x16 slot, which has the same speed.

    • @RaidOwl
      @RaidOwl 27 days ago

      At 8:16 I mentioned it's x4 at Gen3 speeds since the card is Gen3. But yes, x8 of PCIe Gen4 would suffice.

    • @BrunodeSouzaLino
      @BrunodeSouzaLino 27 days ago

      @@RaidOwl ~4 GB/s is the max transfer speed for PCIe 3.0 x4; 8 GB/s would be x8.
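
      The lane math being debated here is easy to check. A minimal sketch in Python, using the standard PCIe 3.0 rate of 8 GT/s per lane with 128b/130b encoding:

      ```python
      # bps <-> B/s, and usable PCIe 3.0 bandwidth per lane count.
      def gbps_to_gbytes(gbps: float) -> float:
          return gbps / 8  # bits -> bytes

      print(gbps_to_gbytes(100))  # 12.5 -> 100GbE peaks at 12.5 GB/s

      # PCIe 3.0: 8 GT/s per lane * 128/130 encoding -> ~0.985 GB/s per lane.
      lane_gbytes = 8 * (128 / 130) / 8
      for lanes in (4, 8, 16):
          total = lane_gbytes * lanes
          print(f"PCIe 3.0 x{lanes}: ~{total:.1f} GB/s (~{total * 8:.0f} Gbit/s)")
      # x4:  ~3.9 GB/s (~32 Gbit/s)  <- the 32Gbps bottleneck from the video
      # x8:  ~7.9 GB/s (~63 Gbit/s)
      # x16: ~15.8 GB/s (~126 Gbit/s) <- enough headroom for 100GbE
      ```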

  • @AngryGibberish
    @AngryGibberish 26 days ago

    Ok, SMB Direct is cool. My thoughts went from a) it would be cool if Unraid supported this, to b) there's probably no way for Unraid to support this lol

  • @Girgoo
    @Girgoo 27 days ago

    Would NFS help with the performance issue?

  • @davidsonboy
    @davidsonboy 24 days ago

    Top!

  • @rrsf4i
    @rrsf4i 27 days ago +2

    Use PCIe bifurcation x8/x8, since your GPU doesn't use those 16 lanes (trust me, a 1060 had 3 fewer frames on 4 lanes vs 16 on PCIe Gen 3; google your card and see how many lanes it actually needs). I do the same - invent reasons or hide behind semi-legitimate ones - but the truth is I WANT A FAST SWITCH vs I need it

    • @RaidOwl
      @RaidOwl 27 days ago +1

      Yeah looked into this but mb doesn’t support it

    • @rrsf4i
      @rrsf4i 27 days ago +2

      Pls don't take it the wrong way - I have a SOHO/home lab with 3 Mikrotiks, fibre runs between rooms or DACs, and a 4th one ordered. 😂

    • @DeNNiiiable
      @DeNNiiiable 27 days ago

      @@RaidOwl Can't you plug the GPU into another slot? Even x4 Gen4 is probably OK, or x8 Gen3

    • @codyspradlin1296
      @codyspradlin1296 27 days ago

      @@RaidOwl I'd be kinda surprised if it really doesn't. Usually it's automatic, without an option, if there are multiple greater-than-x4 slots on the board. If you plug ANYTHING into the second, it downgrades the first to x8. What's the model?

  • @JTM_djg
    @JTM_djg 26 days ago

    Bar Harbor?! Visiting, or do you live around there? Central Maine here!

    • @RaidOwl
      @RaidOwl 26 days ago +1

      My mommy got it for me when she visited

  • @jannikmeissner
    @jannikmeissner 27 days ago

    I went way crazier when I got two Mellanox / Nvidia SN2410 switches… BUT I use GPU Direct at least.

    • @RaidOwl
      @RaidOwl 27 days ago

      Boy you WILD

    • @jannikmeissner
      @jannikmeissner 27 days ago

      @@RaidOwl Can't wait until the new 800G stuff gets tossed out at work… 48 months, I am already counting!

  • @moe85moe85
    @moe85moe85 26 days ago

    Time for a new workstation and an all-flash Windows server, then!

  • @guyfeldman4697
    @guyfeldman4697 27 days ago

    It's not time to get one new system; it's time to get 2 more HL15s to build a Ceph cluster and test against an NVMe RBD pool

  • @fwiler
    @fwiler 27 days ago

    Is it worth it? Do you really need to ask the question? It's always worth it. I would be curious about the power consumption difference between 10, 25, 40, and 100G, including the NIC and the switch.

  • @axtran
    @axtran 27 days ago

    I have an array of 10x NVMe disks in a RAIDZ2 for times like these. lol

  • @tyler5888
    @tyler5888 27 days ago

    Nothing is worth the hassle of running Windows Server in a home lab. You'll save more time with the slower network speed vs troubleshooting and Windows updates with Server 2022.

  • @LegionInfanterie
    @LegionInfanterie 27 days ago

    A homelab is never finished. There is always something we can upgrade :-)

  • @pewpewpew8390
    @pewpewpew8390 27 days ago +2

    It's SFP28, not SFP+ ;)

  • @greenprotag
    @greenprotag 27 days ago

    Can you get full RDMA access with a ZFS setup? I guess if so, you are limited by client speed? Could you use a RAM disk on the client side, or read from an NVMe RAID-0 array? ...Hmm, RDMA wants Windows on both ends? OK... hmm. So the transfer is direct from memory to memory? So as long as your memory pool is large enough?

  • @CeleronS1
    @CeleronS1 18 days ago

    Although it's very cool, I would say that it is extremely wasteful. I would rather install large-capacity NVMe drives in all clients and then allow some kind of capped-speed (like half a gig) sync between all of them. I think it is just a better way to do it: you get instant access with no network bottleneck on your local drive, while any updates you make get synced slowly over time. Centralised stuff is cool, but you will always get congestion. I think if I spent like $1.2k on SSDs between all systems, I would become victorious over user experience.

  • @YHK_YT
    @YHK_YT 27 days ago

    ONE MORE SWITCH!! NEXT ONE IS THE BEST ONE!!

  • @icebalm
    @icebalm 26 days ago

    The biggest issue with 10+GbE is that without RDMA the CPU has to process packets, and unfortunately these Mikrotik switches don't support DCB or RoCE at all... With that, you would get vastly superior performance from SMB Direct.

    • @RaidOwl
      @RaidOwl 26 days ago

      Correct, but that FS switch does 😉

  • @ILike2Reed2
    @ILike2Reed2 27 days ago

    I think it's absolutely worth it for YOU to make an all-flash server to saturate that juicy throughput, for no other reason than that we can watch and live vicariously through you