40 Gig LAN - Why did I even do this...

  • Uploaded 22. 05. 2024
  • Definitely glad I did this...but I prob won't be telling my friends to do the same lol
    Mellanox Connectx-3
    ASUS Hyper M.2 4x NVMe Card - amzn.to/3Ry7Akj
    QSFP+ Fiber Cable - amzn.to/3O442n8
    -------------------------------------------------------------------------------------------
    🛒 Amazon Shop - www.amazon.com/shop/raidowl
    👕 Merch - / raidowl
    -------------------------------------------------------------------------------------------
    🔥 Check out this week's BEST DEALS in PC Gaming from Best Buy: shop-links.co/cgDzeydlH34
    💰 Premium storage solutions from Samsung: shop-links.co/cgDzWiEKhB8
    ⚡ Keep your devices powered up with charging solutions from Anker: shop-links.co/cgDzZ755mwl
    -------------------------------------------------------------------------------------------
    Join the Discord: / discord
    Become a Channel Member!
    / @raidowl
    Support the channel on:
    Patreon - / raidowl
    Discord - bit.ly/3J53xYs
    Paypal - bit.ly/3Fcrs5V
    Affiliate Links:
    Ryzen 9 5950x - amzn.to/3z29yko
    Samsung 980 2TB - amzn.to/3myEa85
    Logitech G513 - amzn.to/3sPS6yv
    Logitech G703 - shop-links.co/cgVV8GQizYq
    WD Ultrastar 12TB - amzn.to/3EvOPXc
    My Studio Equipment:
    Sony FX3 - shop-links.co/cgVV8HHF3mX / amzn.to/3qq4Jxl
    Sony 24mm 1.4 GM -
    Tascam DR-40x Audio Recorder - shop-links.co/cgVV8G3Xt0e
    Rode NTG4+ Mic - amzn.to/3JuElLs
    Atmos NinjaV - amzn.to/3Hi0ue1
    Godox SL150 Light - amzn.to/3Es0Qg3
    links.hostowl.net/
    0:00 Intro
    0:43 Why I'm Upgrading
    1:11 Parts
    2:03 Hardware installation
    2:47 10 hours later...
    4:58 It works! Speed tests
    5:35 Another issue...
    6:08 Final speed test
    6:28 Final thoughts
  • Science & Technology

Comments • 162

  • @thaimichaelkk
    @thaimichaelkk Před rokem +26

    You may want to check your cards for heat. Many of these cards expect a rack-mount case with high airflow; I believe your card requires airflow of 200 LFM at 55°C, which a desktop case does not provide. You can strap a fan on the heatsink to provide the necessary cooling (Noctua has 40mm and 60mm fans which should do the trick nicely; I'm currently waiting for 2 to come in). I have a Mellanox 100GB NIC and a couple of Chelsio 40GB NICs (I would go with the 100GB in the future even though my current switch only supports 40GB), though they definitely need additional airflow; after 5 minutes you could cook a steak on them. The Mikrotik CRS326-24S+2Q+RM is a pretty nice switch to pair with them for connectivity.

  • @jamesbyronparker
    @jamesbyronparker Před rokem +44

    You probably want to run something in the private IP ranges on the NICs. The odds of it causing an issue long term are low with just 2 IPs, but it's not great practice.
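A quick way to sanity-check this, assuming Python 3 is handy, is the standard-library `ipaddress` module; the addresses below are only examples (the 40.0.0.x one mirrors the video, 10.40.0.x is a hypothetical RFC1918 alternative):

```python
# Minimal sketch: check whether candidate point-to-point addresses fall inside
# the RFC1918 private ranges (10/8, 172.16/12, 192.168/16).
import ipaddress

candidates = ["40.0.0.1", "10.40.0.1", "172.16.40.1", "192.168.40.1"]  # example values
for addr in candidates:
    ip = ipaddress.ip_address(addr)
    print(f"{addr}: {'private' if ip.is_private else 'PUBLIC - better avoided on a LAN'}")
```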

  • @dragonheadthing
    @dragonheadthing Před rokem +5

    4:03 Thank you for showing the command you used, and where you have it saved. Many times in a video where someone talks about a command, that's all they do: "I set up a config file to change that," and then that's all they say about it. They never show what the file looks like, leaving a Linux noob like me not learning anything at all.

  • @7073shea
    @7073shea Před rokem +4

    Thanks owl! The “transceivers gonna make me act up” bit had me dying

  • @jeffnew1213
    @jeffnew1213 Před rokem +5

    I've been running 10Gbit for everything that I could pop a 10G card into for a good number of years. The better part of a decade, actually. I started with a Netgear 8-port 10G switch. A few years ago I replaced that with an off-lease Arista Networks 48-port 10G switch (loud, hot, and power hungry). Last year, I replaced that with the new Ubiquiti 10G aggregate switch. That device has four 25G ports.
    I have two 12th generation Dell PowerEdge servers running ESXi and two big Synology NASes, both of which are configured to, among lots of other things, house VMs. There are about 120 VMs on the newer of the two NASes, with replicas and some related stuff on the older box. Both of the PowerEdge servers and both NASes have Mellanox 25G cards in them with OM3 fibre in-between. ESXi and Synology's DiskStation Manager both recognize the Mellanox cards out of the box. So, now, I have a mix of 1G, 10G and 25G running in the old home lab. Performance is fine and things generally run coolly. Disk latency for VMs is very low.

  • @jeffer8762
    @jeffer8762 Před rokem +24

    I did a similar thing with 10Gbps home networking, but little did I know the speed was capped by my SSD.

    • @ryanbell85
      @ryanbell85 Před rokem +3

      NVME drives are definitely the way to go

    • @ejbully
      @ejbully Před rokem +5

      Spinning rust on zfs... don't listen to fools who blindly say nvme....
      Storage layout is important... there will always be a bottleneck - it's almost certain... defeat it

    • @ryanbell85
      @ryanbell85 Před rokem +1

      @@ejbully Why can't NVME on ZFS be an option?

    • @ejbully
      @ejbully Před rokem

      @@ryanbell85 it is an option. I think it's better for caching than for data I/O, as you will not be able to achieve or maintain those advertised throughput speeds.
      Value-wise, IDENTICAL spinning rust (7200rpm and better) on a mirrored, not-too-wide vdev - preferably SAS drives with the correct JBOD controller - will net you great speeds and your wallet will thank you.
      Of course, standard disclaimer: results will vary depending on your I/O workload.

    • @ryanbell85
      @ryanbell85 Před rokem +1

      @@ejbully "I think it's better for caching than for data I/O, as you will not be able to achieve or maintain those advertised throughput speeds." Are you familiar with the iSCSI and NFS protocols? Do you have any data to back up your claim that ZFS NVMe is only suitable for caching? JBODs full of SAS drives definitely have their place, and you would be greatly mistaken if you think that NVMe drives are only suitable for caching.

  • @ntgm20
    @ntgm20 Před rokem

    Crontab to make the setting persistent - That is also how I keep my MSI B560M PRO set so it can wake on lan. I did a short video on it too.
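For anyone wondering what that looks like in practice, here is a minimal sketch (not the exact script from the video): a tiny Python helper that re-applies a non-persistent NIC setting, intended to be called from a crontab `@reboot` entry. The interface name and MTU value are placeholder assumptions.

```python
#!/usr/bin/env python3
# Hypothetical sketch of a "re-apply NIC settings at boot" helper.
# Schedule it with a crontab entry such as:
#   @reboot /usr/bin/python3 /opt/scripts/nic_tune.py
import subprocess

INTERFACE = "enp1s0"  # assumption: your Mellanox port name (check `ip link`)
MTU = "9000"          # assumption: jumbo frames, as other commenters suggest

subprocess.run(["ip", "link", "set", INTERFACE, "mtu", MTU], check=True)
subprocess.run(["ip", "link", "set", INTERFACE, "up"], check=True)
```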

  • @esra_erimez
    @esra_erimez Před 4 měsíci +1

    I can't wait to try this myself. I ordered some ConnectX-3 Pro EN cards

  • @JasonPVermeulen
    @JasonPVermeulen Před rokem +13

    In this use case, working off your server, and for the price, this is definitely a worthy upgrade.
    On the topic of things that people in general think are overkill: maybe a home-built router next that is low on energy consumption but still able to route a WireGuard VPN at a minimum of 1Gig? As more and more people and places get a fiber connection.
    (I know you have an awesome Ubiquiti setup, but it could be a fun project with some old server gear)

    • @RaidOwl
      @RaidOwl  Před rokem +8

      Heck yeah man I’m always looking for projects. Many of the things I do aren’t necessarily the ‘best’ or even useful for most people but at least it’s fun lol.

    • @JasonPVermeulen
      @JasonPVermeulen Před rokem +2

      @@RaidOwl Well, at least your videos are really inspiring, and the way you explain the matter with your humor makes it easily digestible and "honest", instead of some youtubers that put a "fake-sauce" layer on their videos. Keep up the good work!!

  • @tinkersmusings
    @tinkersmusings Před rokem +2

    I run a Brocade ICX6610 as my main rack switch. I love that it supports 1Gb, 10Gb, and 40Gb all in one. I also run a Mellanox SX6036 for my 40Gb switch. It supports both Ethernet (with a license) and Infiniband through VPI mode. You can assign the ports that are Ethernet and Infiniband. Both are killer switches and I connect the SX6036 back to the Brocade via two of the 40GbE connections. Most of my machines in the rack now either support 40Gb Ethernet or 40/56Gb Infiniband. I have yet to run 40Gb lines throughout the house though. However, with 36 ports available, the sky is the limit!

    • @DavidVincentSSM
      @DavidVincentSSM Před 8 měsíci +1

      Do you know what the cost of the license for Ethernet would be?

    • @tinkersmusings
      @tinkersmusings Před 8 měsíci

      @@DavidVincentSSM I'm not sure NVIDIA still sells the licenses to this switch, but there's good info on ServeTheHome on the SX6036.

  • @GanetUK
    @GanetUK Před rokem +1

    Which edition of Windows are you using?
    As I understand it, RDMA helps speeds when getting to 10G+ on Windows and is only available in Enterprise edition or Pro for Workstations (that's why I upgraded to Enterprise).

  • @markpoint1351
    @markpoint1351 Před rokem +1

    Don't know if I'd do this lol... but thanks to your videos I think about networking more and more!!! Keep the videos coming!!!

  • @IonSen06
    @IonSen06 Před rokem +1

    Hey, quick question: Do you have to order the Mellanox QSFP+ cable or will the Cisco QSFP+ cable work?

  • @fanshaw
    @fanshaw Před 2 měsíci

    I've got the Chelsio 40G cards with truenas. 25G SFP+ is probably a better option for home than QSFP which is x4 cabling. It all runs very hot, but if you have more than one nvme ssd, 10G won't cut it. Either get a proper server chassis or at least use something in a standard case that you can pack with fans - those SSD's don't run cool either. Don't forget you'll need to exhaust the whole thing somewhere - putting it into a cupboard will probably end badly.
    Also bear in mind that the transceivers are usually tailored to the kit into which they plug. You may not get a cheap cable off ebay if you don't have a common setup.

  • @Darkk6969
    @Darkk6969 Před rokem +1

    I need to point out that iperf3 is single-threaded while iperf is multi-threaded, which makes a difference in throughput. It's not by a wide margin, but I figured it's the best way to saturate that link.
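One hedged workaround if you want to stick with iperf3: since a single iperf3 process historically runs all of its streams on one thread, you can launch several client processes against separate server ports and add up the results. The target address and ports below are placeholders.

```python
# Sketch: run several iperf3 clients in parallel so one CPU core isn't the
# bottleneck. Assumes servers are already listening (iperf3 -s -p <port>)
# on each of the ports below.
import subprocess

TARGET = "40.0.0.2"              # placeholder: the other end of the 40G link
PORTS = [5201, 5202, 5203, 5204]

procs = [
    subprocess.Popen(["iperf3", "-c", TARGET, "-p", str(port), "-t", "30"])
    for port in PORTS
]
for proc in procs:
    proc.wait()  # per-port results print to stdout; sum them for total throughput
```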

  • @paulotabim1756
    @paulotabim1756 Před rokem +2

    I use ConnectX-3 Pro cards between Linux machines and a Mikrotik CRS326-24S+4Q+RM.
    I could achieve transfer rates of 33-37Gb/s between the directly connected stations.
    Did you verify the specs of the PCIe slots used? To achieve 40Gb/s they must be PCIe 3.0 x8; 2.0 x8 will limit you to about 26Gb/s.
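For reference, a back-of-the-envelope check of those numbers (a sketch that ignores TLP/protocol overhead, which is why real-world figures land a bit lower):

```python
# Raw link bandwidth per PCIe generation: transfer rate per lane x encoding
# efficiency x lane count. PCIe 2.0 uses 8b/10b encoding, PCIe 3.0 uses 128b/130b.
slots = {
    "PCIe 2.0 x8": (5.0, 8 / 10, 8),     # GT/s per lane, encoding efficiency, lanes
    "PCIe 3.0 x8": (8.0, 128 / 130, 8),
}
for name, (rate, efficiency, lanes) in slots.items():
    print(f"{name}: ~{rate * efficiency * lanes:.0f} Gb/s before protocol overhead")
# -> ~32 Gb/s for 2.0 x8 (around 26 Gb/s usable) and ~63 Gb/s for 3.0 x8
```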

  • @prodeous
    @prodeous Před rokem

    I'm slowly working on getting my 10Gb setup.. but 40 being 5x faster.. hmmm... lol. Jokes aside, thanks for sharing; seems like I'll stick to 10Gb for now. Though I have cards with dual 10Gb ports, so maybe I should try for a 20Gb setup.
    I know Unix/Linux/etc. have such capability, but Windows 10 Pro doesn't.. any recommendations on how to link the two ports together?

  • @asmi06
    @asmi06 Před rokem +4

    You've gotta try getting your hands on Mikrotik's flagship 100G gear for even more insanity 😜 I'm hopelessly behind as I just recently upgraded to Netgate 6100 and 10G core switch with leafs still at 2.5G (can't afford to do a complete upgrade in one go, so have to do it in stages). I plan to buy a few mini pcs with 5900hx cpu and 64GB of ram to build a microk8s kubernetes cluster - probably over proxmox cluster to make administration easier.

    • @geesharp6637
      @geesharp6637 Před 2 měsíci

      100G, nah. Skip that and just add a 0. Go for 400G. 😜

  • @seanthenetworkguy8024

    what server rack case was that? I am in the market but I keep finding either way too expensive cases or ones that don't meet my needs.

  • @dmmikerpg
    @dmmikerpg Před rokem +1

    I have it in my setup, like you it is nothing crazy, just host to host; namely from my TrueNAS system to the backup NAS.

  • @jamescox5638
    @jamescox5638 Před 9 měsíci

    I have a Windows server and a Juniper EX4300 switch that has QSFP+ ports on the back. I have only seen them used in a stack configuration with another switch. Would I be able to buy one of these cards and use the QSFP+ ports on the switch as a network interface to get a 40G connection to my server? I ask because I am not sure if the QSFP+ ports on my switch can be used as normal network ports like the others.

  • @mpsii
    @mpsii Před rokem

    Would like to know if you could run the cards in InfiniBand mode and see what is involved with that. Totally nerded out on this video.

  • @Alan.livingston
    @Alan.livingston Před rokem

    Doing shit just because you can is a perfectly valid use case. Your home lab is exactly for this kind of thought project.

  • @Nathan_Mash
    @Nathan_Mash Před rokem

    I hope you and your servers stay nice and cool during this heatwave.

  • @TorBruheim
    @TorBruheim Před rokem

    My recommendation, described as 4 important things to prepare before you use 40GbE: 1) Enough PCIe lanes. 2) Use a motherboard with a typical server chipset. 3) Don't use an Apple Mac system. 4) In Windows, set high priority to background services instead of applications. Good luck!

  • @charlesshoults5926
    @charlesshoults5926 Před rokem

    I'm a little late to the game on this thread, but I've done something similar. In my home office, I have two Unraid servers and two Windows 11 PCs. Each of these endpoints has a Mellanox ConnectX-3 card installed, connected to a CentOS system acting as a router. While it works, data transfer rates are nowhere near the rated speed of the cards and DAC cables I'm using. Transferring from and to NVMe drives, I get a transfer rate of about 5Gbps. A synthetic iperf3 test, Linux to Linux, shows about 25Gbps of bandwidth.

  • @jackofalltrades4627
    @jackofalltrades4627 Před rokem +1

    Thank for making this video. Did your feet itch after being in that insulation?

    • @RaidOwl
      @RaidOwl  Před rokem +1

      Nah but I got some on my arms and that sucked

  • @bradbeckett
    @bradbeckett Před 18 dny

    40 gigE + Thunderbolt FTW!

  • @UncleBoobs
    @UncleBoobs Před rokem +1

    I'm doing this with the card in InfiniBand mode, using the IP over InfiniBand protocol (IPoIB) and running OpenSM as the subnet manager; I'm getting the full 40G speeds this way.

  • @MM-vl8ic
    @MM-vl8ic Před rokem

    I've been using these for a few years... Look into running both ports on the cards, auto-sharing RDMA/SMB... VPI should let you set the cards for 56Gb/s Ethernet... As a test I set up two 100GB RAM disks and the speeds were really entertaining... Benchmarking a gen3 NVMe was only a tick slower than the network...

  • @ted-b
    @ted-b Před rokem +1

    Oh it's all fun and games until one of those fast packets has someone's eye out!

  • @marcoseiller8222
    @marcoseiller8222 Před rokem +1

    you have two NICs per card, right? have you tried running them in parallel as a bonded NIC? In theory that should double the speed and would "only" require a second cable run. I think proxmox has an option for that in the UI, no idea how to do that on windows...

    • @RaidOwl
      @RaidOwl  Před rokem

      Def worth looking into but that’s gonna be for future me lol

    • @SurfSailKayak
      @SurfSailKayak Před rokem

      @@RaidOwl Run that second one over to my house :p

    • @logan_kes
      @logan_kes Před rokem

      I don’t think windows consumer versions can do LAG / LACP and it usually requires a switch for true link aggregation. Also not great for single tasks, better for say two 40 gig streams rather than a single 80 gig which would still cap at 40 gig
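On the Linux/Proxmox side the bond itself is straightforward (Windows consumer editions are the sticking point, as noted above). A minimal iproute2 sketch, assuming placeholder interface names and an LACP-capable bond or switch on the other end:

```python
# Sketch: create an 802.3ad (LACP) bond from the card's two ports.
# Interface names are assumptions; a matching bond or LACP-capable switch is
# required at the other end, and a single stream still tops out at one link's speed.
import subprocess

SLAVES = ["enp1s0", "enp1s0d1"]  # placeholder names for the two ConnectX-3 ports

subprocess.run(["ip", "link", "add", "bond0", "type", "bond", "mode", "802.3ad"], check=True)
for nic in SLAVES:
    subprocess.run(["ip", "link", "set", nic, "down"], check=True)
    subprocess.run(["ip", "link", "set", nic, "master", "bond0"], check=True)
subprocess.run(["ip", "link", "set", "bond0", "up"], check=True)
```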

  • @godelrt
    @godelrt Před rokem +1

    Next video: “I did it again! 100gig baby!” Would I recommend it? NO! Lol nice vid!

  • @sashalexander7750
    @sashalexander7750 Před rokem

    Why did you go with an AOC type of cable? 10 meters is not long enough to warrant an active optical cable.

  • @jonathanhellewell2756
    @jonathanhellewell2756 Před rokem +1

    ... crawling around your attic in Houston during summer... that's dedication...

    • @RaidOwl
      @RaidOwl  Před rokem

      I was up there for like 10 min and I was dripping by the end...crazy

  • @bopal93
    @bopal93 Před 9 měsíci

    Love your humour

  • @marcin_karwinski
    @marcin_karwinski Před rokem +5

    Frankly, since you're not doing any switching in between the devices, instead opting for direct-attached fibre, I'd say go with IB instead... IB typically nets better latencies at those higher speeds, and for direct access, as in working off the network disk in production, this might improve the feeling of speed in typical use. Of course, this might not change a lot if you're only uploading/downloading stuff to/from the server before working locally and uploading results back onto the storage server, as then burst throughput is what you need and IB might not be able to accommodate any increase due to medium/tech max speeds. On the other hand, SMB/CIFS can also be a somewhat limiting factor in your setup, as on some hardware (as in CPU-bottlenecked) switching to iSCSI could benefit you more due to fewer abstraction layers between the client and the disks in the storage machine...

  • @ryanbell85
    @ryanbell85 Před rokem +8

    Crazy.... literally did this a month ago using the same 40GB cards linking a TrueNAS (also version 11) and 2 Proxmox servers. It was such a long process to get mlxfwmanager working correctly and setup Proxmox with static routes between each of the servers. I didn't have to passthrough the Mellanox card in TrueNAS but I get 32.5Gb/s in ETH mode. Let me know if I can help.

    • @ryanbell85
      @ryanbell85 Před rokem +1

      Essentially Proxmox itself runs off a single SATA SSD while all the VMs run through the 40Gbs network on NVME drives on TrueNAS via NFS.

    • @RaidOwl
      @RaidOwl  Před rokem

      Impressive, was that 32.5Gb/s in iperf or with file transfers?

    • @ryanbell85
      @ryanbell85 Před rokem +1

      @@RaidOwl It was while using iperf. I haven't tried a file transfer but KDiskMark gets 3.6GB/s read on a VM in this network over the wire.

    • @trakkasure
      @trakkasure Před rokem

      @Ryan Bell : I've had this same configuration for the past 4 months. I put 40g cards in 3 servers. Downloads are not needed to configure/flash these cards. Use mstflint to flash latest firmware and mstconfig to switch modes. There are more tools in the "mst" line to do much more. I also get around 30Gb, but only directly from the host. I only get 22Gb from VM. I believe if I raise the MTU to 9000 I could get a lot more, but I'm having issues getting my switch (Cisco 3016) to pass jumbo packets.
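For anyone following along, a hedged sketch of the mstflint-based flow described here, wrapped in Python; the PCI address is a placeholder, and the LINK_TYPE values should be double-checked against `mstconfig -d <device> query` output for your particular card:

```python
# Sketch: query a ConnectX-3 and switch both ports to Ethernet mode using the
# open-source mstflint tools. Values are assumptions; verify with `query` first.
import subprocess

DEVICE = "0000:01:00.0"  # placeholder PCI address (find yours with lspci)

subprocess.run(["mstconfig", "-d", DEVICE, "query"], check=True)
subprocess.run(
    ["mstconfig", "-d", DEVICE, "set", "LINK_TYPE_P1=2", "LINK_TYPE_P2=2"],
    check=True,
)  # 2 = ETH, 1 = IB; the tool asks for confirmation and a reboot applies the change
```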

    • @ryanbell85
      @ryanbell85 Před rokem

      @@trakkasure I wish I could justify getting a 40GbE switch like that! A 4-port 40GbE switch would be plenty enough for me if I could find one. I'll have to settle for my mesh network.... at least for now :) MTU at 9000 helped a bit for me.
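And for the switchless mesh with static routes that this thread describes, the per-host routing boils down to pointing a host route at each peer out of the port it's cabled to. A sketch, with all addresses and interface names as placeholder assumptions:

```python
# Sketch: point-to-point "mesh without a switch" routing on one host.
# Each peer gets a /32 route out of the physical port it is directly cabled to.
import subprocess

PEERS = {
    "40.0.0.2/32": "enp1s0",    # assumption: port 1 goes to the TrueNAS box
    "40.0.0.3/32": "enp1s0d1",  # assumption: port 2 goes to the other Proxmox node
}
for prefix, interface in PEERS.items():
    subprocess.run(["ip", "route", "replace", prefix, "dev", interface], check=True)
```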

  • @MK-xc9to
    @MK-xc9to Před 4 měsíci

    Budget option: HP ConnectX-3 Pro cards (HP 764285-B21 10/40Gb 2P 544+FLR QSFP InfiniBand IB FDR). I paid 27 Euro for the first 2, and now that they're down to 18 I bought another 2 as spare parts. They need an adapter from LOM to PCIe, which is why they are cheap; the adapter costs 8-10 Euro (PCIe x8 riser card for HP FlexibleLOM 2-port GbE 331FLR 366FLR 544FLR 561FLR), and you get the Pro version of the Mellanox card, i.e. RoCE v2. Besides, TrueNAS Scale supports InfiniBand now, and Windows 11 Pro as well, so you can use it; it's not that much faster, but the latency is way lower. It's about 1-2 GB/s with the 4x 4TB NVMe Z1 array, ~500MB/s with HDDs, and way less with smaller files (as usual).

  • @RemyDMarquis
    @RemyDMarquis Před rokem +1

    I really was hoping that you had found a solution for my problems. *sigh*
    That 10Gb cap is so damn annoying. I have been trying to find a way to get it to work, but it just doesn't work with virtio for me. If you check the connection speed in the terminal (sorry, I forgot which command) it will show that the connection is at 40Gb. But no matter what I do, I can't get virtio to run at that speed.
    One tip: if you want the DHCP server to give it an IP, do what I do. Bridge a regular 1Gb LAN port with a port on the card, use that bridge in the VM, and connect your workstation to the same port. It will hand out IPs for both machines from the DHCP server and you don't have to worry about the IP hassle. Of course you will be limited to virtio's 10Gb, but it's the peace of mind I'm taking until I can find a solution for that 40Gb virtio nonsense.
    And please hear my advice and don't even bother trying InfiniBand. Yes, it is supposed to be a better implementation and runs at 56Gb, but don't believe anyone who says it is plug and play. IT IS NOT. Any tiny adjustment you make to the network and it won't work anymore, and you have to reboot both machines. I even bought a Mellanox switch and I gotta say, it is horrible.
    I don't know about the modern implementations of it like on CX5 or CX6, but I don't believe it is really as ready for the market as it is believed to be. Just stick to regular old Ethernet.

  • @JavierChaparroM
    @JavierChaparroM Před rokem

    Re-visiting a video I once thought I would never be able to re-visit, haha. I'm trying to set up a Proxmox cluster with network storage, and oddly enough, in 2023 40Gbps gear is almost as cheap as 10Gbps gear.

  • @YHK_YT
    @YHK_YT Před rokem +5

    40Gb/s is actually at least 1.4x faster than 10Gb/s

  • @shephusted2714
    @shephusted2714 Před rokem +9

    You should go to 100GbE - the procedure is mostly the same and the price is not all that much more - Mikrotik has nice 100G switches now also - 2023 will see more SMB and SOHO go to 100GbE in lieu of 25/40 - you can get breakout cables that split 100G into 4x 25GbE - can be a major time saver for people that move around a lot of big data, and it lowers cluster overhead also

    • @XDarkstarXUnknownUnderverse
      @XDarkstarXUnknownUnderverse Před rokem

      I love Mikrotik! They have so much value and flexibility!

    • @timramich
      @timramich Před rokem

      100 gig is too expensive yet if you want real enterprise switching.

    • @shephusted2714
      @shephusted2714 Před rokem

      @@timramich 100g mikrotik switch is less than 100 bucks now - compare cost per port 2.5 vs 100g and you will see 100g is actually cheap - don't leave all that performance on the table

    • @timramich
      @timramich Před rokem +1

      @@shephusted2714 Less than one hundred dollars? No.

    • @shephusted2714
      @shephusted2714 Před rokem

      @@timramich i meant to say 800 - sorry - per port 100g is still a bargain when compared to 2.5 you can use 25g in breakout cables - try ebay - lots of surplus and refurb fiber cards - it is way to go for smb and if you value your time

  • @ajv_2089
    @ajv_2089 Před rokem

    Wouldn't SMB Multichannel also be able to accomplish these speeds?

  • @moellerjon
    @moellerjon Před rokem

    Seems like you’d get better speeds with less overhead doing thunderbolt direct-attach-storage over optical

  • @Veyron640
    @Veyron640 Před 8 měsíci

    you know ... there is a saying.. right?
    "there is NEVER enough speed"
    so... give me 40
    Give me fuel
    Give me fire..
    ghmm
    the end.

  • @LampJustin
    @LampJustin Před rokem

    V2.0 would be using SRIOV to pass through a virtual function to the VM ;)

  • @sashalexander7750
    @sashalexander7750 Před rokem +2

    Here is a good switch for 40g/10g setup Brocade ICX6610 48 port

  • @Nelevita
    @Nelevita Před rokem

    I can give you 2 tips for your 40Gbit network cards. 1) Use NFS for file transfer; it's possible and easy to activate in Windows, the only catch being that the drive mounts have to be re-done at every restart with a startup task. 2) If you really, really, really need SMB on your local network, use the Pro for Workstations edition of Windows and use SMB Direct/Multichannel, which keeps the CPU from getting hit by network traffic; there are some good tutorials out there, even for Linux.

  • @vincewolpert6166
    @vincewolpert6166 Před rokem +1

    I always buy more hardware to justify my prior purchases.

  • @Bwalston910
    @Bwalston910 Před 7 měsíci

    What about thunderbolt 4 / USB4?

  • @levelnine123
    @levelnine123 Před rokem +1

    Try crossflashing the latest firmware on both cards; it fixed some problems for me at last. From what I remember, you only get the full 40GbE on a single port in IB mode.

    • @RaidOwl
      @RaidOwl  Před rokem

      Yeah IB wouldn’t play nice with Proxmox tho. Def worth looking into at some point.

  • @inderveerjohal7218
    @inderveerjohal7218 Před 6 měsíci

    any way to do this for Mac? Off an UNRAID server?

  • @MrBcole8888
    @MrBcole8888 Před rokem

    Why didn't you just pop the other card into your Windows machine to change the mode permanently?

  • @Maine307
    @Maine307 Před rokem

    WHAT ISP COMPANY PROVIDES THAT TYPE OF SPEED?? Here I am, just a few months into Starlink after having HughesNet for 8 years... I get 90-ish Mbps download reliably now and I feel like I am king! How and who provides that much speed?? Wow

    • @RaidOwl
      @RaidOwl  Před rokem

      That’s not the speed through my ISP that’s the just speed I can get from one computer to another in my LAN

  • @RandomTechWZ
    @RandomTechWZ Před rokem

    That Asus Hyper card is so nice and well worth the money.

  • @computersales
    @computersales Před 4 měsíci

    Crazy to think 100Gb is becoming more common in homelabs now and 10Gb can borderline be found in the trash. 😅

  • @anwar.shamim
    @anwar.shamim Před rokem +1

    its great

  • @jacobnoori
    @jacobnoori Před rokem +1

    Network speeds are like the lift kits of an IT nerd - You're compensating the higher you go. This coming from somebody who recently went to 10G in my home. 🤓

    • @RaidOwl
      @RaidOwl  Před rokem +1

      lol I can agree with that

  • @noxlupi1
    @noxlupi1 Před rokem

    Windows' network stack is absolute bs.. But with some adjustments you should be able to hit 35-37Gbit on that card. It is the same with 10Gbit: by default it only gives you about 3-4Gbit in Windows, but you can get it to around 7-9Gbit with some tuning.
    It is also dependent on the version of Windows. Windows Server does way better than Home and Pro.. And Workstation is better if you have RDMA enabled on both ends.
    Good places to start are: frame size / MTU (MTU 9000 - jumbo frames are a good idea when working with big files locally). Try turning "Large Send Offload" off; on some systems the feature is best left on, but on others it is a bottleneck. Also, interrupt moderation is on by default. On some systems this can be good to avoid dedicating too much priority to the network, but on a beefy system it can often boost network performance significantly if turned off.
    If you want to see your card perform almost at full blast, just boot your PC from an Ubuntu USB and do an iperf to the BSD NAS.

  • @jumanjii1
    @jumanjii1 Před rokem

    I can't even get 10gb to work on my LAN, let alone 40gb.

  • @cyberjack
    @cyberjack Před rokem

    network speed can be limited by drive speed

  • @SirHackaL0t.
    @SirHackaL0t. Před rokem +1

    What made you use 40.x.x.x instead of 10.x.x.x?

    • @RaidOwl
      @RaidOwl  Před rokem +3

      Easy to remember since it’s 40G and wanted it easily distinguishable from my regular subnet.

    • @kingneutron1
      @kingneutron1 Před 5 měsíci

      @@RaidOwl possibly others have mentioned this but you'd be better off using 10.40 or 172.16.40 private address range ;-)

    • @RaidOwl
      @RaidOwl  Před 5 měsíci +1

      @@kingneutron1 yeah I've since changed it

  • @liquidmobius
    @liquidmobius Před rokem +4

    Go big or go home!

  • @ronaldronald8819
    @ronaldronald8819 Před rokem

    No, gonna stick to 10Gb, happy with that.

  • @SilentDecode
    @SilentDecode Před rokem

    Why the strange subnet of 44.0.0.x? Just why? I'm curious!

    • @RaidOwl
      @RaidOwl  Před rokem

      Cuz I picked a random one for the sake of the video lol. No real reason.

    • @draskuul
      @draskuul Před rokem +3

      @@RaidOwl Please, please do yourself (and everyone else) a favor by using a proper private IP space (192.168/16, 10/8, 172.16/12). I worked at a place pre-internet days that used the SCO UNIX manual examples, which turned out to be public IP space, for all servers. Once we got internet-connected across the board it was a real pain to deal with later. The unknowing users may make the same mistake using your examples.

    • @RaidOwl
      @RaidOwl  Před rokem +1

      @@draskuul Yeah, its been updated since

  • @nyanates
    @nyanates Před rokem

    Because you can.

  • @MrBrutalmetalhead
    @MrBrutalmetalhead Před rokem +2

    That is awesome. Its getting so much cheaper now for 40G

    • @logan_kes
      @logan_kes Před rokem

      Ironically it *used to* be even cheaper in 2017-ish… prices of used server gear have increased dramatically over the past 3 years. Look at the Linus Tech Tips video on a similar setup years ago; I want to say he got his cards for less than half the price they sell for now. I got some back then for like $35 a card, for the same cards.

  • @ChristopherPuzey
    @ChristopherPuzey Před rokem +1

    Why are you using public ip addresses on your LAN?

    • @RaidOwl
      @RaidOwl  Před rokem

      Those have been changed to private since then

  • @cdurkinz
    @cdurkinz Před 2 měsíci

    It's sad that you go from 10G to 40G and only double your speed. I am just looking into this, and it seems to be normal, at least while using Windows file copy.

    • @RaidOwl
      @RaidOwl  Před 2 měsíci

      Def diminishing returns

  • @Veyron640
    @Veyron640 Před 8 měsíci

    I have Ferrari..
    But, would I want you to have it??
    Absolutely Not! lol
    Thats kind of the tone of this vid on the receiving end.

  • @urzalukaskubicek9690
    @urzalukaskubicek9690 Před rokem

    How come you have 40.0.0.x addresses on your local network?

    • @RaidOwl
      @RaidOwl  Před rokem

      It’s my lucky number. But yeah it’s not in my subnet so I just picked something.

    • @urzalukaskubicek9690
      @urzalukaskubicek9690 Před rokem

      @@RaidOwl I mean.. you can do that? I don't understand networks, I'm more on the developer side of things, so networks are like dark magic to me :) I am just surprised; I would expect something like the router to complain or something..

    • @RaidOwl
      @RaidOwl  Před rokem +1

      @@urzalukaskubicek9690 Yeah it's because there is no router in that setup. It's just a direct connection between computers :)

    • @pg6525
      @pg6525 Před měsícem

      @@RaidOwl 40 like 40Gbit... :D

  • @jereviitanen6883
    @jereviitanen6883 Před rokem +1

    Why not use NFS?

    • @RaidOwl
      @RaidOwl  Před rokem +1

      Worth a shot I guess.

    • @LampJustin
      @LampJustin Před rokem

      But be sure to have a look at pNFS and NFS + RDMA...

  • @psycl0ptic
    @psycl0ptic Před rokem

    These cards are also no longer supported in vmware.

  • @meteailesi
    @meteailesi Před rokem

    The sound has some noise; you could clean up the audio.

  • @jonathanmayor3942
    @jonathanmayor3942 Před rokem +1

    Good video, but please clean up the cables in your NAS. God, please pardon him.

    • @RaidOwl
      @RaidOwl  Před rokem +1

      Lmao yeahhhh I’ve been doing some upgrades so cable management will come when that’s finished

  • @notafbihoneypot8487
    @notafbihoneypot8487 Před rokem

    Based

  • @donnyferris5521
    @donnyferris5521 Před rokem

    It seems like the only reason ANYONE does this is because they can. Transfer a file in .01 seconds vs .04 seconds? No thanks. It’s like modding a car for more, more, more horsepower when you almost never get to put all those horses to work. I, personally, wouldn’t spend the extra money on anything above 1Gb.

    • @RaidOwl
      @RaidOwl  Před rokem +1

      I agree and even said that this is dumb even for my use case. This belongs in enterprise solutions where you NEED that bandwidth, not in a home setup.

    • @donnyferris5521
      @donnyferris5521 Před rokem

      @@RaidOwl I know you did and didn’t mean to be critical of you or this video. I understand and appreciate why you did it; I’m just saying that - in general - spending the money on anything above 1Gb is foolish. May as well spend it on hookers and blow…

    • @logan_kes
      @logan_kes Před rokem

      @@wojtek-33 for home use, I agree, which is why the shift is to 2.5 g rather than 10g. However 10g or more has its place, a single HDD can typically saturate a 1 gigabit link, which should show how slow it truly is. A single SSD even a crappy sata one on a NAS could saturate 4x 1 gigabit links. So anyone wanting to host a VM on shared storage is gonna cry when they try to do it over 1 gig

  • @jfkastner
    @jfkastner Před rokem

    40Gbps looks like a 'dead end'; if you look at industry projections re: port quantities sold, it's 10, 25, 100, 400.

    • @ryanbell85
      @ryanbell85 Před rokem +2

      The dual port 40GbE cards are cheaper than 10GbE dual port cards on eBay right now. Why pay more for a point-to-point connection?

    • @jfkastner
      @jfkastner Před rokem

      @@ryanbell85 Many times when a manufacturer declares a product 'obsolete', 'legacy' etc active driver development stops or slows down to a crawl

    • @ryanbell85
      @ryanbell85 Před rokem +1

      @@jfkastner most home labs are full of unsupported, legacy, and second-hand equipment. It's just part of the fun to figure it out and stay on budget.

  • @XDarkstarXUnknownUnderverse

    My goal is 100Gb... because why not, and it's cheap (I use Mikrotik).

  • @minedustry
    @minedustry Před rokem

    Take my advice, I'm not using it.

  • @JasonsLabVideos
    @JasonsLabVideos Před rokem

    You don't need 40gig in your home lab. Show that you can saturate the 10g.

    • @RaidOwl
      @RaidOwl  Před rokem +1

      I agree. That was the whole point of the video lol

    • @repatch43
      @repatch43 Před rokem

      Did you actually watch the video?

    • @JasonsLabVideos
      @JasonsLabVideos Před rokem

      ​ @Raid Owl Exactly kinda of my point, i could have worded it better. You could do a 10g video and show that 40 isn't needed too..

    • @ryanbell85
      @ryanbell85 Před rokem

      10g would have cost more.

    • @JasonsLabVideos
      @JasonsLabVideos Před rokem

      @@ryanbell85 2x 10G cards $50, 1 DAC cable $20

  • @R055LE.1
    @R055LE.1 Před rokem

    People need to stop saying "research". They've been researching for hours. No you haven't. You've been studying. You didn't run real scientific experimentation with controls and variables, you read stuff online and flipped some switches. Most people have never conducted research in their lives. They study.

    • @RaidOwl
      @RaidOwl  Před rokem

      I used the scientific method. I also had a lab coat on…and nothing else 😉

    • @R055LE.1
      @R055LE.1 Před rokem

      @@RaidOwl i heavily respect this reply 🤣

  • @SimonLally1975
    @SimonLally1975 Před rokem

    Have you looked into the Mikrotik CRS326-24S+2Q+RM? I know it is a little on the pricey side. Or, if you are going to go for this, then go 100Gbps with the Mikrotik CRS504-4XQ-IN, just for sh!ts and giggles. :)

  • @npham1198
    @npham1198 Před rokem +1

    I would change that 40.x.x.x network into something under the rfc1918 private address space!

  • @johnkristian
    @johnkristian Před 8 měsíci

    Calling yourself a tech youtuber and being COMPLETELY clueless about InfiniBand. LOL