This is just too fast! 100 GbE // 100 Gigabit Ethernet!

  • Published 19 Jun 2024
  • This is crazy! Testing 100 GigE (100 Gigabit Ethernet) switches with an AMD Ryzen 9 5950X CPU and a Radeon RX 6900 XT GPU! Who will win: the 100 GbE network, or gaming PCs with 100 Gig network cards?
    Menu:
    100GbE network! 0:00
    How long will it take to copy 40Gig of data: 0:22
    Robocopy file copy: 1:08
    Speed results! 1:28
    Windows File copy speeds: 1:59
    iPerf speed testing: 2:30
    iPerf settings: 3:20
    iPerf results: 3:42
    100G Mellanox network cards: 5:14
    Jumbo Packets: 6:04
    Aruba switch: 6:26
    Switch configuration: 7:07
    Back-to-back DAC 100GbE connection: 8:52
    iPerf testing using DAC cable: 10:05
    Windows File copy speeds: 11:00
    Robocopy test: 11:30
    =========================
    Free Aruba courses on Udemy:
    =========================
    Security: davidbombal.wiki/arubasecurity
    WiFi: davidbombal.wiki/arubamobility
    Networking: davidbombal.wiki/freearubacourse
    ==================================
    Free Aruba courses on davidbombal.com
    ==================================
    Security: davidbombal.wiki/dbarubasecurity
    WiFi: davidbombal.wiki/dbarubamobility
    Networking: davidbombal.wiki/dbarubanetworking
    ======================
    Aruba discounted courses:
    ======================
    View Aruba CX Switching training options here: davidbombal.wiki/arubatraining
    To register with the 50% off discount enter “DaBomb50” in the discount field at checkout.
    The following terms & conditions apply:
    50% off promo ends 10/31/21
    Enter discount code at checkout, credit card payments only (PayPal)
    Cannot be combined with any other discount.
    Discount is for training with Aruba Education Services only and is not applicable with training partners.
    ================
    Connect with me:
    ================
    Discord: discord.com/invite/usKSyzb
    Twitter: twitter.com/davidbombal
    Instagram: instagram.com/davidbombal
    LinkedIn: www.linkedin.com/in/davidbombal
    Facebook: facebook.com/davidbombal.co
    TikTok: tiktok.com/@davidbombal
    CZcams: czcams.com/users/davidbombal
    aruba
    aruba 8360
    aruba networks
    aruba networking
    abc networking
    qsfp
    iperf
    robocopy
    aruba 6300m
    100gbe switch
    25gbe switch
    dac cable
    aruba instant one
    hpe
    hp
    hpe networking
    aruba mobility
    aruba security training
    free aruba training
    clearpass
    clearpass training
    hpe training
    free aruba clearpass training
    python
    wireshark
    mellanox
    mellanox connectx
    mellanox connectx-4
    Please note that links listed may be affiliate links and provide me with a small percentage/kickback should you use them to purchase any of the items listed or recommended. Thank you for supporting me and this channel!
    #100gbe #100gigethernet #arubanetworks
  • Science & Technology

Comments • 411

  • @davidbombal
    @davidbombal  Před 3 lety +9

    • @rishiboodoo863
      @rishiboodoo863 Před 3 lety

      You and Chuck are my inspiration

    • @wyattarich
      @wyattarich Před 2 lety

      I'd love to see if there's a big difference between explorer transfer speeds and Teracopy transfer speeds
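
      For reference, a minimal sketch of the kind of multithreaded copy robocopy does versus Explorer's single-stream copy (the paths, share name and thread count below are assumptions, not the setup from the video):

        # Multithreaded copy over the 100GbE link; /E takes subfolders,
        # /MT:32 runs 32 copy threads, /NFL /NDL keep the logging light.
        robocopy D:\TestData \\10.1.1.2\Share /E /MT:32 /R:1 /W:1 /NFL /NDL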

    • @lennyaltamura2009
      @lennyaltamura2009 Před 2 lety

      It's your I/O. The bus on your motherboard and CPU: the data is copying, but it hits the CPU first. What you need to jack up your speed is an enterprise-class RAID controller card with cache in your machine (not software RAID). Put a few fast SSDs (NVMe/M.2/U.2/Optane, whatever fast storage media) on there in a RAID 0 config, and you will achieve what your expectations originally were. The RAID controller will handle the I/O straight to the disk, eliminating the CPU from the equation. NeoQuixotic is almost correct (the lane configuration is part of the problem), but to avoid any could-bes and should-bes, just add the RAID controller with RAID 0 and voilà, problem solved. BTW, I love your education videos. Your edu vids are very, very good. I have one of the cybersecurity ones that you teach. I love cybersecurity.

    • @MichaelKnickers
      @MichaelKnickers Před rokem

      @@lennyaltamura2009 Which RAID controller models would you recommend?

    • @kailuncheng6912
      @kailuncheng6912 Před rokem

      Moving the network card to the first PCIe slot may solve the problem ^^

  • @NeoQuixotic
    @NeoQuixotic Před 3 lety +123

    I think your bottleneck is your PCI Express bandwidth. I'm assuming you have X570 chipset motherboards in the PCs you are using. You have 24 PCIe 4.0 lanes in total with the 5950x and X570 chipset. It is generally broken down to 16 lanes for the top PCIe slot, 4 lanes for a NVMe, and 4 lanes to the X570 chipset. However, this is all dependent on your exact motherboard, so I'm just assuming currently. Your GPUs are in the top x16 slot so your 100g NICs are in a secondary bottom x16 physical slot. This slot fits a x16 card, but is most likely electrically only capable of x4 speeds that is going through the x4 link of the chipset. Looking at Mellanox's documentation the NICs will auto-negotiate the link speed all the way down to x1 if needed, but at greatly reduced performance.
    This PCIe 4.0 x4 link is capable of 7.877 GB/s at most, or 63.016 Gb/s. As other I/O shares the chipset bandwidth, you will never see the max anyway. To hit over 100 Gb/s you would need to be connected to at least a PCIe 4.0 x8 link or a PCIe 3.0 x16 link. There are other factors that determine your actual PCIe bandwidth, such as whether your motherboard has a PCIe switch on some of its slots. You would want to check with the vendor or other users whether a block diagram exists; a block diagram will break down how everything is interconnected on the motherboard.
    You could try moving the GPUs to the bottom x16 slot and the NICs to the top slot. You could also confirm in the BIOS of each PC that the PCIe slots are set to auto-negotiate, or manually set the link if required, assuming that's an option in your BIOS.
    These NICs are more designed to be used in servers than a consumer end CPU/chipset. The HEDT (High End Desktop) and server CPUs from Intel and AMD have much more PCIe bandwidth to allow for multiple bandwidth heavy expansion cards to be installed and fully utilized.
    Being that I believe iPerf by default is memory to memory copying, you should be able to see close to the max 100Gb/s if you put them in the top slot on both PCs. As far as disk to disk transfers reaching that, you would need a more robust storage solution than what would be practical or even possible in a X570 consumer system.
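
    One quick way to confirm what the NIC actually negotiated, along the lines of this comment, is a PowerShell check (a minimal sketch; the adapter name is an assumption, see Get-NetAdapter for the real one):

        # Show the PCIe generation and lane count the NIC negotiated.
        Get-NetAdapterHardwareInfo -Name "Ethernet 4" |
            Select-Object Name, PcieLinkSpeed, PcieLinkWidth, NumaNode

        # Rough ceilings after 128b/130b encoding: PCIe 3.0 is ~0.985 GB/s per lane,
        # so x16 is ~126 Gb/s, x8 ~63 Gb/s and x4 ~31.5 Gb/s - which is why a x8 or
        # x4 link caps a 100GbE card well below line rate.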

    • @paulsccna2964
      @paulsccna2964 Před 3 lety +1

      I agree. By moving the video card, or by making sure the Mellanox card is in the slot with the highest PCIe speed (possibly allocating that speed to the slot manually), he can ensure the card gets full-speed performance from that PCIe slot and might be able to get closer to 100 Gb/s.

    • @JzJad
      @JzJad Před 3 lety +4

      Yeah wrong configuration for testing the card.

    • @neothermic1
      @neothermic1 Před 2 lety +6

      Looking at the panning of the motherboard at 5:00 this seems to be an Asus ROG Strix X570-F Gaming motherboard. (lack of 2 digit POST readout at the base of the board means it's not the other variants of the Strix X570). The GPU is plugged into PCIEX16_1, and the 100G card into PCIEX16_2 - the documentation suggests that when both these slots are occupied the motherboard goes down to PCIE 4.0 x8 on both slots, so in theory this isn't the issue. The PCIE_X1_2 is occupied by a wifi card, and that, from the documentation, steals lanes from the PCIEX16_3 slot (which runs off the PCIE 3.0 lanes anyway), so that also shouldn't be a problem. I would suggest a trip to the BIOS to ensure that the motherboard is correctly splitting the two PCIE 4.0 x16 slots into two x8s correctly, as your explanation matches up with what might be happening, in that the GPU is negotiating an x8 connection but the card doesn't and gets given a x4 bandwidth; server cards sometimes don't like being forced to negotiate for their slots.

    • @neothermic1
      @neothermic1 Před 2 lety

      That said, no idea what the _other_ computer is using, so I wager that one might be the one constricting down to a x4 lane, and that's the right answer.

    • @davidbombal
      @davidbombal  Před 2 lety +4

      PC Information:
      1 x AlphaSync Gaming Desktop PC, AMD Ryzen 9 5950X 3.4GHz, 32GB DDR4 RGB, 4TB HDD, 1TB SSD M.2, ASUS RX6900XT, WIFI, Mellanox CX416A ConnectX-4 100Gb/s Ethernet Dual QSFP28 MCX416A-CCAT D
      1 x AlphaSync Gaming Desktop PC, AMD Ryzen 9 5900X, 32GB DDR4 RGB, 4TB HDD, 1TB SSD M.2, AMD Radeon RX6800XT, WIFI, Mellanox CX416A ConnectX-4 100Gb/s Ethernet Dual QSFP28 MCX416A-CCAT

  • @simbahunter
    @simbahunter Před 3 lety +32

    Absolutely the best way to predict the future is to create it.

  • @michaelgkellygreen
    @michaelgkellygreen Před 3 lety +1

    Very well explained and interesting video. Technology advances are coming thick and fast. Speeds we didn't even dream of getting 10 years ago are a reality. Keep up the good work David

  • @Paavy
    @Paavy Před 3 lety +6

    100 Gbps is crazy when I'm still amazed by my 1 Gbps. Love the content, appreciate everything you do :)

  • @69purp
    @69purp Před 2 lety

    This was the craziest setup for my Arch. Thank you, David

  • @MangolikRoy
    @MangolikRoy Před 3 lety +1

    This video is enriched with some solid information, thank you David 🙏
    You are the only one I can comment to without any hesitation, because you always... I'm at a loss for words

  • @georgisharkov9564
    @georgisharkov9564 Před 3 lety +4

    Another interesting video. Thank you, David

  • @daslolo
    @daslolo Před rokem +5

    The DMI is the bottleneck. Move your NIC to pcie_0, the one linked directly to your CPU and if you can, turn on RDMA.

  • @James-vd3xj
    @James-vd3xj Před 3 lety +22

    I would imagine the limitation also involves the HDD/SSD.
    Please make sure to update us when you solve this as all of us would be interested in upgrading our home networks!
    Thanks for the video, and all your encouraging messages.

    • @davidbombal
      @davidbombal  Před 3 lety +8

      Agreed James. I would have to test it using Linux to see where the limitation is.

    • @samadams4582
      @samadams4582 Před 3 lety +3

      Yes, a top of the line 970 Evo nvme can only push around 25 gigabits/second. A gen 4 nvme may be able to push closer to 50, but this switch isn't geared for servers, it's geared for a network core with many pcs or servers connected at once.
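
      One way to sanity-check whether local storage is the ceiling before blaming the network is to measure the drives by themselves; a rough sketch using the built-in winsat tool from an elevated prompt (the drive letter is an assumption):

        # Sequential read and write throughput of the local drive. Results around
        # 1-3 GB/s (8-24 Gb/s) mean the disks, not the 100GbE link, set the copy speed.
        winsat disk -seq -read -drive d
        winsat disk -seq -write -drive d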

    • @oildiggerlwd
      @oildiggerlwd Před 3 lety +3

      I think LTT had to use threadrippers and honey badger SSD’s to approach saturating a 100gb link

    • @oildiggerlwd
      @oildiggerlwd Před 3 lety

      czcams.com/video/18xtogjz5Ow/video.html

  • @nicholassattaur9964
    @nicholassattaur9964 Před 3 lety

    Awesome and informative video! Thanks David

  • @QuantumBraced
    @QuantumBraced Před 3 lety +5

    Would love a video on how the network was set up physically, what cards, transceivers and cables you used.

  • @lunhamegenogueira1969
    @lunhamegenogueira1969 Před 3 lety

    Great video as always DB. It would be nice to see the speeds the switch itself was actually putting out as well, without ignoring, of course, the fact that the OS showed the card was already running @ 100% of its capacity 🧐🧐🧐

  • @bilawaljokhio7738
    @bilawaljokhio7738 Před 3 lety

    Sir David, you are such a good teacher. I really enjoy your videos and have learned a lot; after your last video I started a Python course. Love you, sir

  • @napm54
    @napm54 Před 3 lety

    The PC is having a hard time processing that big data, that's a big problem :) thanks again David for this hands on video!

  • @gjsatru3383
    @gjsatru3383 Před 3 lety

    Great, David, this is very good. I just wanted this because I am in a part of India where I have to worry about the network. Also, I am sorry for being so late, David; I was just looking at some basics of SSI syntax and shell scripting

  • @x_quicklyy6033
    @x_quicklyy6033 Před 3 lety

    That’s incredibly fast. Thanks for the great video!

    • @x_quicklyy6033
      @x_quicklyy6033 Před 3 lety

      @@davidbombal By the way, do you still use Kali? My Kali Linux randomly shut down and now it won’t boot.

  • @vyasG
    @vyasG Před 3 lety +2

    Thank You for this interesting Video. Mouth-watering speeds! I would agree with what James mentioned regarding SSD/HDD. I was thinking how the storage devices will handle such speeds. Do you have an array of storage devices to handle such speeds?

  • @naeem8434
    @naeem8434 Před 3 lety

    This video is crazy as well as informative.

  • @uzumakiuchiha7678
    @uzumakiuchiha7678 Před 3 lety

    When you are a beginner and things are going over your head, but David explains so clearly that you think you actually know this, you realise how much effort David himself must have put in to learn this stuff and teach it to us.
    That too for free.
    Thank you Sir🙏

  • @danielpelfrey1656
    @danielpelfrey1656 Před 3 lety

    Awesome to see you talk about high-performance computing and networking topics! (We met at Cisco Live in San Diego a few years ago and talked about the Summit supercomputer and Cumulus Linux.) The first bottleneck was your disks.
    The second bottleneck is probably your PCIe slot. Is your motherboard PCIe gen 2, 3 or 4? How many lanes do you have on the slot and on the card?
    What is your single TCP stream performance? Remove the -P option and do a single stream, then 2, 4, 8, and so on.
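
    A sketch of that sweep in iperf3 syntax (the address, duration and window size are assumptions, and the video may be running iperf 2, whose flags are similar):

        # On the receiving PC:
        iperf3 -s

        # On the sending PC: one stream first, then scale up the parallel streams.
        iperf3 -c 10.1.1.2 -t 30
        iperf3 -c 10.1.1.2 -t 30 -P 2
        iperf3 -c 10.1.1.2 -t 30 -P 4
        iperf3 -c 10.1.1.2 -t 30 -P 8 -w 4M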

  • @konstantinosvlitakis
    @konstantinosvlitakis Před 3 lety +2

    Hi David, really interesting speed test you demonstrated. Could you possibly repeat the test between two linux machines? It is known that networking stack in *nix like systems is highly optimized. Top quality video as always! Thank you very much!

  • @ImagineIfNot
    @ImagineIfNot Před 3 lety

    Now i just wanna watch videos to give you views and to learn ofc after all the good stuff you're doing to the people...
    You're smart. You are building an actual fanbase that lasts long

  • @Dani-cr7cj
    @Dani-cr7cj Před 3 lety

    Hey David, thank you for the video. Actually, we are using Aruba L3 - 3810M, L2 - 2930F, WLC 7010 and AP 308. As a hardware it is very nice, and their prices can compete with Cisco. The only issue is the technical support - Might be my area. Thank you again.

  • @kungsmechackasher6405
    @kungsmechackasher6405 Před 3 lety +2

    David you're amazing .

  • @MiekSr
    @MiekSr Před 3 lety +1

    Maybe do a file transfer between two servers? I'm also interested to see how Intel desktops handle 100Gb network speeds. Cool vid!

  • @ta1ism4n65
    @ta1ism4n65 Před 2 lety

    Great video, thanks for sharing this. While I don't think that AMD Ryzen CPU has integrated graphics (from a Google search; I don't run AMD anymore), I'd be interested to see what the performance would be like without the GPU attached to the PCIe lanes. For example, using an Intel chip with integrated graphics, would that open up enough PCIe bandwidth to get more throughput?

  • @bobnoob1467
    @bobnoob1467 Před 3 lety

    Awesome video. Keep it up.

  • @harounamoumounikomoye9498

    Thanks for this beautiful tutorial

  • @MihataTV
    @MihataTV Před 3 lety +5

    In the file copy test the limitation comes from the storage speed; you can check it with local file copies.

    • @norbertopace7580
      @norbertopace7580 Před 3 lety

      Maybe the storage is the problem, but Windows uses memory caching for these tasks.

    • @BelowAverageRazzleDazzle
      @BelowAverageRazzleDazzle Před 3 lety

      Doubtful... Storage systems and SSDs are measured in MegaBYTES per second, not MegaBITS... The problem is the bus speeds between RAM, CPU and the NIC.

  • @stark6314
    @stark6314 Před 3 lety +5

    Really fast ❤️🔥🔥

  • @ImLearningToTrade
    @ImLearningToTrade Před 3 lety

    Interesting. I got the notification for this video on my work phone, but it took a full two minutes for this video to show up on your channel.

    • @davidbombal
      @davidbombal  Před 3 lety

      Not sure why YouTube did that... but I have been seeing strange stuff happening recently.

  • @channel-ch2hc
    @channel-ch2hc Před 3 lety

    You are doing a great job. Keep it up

  • @samislam2746
    @samislam2746 Před 3 lety +1

    Thanks for sharing this

  • @fy7589
    @fy7589 Před 3 lety +1

    I haven't tried such a crazy thing like you did, so I might be suggesting something you already tried, but have you tried putting multiple PCIe 4.0 SSDs in RAID 0 directly on the CPU lanes? Also maybe on a Ryzen 5800X rather than a dual-CCD CPU. The way the I/O die works may be limiting your case: it may be splitting the load between the two CCDs, and while a single CCD is dedicated to the task and the other handles everything else, the I/O die still has to distribute its resources evenly per die. You might want to try it with a RAID 0 config directly on the CPU lanes.
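
    For illustration, one software-side way to stripe spare NVMe drives on Windows is a Storage Spaces simple (striped) volume; a hedged sketch only, since exact parameters vary by Windows version, the pool and volume names here are made up, and the commenter may well mean BIOS-level RAID on the CPU lanes instead:

        # Pool every disk that is free to pool, then carve a two-column striped volume.
        $disks = Get-PhysicalDisk -CanPool $true
        New-StoragePool -FriendlyName "NvmePool" `
            -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks
        New-Volume -StoragePoolFriendlyName "NvmePool" -FriendlyName "Stripe" `
            -ResiliencySettingName Simple -NumberOfColumns 2 -Size 1TB `
            -FileSystem NTFS -DriveLetter S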

  • @heathbezuidenhout2551
    @heathbezuidenhout2551 Před 3 lety +2

    The Aruba 8360 switch you're using, if I heard correctly, is an enterprise core switch for data centers and campuses. Not bad at all for a home network

    • @BelowAverageRazzleDazzle
      @BelowAverageRazzleDazzle Před 3 lety

      No kidding, right... I wish I just had a couple of 20,000 dollar switches lying around too...

  • @russlandry995
    @russlandry995 Před 3 lety +1

    What drives are you using? Even PCIe 4.0 SSDs top out at about 7.5 GB/s of read/write speed. I doubt you'll be able to go faster by copying, but you should be able to stream (no clue how you could test at that speed) faster than you are copying

  • @tz4399
    @tz4399 Před 3 lety

    Hi David, What SSDs were you using? I'm guessing something in RAID0 at least?

  • @t.b.6880
    @t.b.6880 Před 3 lety

    David, the speed limitation might be related to read/write disk operations. An enterprise SSD might help. Also, check in the BIOS whether any power-saving mode is activated. Another bottleneck can be dynamic CPU power allocation...

  • @bibhashpodh1074
    @bibhashpodh1074 Před 3 lety

    Great video😍

  • @jerrybossard
    @jerrybossard Před 3 lety

    What is the PCIe version and bandwidth (x1, x8, x16) on the motherboard that the NIC is plugged into?

  • @yashwantreddyr8286
    @yashwantreddyr8286 Před 3 lety

    Woooww...that's awesome🔥🔥🔥

  • @abdenacerdjerrah7047
    @abdenacerdjerrah7047 Před 3 lety

    Awesome video sensie 👺

  • @sagegeas9205
    @sagegeas9205 Před 2 lety +2

    How ironically fitting is it that the most you can get at that 6:30 mark is 56Gbits per second...
    How far we have come from the simple modest and humble 56k modems... lol

    • @davidbombal
      @davidbombal  Před 2 lety +1

      lol... now that is a great comment!

  • @CLEARRTC
    @CLEARRTC Před 3 lety

    What drives are you using, and how much RAM? Best case, I think a Gen 4 PCIe x4 NVMe drive can do around 64 Gbps (16 GT/s per lane)

  • @farghamahsan5034
    @farghamahsan5034 Před 3 lety

    David you are awesome for the world. Please make video parts on SFP with details.

  • @sob3ygrime
    @sob3ygrime Před 3 lety

    I was going to ask what switch that was, thank you. I think I'll grab one to play with :)

  • @naeem8434
    @naeem8434 Před 3 lety

    Sir, I have a question. If I don't have a router but I want to transfer data from one PC to another PC, can I use a normal Ethernet cable, connect one end to one PC and the other end to the other PC, and will that work?
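
    It generally does: modern NICs handle the crossover automatically (Auto-MDIX), so a normal cable works. Because there is no DHCP server on a back-to-back link, each PC just needs a static address, much like the back-to-back DAC setup in the video. A minimal sketch (interface alias and addresses are assumptions):

        # On PC 1 (elevated PowerShell; pick the interface cabled to the other PC):
        New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 192.168.50.1 -PrefixLength 24

        # On PC 2:
        New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 192.168.50.2 -PrefixLength 24

        # Then test from PC 1:
        ping 192.168.50.2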

  • @RepaireroftheBreach
    @RepaireroftheBreach Před rokem

    David, were you ever able to fix this limitation? I have the same problem, but I have an Asus PCIe Gen 4 motherboard running a Threadripper on Windows 11 22H2, with newer NICs such as the QNAP CXG-100G2SF-CX6 and the Mellanox (MCX623106AN-CDAT) with different 100G cables, etc. I also tried switching PCIe slots with the GPU and verifying the NICs are running at x16 in the BIOS, and I still can't break the 50-55 Gb/s limit. Did you ever figure this out?

  • @LORDJPXX3
    @LORDJPXX3 Před 3 lety

    Bloody hell that's faster than the network backbone that I recently built at work.

  • @Firoz900
    @Firoz900 Před 3 lety

    Thank you guru.

  • @bahmanhatami2573
    @bahmanhatami2573 Před 3 lety +7

    Do these cards support Direct Memory Access (DMA)?
    I've heard that DMA and RDMA are there to solve these sorts of problems. As I've never had such a sweet issue, I don't know exactly whether, for example, you should enable it or it is enabled by default, or how exactly you can take advantage of it on Windows machines...
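
    On the Windows side, the RDMA (RoCE, on these ConnectX-4 cards) state can be inspected and toggled per adapter; whether it actually engages also depends on driver, firmware and, across a switch, the switch configuration. A hedged PowerShell sketch, with the adapter name as an assumption:

        # Is RDMA enabled on the NIC, and does SMB see an RDMA-capable interface?
        Get-NetAdapterRdma
        Get-SmbClientNetworkInterface

        # Turn it on if the driver exposes it but it is disabled:
        Enable-NetAdapterRdma -Name "Ethernet 4"

        # During a file copy, this lists the SMB connections in use and whether
        # both ends are RDMA capable; if RDMA is active, the copy barely shows
        # up in Task Manager because it bypasses the normal network stack.
        Get-SmbMultichannelConnection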

    • @davidbombal
      @davidbombal  Před 3 lety +3

      DMA is enabled on the computers. But I will need to check whether that requires a newer version of the network card.

    • @James_Knott
      @James_Knott Před 3 lety +2

      I thought DMA was the norm for many years. Not using it would be a real waste of performance even at much lower rates. While I haven't noticed this with NICs, specs for switches often list frames per second, which implies DMA.
      BTW, long distance fibre links often run at 100 Gb per wavelength.

    • @audiencemember1337
      @audiencemember1337 Před 3 lety +1

      @@davidbombal Doesn't the switch need to be configured for RDMA as well? You shouldn't be able to see the file transfer in Task Manager if this is working correctly, as RDMA bypasses the traditional network stack

    • @giornikitop5373
      @giornikitop5373 Před 3 lety +1

      @@davidbombal I believe that if RDMA was working on all sides, there would have been no utilization shown on the NICs in Performance Monitor, as it completely bypasses the CPU and those counters. CPU utilization would also have been minimal.

    • @giornikitop5373
      @giornikitop5373 Před 3 lety

      @@davidbombal Also, as others recommended, make sure the NIC is using a full x16 PCIe slot wired directly to the CPU, not through the chipset. I think even that is barely enough for that 2-port 100GbE card at full duplex.

  • @FYDanny
    @FYDanny Před 3 lety

    YES! I love networking!

  • @rzjo
    @rzjo Před rokem

    Everything in your setup seems fine. You are reaching around 40% of the network's capacity; to reach around 90% you may need to check the PCIe NVMe speed. The drives need to be set up as RAID to increase the throughput, and there is a special adapter for this. Hopefully this helps you :)

  • @KevinSatterthwaite
    @KevinSatterthwaite Před 3 lety

    What motherboard did you use? Maybe your board dumbs down the second PCIe x16 slot when both are populated.

  • @ragalMX
    @ragalMX Před 2 lety

    How much time does it take if you split your drive into another logical unit and copy those 40 GB to the other partition? Maybe the bottleneck is the storage

  • @carl4992
    @carl4992 Před 3 lety

    Hi David, if you haven't already, try turning off interrupt moderation on both adaptors.
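
    That setting lives in the adapter's advanced properties; a sketch of flipping it from PowerShell (the adapter name is an assumption, and the exact display name can vary between drivers):

        # Check the current value, then disable interrupt moderation on this adapter.
        Get-NetAdapterAdvancedProperty -Name "Ethernet 4" -DisplayName "Interrupt Moderation"
        Set-NetAdapterAdvancedProperty -Name "Ethernet 4" -DisplayName "Interrupt Moderation" `
            -DisplayValue "Disabled"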

  • @curtbeers1606
    @curtbeers1606 Před 3 lety

    It would be interesting to run them both with Linux to see if there is an OS issue. I agree with the comment about the bottleneck possibly being the HDD/SSD or the bus speed of the expansion slot the NIC is installed in.

  • @anfxf6513
    @anfxf6513 Před 3 lety

    Keep Up Sir We Love Your Videos
    Specially Ethical Hacking Related 🥰

  • @anfxf6513
    @anfxf6513 Před 3 lety

    This Is Awesome Sir
    Such a Speed
    I won't Be able To Test It Any Day😥
    Bcz My Pc is Very Low Performing.

    • @anfxf6513
      @anfxf6513 Před 3 lety

      @@davidbombal I Hope So Sir

  • @FliesEyes
    @FliesEyes Před rokem

    My thought would be the slot the adapter card is using. Consumer motherboards tend to have specific constraints on PCIe lane allocation and bifurcation settings in the BIOS.
    I hope to do some similar testing on a Z790 motherboard in the near future.

  • @Sky-wp4vj
    @Sky-wp4vj Před 3 lety

    Hey David, a question (I don't know where to ask questions like this or how to message you): on subnetting, where you do the split on the host portion, how do you do that with a single digit? Like 172.168.192.0/18: I can split the 192 at the 18th bit, but what happens if it is just a single digit? Can you help with subnetting single digits instead of triple? Let me know; I'm taking your CCNA course. I got the triple-digit case but am a little confused on the single-digit one.

  • @derickasamani5730
    @derickasamani5730 Před 3 lety

    Mr. Bombal, please, what are the prices of the NICs, the switch and the DAC?

  • @JorisSelsJS
    @JorisSelsJS Před 3 lety +2

    Hey David, as a 20-year-old Belgian entrepreneur who has founded a networking company, I still find your videos very helpful in all sorts of ways! I want to thank you for all the amazing content and encourage you to keep doing what you do, because we all love it! Besides that, if you ever want to talk about what we do or are interested, feel free to contact me anytime! :)

  • @rahulprasad2318
    @rahulprasad2318 Před 3 lety +1

    Are you sure your storage medium itself allows 100 Gbps?

  • @rahultatikonda
    @rahultatikonda Před 3 lety

    WE ENJOY YOUR VIDEOS SIR THANKS FOR YOUR TEACHING SIR

  • @mihumono
    @mihumono Před 3 lety +1

    Is the card PCIe x16? Maybe it is running in x8 mode. Also, how fast is your RAM (subtimings?)?

  • @ajs3041
    @ajs3041 Před 3 lety

    Really sick

  • @singhatul7442
    @singhatul7442 Před 3 lety

    Sir, I have a question: for hacking WiFi, do we need an external WiFi adapter?

  • @SumitSharma-tk8hp
    @SumitSharma-tk8hp Před 3 lety

    Thank you sir!

  • @ayush_panwar1
    @ayush_panwar1 Před 3 lety +1

    That speed is awesome even if it's not the maximum. Also, can you make videos on SOC and blue-team career opportunities? 🤗😇

  • @spuriustadius5034
    @spuriustadius5034 Před 2 lety +1

    These NICs are intended for specialized networking applications and not general-purpose usage. There ARE products with servers that can make use of them, but they're typically for network monitoring, things like line-speed TLS decryption for enterprises, or "network appliances" that analyze IP traffic and also record it (usually with some filtering conditions, because even a huge RAID array will fill up quickly at such speeds). It's fun to see what would happen if you pop one of these into a relatively normal desktop, however!

    • @davidbombal
      @davidbombal  Před 2 lety +1

      Agreed. But got to have some fun with such cool switches :)

  • @JohnDoe-sm7vw
    @JohnDoe-sm7vw Před 3 lety

    Rockin Chillin 😎

  • @kintag4459
    @kintag4459 Před 3 lety

    Thanks brother

  • @CalvinHenderson
    @CalvinHenderson Před 2 lety

    Curious if the storage drives being used could be a bottleneck as well? I've seen other comments about the PCIe card slot. My thought is that the speeds of a single HDD or SSD would not individually keep up with these network speeds. Maybe my thought is wrong.
    Late to the party, I am.

  • @wilhelmngoma9009
    @wilhelmngoma9009 Před 3 lety

    Awesome!

  • @magicwheelder
    @magicwheelder Před 3 lety

    I would like to see how many computers you can put on it before the speed starts to drop below 55 Gb/s

  • @gregm.6945
    @gregm.6945 Před 3 lety +2

    The copying window @ 11:20 shows 2.23GB/s. Doesn't that 2.23GB/s represent 2.23 gigabytes/second, not 2.23 gigabits/second (uppercase B = bytes, lowercase b = bits)? This would mean your throughput for these files is actually 2.23 GB/s * 8 = 17.84 gigabits/second, or 17.84 Gb/s. Sadly, still nowhere near that 55 Gb/s from iperf though

  • @robertmcmahon921
    @robertmcmahon921 Před 3 lety

    Speed is defined by latency, not throughput. iperf 2 supports latency or one way delay (OWD) measurements but one has to sync the clocks.

  • @paulsccna2964
    @paulsccna2964 Před 3 lety

    Most likely the PCI bus is the limit on the PC. If possible, you might be able to tweak things and ensure that the 100 Gb/s Ethernet card is actually running at the slot's full speed (for example, whether the slot allows x1, x4, x8 or higher). Also, some motherboards will "steal", or allocate, PCIe lanes for other devices, like M.2, and I am assuming you are using an M.2 drive for this testing; there could be a limit to the data transfer rate there. These might be good places to start. On a modern AMD motherboard it might be possible to allocate PCIe lanes to a specific slot; the downside might be giving up performance on some other part of the motherboard that depends on (or steals) those lanes. Regardless, 50 to 55 Gb/s is really good. But, as you have demonstrated, there are limiting factors. Many applications might not even be designed to handle such speeds and would only end up buffering. Certainly, for pushing data around it is neat; perhaps gaming? I look forward to a follow-up on this topic. One more thing: I wonder if there is a way to rip more speed out of the switch itself, for example with something like Cisco's cut-through switching (not even sure you can turn those features on in a granular way). As you mention, most likely the bottleneck is the software and the mobo.

  • @Mr.Ankesh725
    @Mr.Ankesh725 Před 3 lety

    Good knowledge of the video
    Love for India ❤️❤️

  • @MrTheAlexy
    @MrTheAlexy Před 3 lety

    What disks are you using in both PCs? It's an unreal speed.

  • @majowhar
    @majowhar Před 3 lety

    Sir, can you do a video on how to check the wireless properties of hardware, like monitor mode, in cmd?

  • @sayedsekandar
    @sayedsekandar Před 3 lety +1

    Today's topic gives the feeling of a data center.

  • @andreavergani7414
    @andreavergani7414 Před 3 lety

    I have the same problem in Windows, just with 10GbE. Changing jumbo frames on every node of the network doesn't seem to help.
    Any suggestions?
    I support your great work. Ciao.
    PS: I'm so jealous of that Aruba switch ahah :)
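
    For what it's worth, jumbo frames only help when every hop agrees: the NIC on each PC and the switch ports all need the larger MTU, and a don't-fragment ping is a quick end-to-end check. A hedged sketch for the Windows side (the adapter name and the 9014-byte value are assumptions, the display name and value format vary by driver, and the switch interfaces need a matching jumbo MTU as well):

        # Enable jumbo frames on the NIC.
        Set-NetAdapterAdvancedProperty -Name "Ethernet 4" -DisplayName "Jumbo Packet" `
            -DisplayValue "9014"

        # Verify end to end: 8972 bytes of ICMP payload plus 28 bytes of headers
        # makes a 9000-byte packet; -f sets don't-fragment, so this fails if any
        # hop in the path is still at a 1500-byte MTU.
        ping -f -l 8972 10.1.1.2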

  • @ivosarak959
    @ivosarak959 Před 3 lety +6

    The issue is likely with your disks. Make memory drives and test memory-to-memory transfers instead.

    • @leos4210
      @leos4210 Před 3 lety

      M.2 nvme ssd

    • @ivosarak959
      @ivosarak959 Před 3 lety +3

      @@leos4210 Even that is likely not performant enough to reach 100Gbps speeds.

    • @derekleclair8787
      @derekleclair8787 Před 2 lety +1

      You have to use an NVMe array; try 3 to 4 drives. And you should not be using iperf and robocopy, that's silly. Also, you have to have an RDMA connection, so make sure that's available using PowerShell. I personally have hit well over 25 gigabytes per second using multiple connections (3 dual 56 Gb cards) writing and reading from NVMe storage. Memory copy is too slow. I did this 4 years ago, so go back and try this again.

    • @davidbombal
      @davidbombal  Před 2 lety

      iPerf in this example is using memory to memory transfer.

  • @aritrakumar093
    @aritrakumar093 Před 3 lety

    Now this is cool

  • @LloydStoltz
    @LloydStoltz Před 3 lety

    Maybe the northbridge and the hard drives are the limiting factors. If you can, try to set up multiple NVMe drives acting as one drive.

  • @VincentYiu
    @VincentYiu Před 2 lety

    Have you tried messing with network congestion protocols?

  • @Technojunkie3
    @Technojunkie3 Před 3 lety +5

    You need PCIe Gen4 to keep up with 100Gbps. The ConnectX-4 NIC is Gen3. The AMD X570 desktop chipset doesn't have enough PCIe lanes to run both your GPU and NIC at full 16 PCIe lanes so the NIC is likely running 8x. You need a more modern NIC and a Threadripper or Epyc board that does PCIe Gen4. Maybe AMD can loan you a prerelease of their next-gen Threadrippers for testing? The current gen will work but...

    • @equilibrium4310
      @equilibrium4310 Před 3 lety

      PCI-e 3.0 x 16 is actually capable of 15.754 GB/s or converted to transfer speeds 125.6Gbps

    • @Technojunkie3
      @Technojunkie3 Před 3 lety

      @@equilibrium4310 I misremembered. PCIe Gen3 x16 can't sustain both 100Gbps ports on a dual port card. A single port at x16 would work. But he's almost certainly running x8 on that desktop board, so ~62Gbps or about what was shown in the video.
      Now that AMD is merging with Xilinx I think that this is a fine opportunity for AMD to loan out a pair of Threadrippers and Xilinx 100Gbps cards for testing.

    • @James_Knott
      @James_Knott Před 3 lety +2

      When I started working in telecom, way back in the dark ages, some of the equipment I worked on ran at a blazing 45.4 bits/second!

  • @heathbezuidenhout2551
    @heathbezuidenhout2551 Před 3 lety

    How about Network teaming 2 or more Network cards on one computer, maybe you can get higher speeds ? :)

  • @dlengelkes
    @dlengelkes Před 3 lety

    how about using actual servers with server grade hardware like the intel xeon or Amd Threadripper?

  • @dadsview4025
    @dadsview4025 Před 3 lety

    Unless you are using a RAM disk, you can't assume it's not the I/O to the drive. The fact that the CPU is not at 100% indicates it's an I/O bottleneck. It could also be the PC's Ethernet interface.
    Is the 10GbE interface built into the motherboard? Are you using physical drives? It would be helpful to give the motherboard and interface specs. I would also examine the throughput curve over time, which would reveal any caching delays, i.e. does the performance increase or decrease during the transfer? I did this sort of optimization on networks when 100 Mbit/s was fast ;)
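
    Watching that curve is straightforward with iperf's interval reporting; a sketch in iperf3 syntax (address and duration are assumptions):

        # Report throughput every second for a minute, so caching or thermal effects
        # show up as a rise or fall over time; --logfile keeps the run for later review.
        iperf3 -c 10.1.1.2 -t 60 -i 1 --logfile iperf_run.txt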

  • @haxwizard2035
    @haxwizard2035 Před 3 lety

    😊😀 Really well explained

  • @igazmi
    @igazmi Před 3 lety +1

    My guess would be to check whether all required LANES from the 100g eth card are granted.

  • @DD-hn2jr
    @DD-hn2jr Před 3 lety

    Is there a course or are there videos of yours for someone who has just entered networking?

  • @timmytwatcop8764
    @timmytwatcop8764 Před 3 lety +2

    Imagine that speed in an overt DDoS attack

  • @JeDeXxRioProKing
    @JeDeXxRioProKing Před 3 lety

    Hi David, thanks for the video. Aruba has great networking gear *_*. As for the performance, you will improve it a lot if you use a fast drive; that is the main problem. Use an SSD like the Samsung SSD 970 Evo Plus

    • @samadams4582
      @samadams4582 Před 3 lety +2

      I have a 970 Evo Plus and can't get anywhere close to 100GbE throughput. There is no SSD that can push around 12.5 gigabytes per second.

    • @JeDeXxRioProKing
      @JeDeXxRioProKing Před 3 lety

      @@samadams4582 Yes, you are right, there is always a limitation, and that limitation depends on your needs too. If you use, for example, 4 SSD drives in RAID-0... then what? Let me tell you that you will get more performance.

    • @samadams4582
      @samadams4582 Před 3 lety

      @@JeDeXxRioProKing Check out this video about NVME Raid 0 Performance. These are 2 PCIe 4.0 NVME Drives in RAID 0 on a Ryzen 9 3900x. You can see that the write performance is different than the read performance.
      czcams.com/video/Ffxkvf4KOt0/video.html

  • @paganini9643
    @paganini9643 Před 3 lety

    Is it because the PCIe 4.0 limit is 64 Gb/s? And you need PCIe 5.0 or 6.0?

  • @LuK01974
    @LuK01974 Před 3 lety

    Ciao David, the problem needs to be analyzed in depth.
    1st: the speed limitation of your HDD/SSD/NVMe.
    2nd: the driver of your NIC; use the driver from the vendor.
    To test full speed, try to use a RAM disk on both of your PCs and copy from RAM disk to RAM disk using robocopy.
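
    A sketch of that RAM-disk test using the third-party ImDisk driver (assumed to be installed; the size, drive letter, share name and paths are made up), so the copy runs memory to memory with the disks out of the picture:

        # Create a 16 GB NTFS RAM disk as R: on each PC.
        imdisk -a -s 16G -m R: -p "/fs:ntfs /q /y"

        # Copy a test data set from the local RAM disk to the remote one over the link.
        robocopy R:\TestData \\10.1.1.2\RamShare /E /MT:32 /R:1 /W:1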