Swap GPUs at the Press of a Button: Liqid Fabric PCIe Magic Trick!

  • Published 1. 06. 2024
  • Easily allocate hundreds of GPUs WITHOUT touching them!
    Check out other Liqid solutions here: www.liqid.com
    0:00 Intro
    1:00 Explaining the Magic
    2:00 Showing the Use Case Set Up
    4:41 The Problem with Microsoft
    6:07 Infrastructure as Code Hardware Control
    10:36 Game Changing our Game Testing and More
    16:12 Outro
    ********************************
    Check us out online at the following places!
    bio.link/level1techs
    IMPORTANT Any email lacking “level1techs.com” should be ignored and immediately reported to Queries@level1techs.com.
    -------------------------------------------------------------------------------------------------------------
    Music: "Earth Bound" by Slynk
    Edited by Autumn
  • Science & Technology

Comments • 256

  • @user-eh8oo4uh8h
    @user-eh8oo4uh8h Před 16 dny +74

    The computer isn't real. The fabric isn't real. Nothing actually exists. We're all just PCI-express lanes virtualized in some super computer in the cloud. And I still can't get 60fps.

    • @AlumarsX
      @AlumarsX Před 16 dny +1

      Goddamn Nvidia all that money and keeping us fps capped

    • @gorana.37
      @gorana.37 Před 16 dny

      🤣🤣

    • @jannegrey593
      @jannegrey593 Před 15 dny +2

      There is no spoon taken to the extreme.

    • @fhsp17
      @fhsp17 Před 15 dny

      The hivemind secret guardians saw that. They will get you.

    • @nicknorthcutt7680
      @nicknorthcutt7680 Před 2 hodinami

      😂😂😂

  • @Ultrajamz
    @Ultrajamz Před 16 dny +92

    So I can literally hotswap my 4090’s as they melt like a belt fed gpu pc?

    • @christianhorn1999
      @christianhorn1999 Před 16 dny +11

      That's like a Gatling gun for GPUs. Don't give the manufacturers ideas.

    • @TigonIII
      @TigonIII Před 16 dny +4

      Melt? Like turning them to liquid, pretty on brand. ;)

    • @BitsOfInterest
      @BitsOfInterest Před 13 dny

      I don't think 4090's fit in that chassis based on how much room is left in the front with those other cards.

    • @nicknorthcutt7680
      @nicknorthcutt7680 Před 2 hodinami

      Lmao

  • @ProjectPhysX
    @ProjectPhysX Před 16 dny +8

    That PCIe tech is just fantastic for software testing. I test my OpenCL codes on Intel, Nvidia, AMD, Arm, and Apple GPU drivers to make sure I don't step on any driver bugs. For benchmarks that need the full PCIe bandwidth, this system is perfect.
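
A minimal sketch of that kind of multi-vendor smoke test, assuming the pyopencl package is installed; it enumerates whatever GPUs are currently composed to the host and runs a trivial kernel on each, so vendor driver differences show up per device:

```python
# Enumerate whichever GPUs are currently composed to this host and run a tiny
# kernel against each one, so driver differences surface per vendor.
import pyopencl as cl
import numpy as np

def run_smoke_test(device):
    ctx = cl.Context([device])
    queue = cl.CommandQueue(ctx)
    prog = cl.Program(ctx, """
        __kernel void add_one(__global float *x) { x[get_global_id(0)] += 1.0f; }
    """).build()
    host = np.zeros(1024, dtype=np.float32)
    buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR, hostbuf=host)
    prog.add_one(queue, host.shape, None, buf)     # launch one work-item per element
    cl.enqueue_copy(queue, host, buf)              # read the result back
    return bool(np.allclose(host, 1.0))

for platform in cl.get_platforms():
    for dev in platform.get_devices(device_type=cl.device_type.GPU):
        ok = run_smoke_test(dev)
        print(f"{platform.name} / {dev.name}: {'PASS' if ok else 'FAIL'}")
```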

  • @abavariannormiepleb9470
    @abavariannormiepleb9470 Před 19 dny +99

    Please Liqid, introduce a tier for homelab users!

    • @popeter
      @popeter Před 16 dny +9

      oh yea could do so much, proxmox systems on dual ITX all sharing GPU and network off one of these

    • @marcogenovesi8570
      @marcogenovesi8570 Před 16 dny +8

      I doubt this can be made affordable for common mortals

    • @AnirudhTammireddy
      @AnirudhTammireddy Před 16 dny +6

      Please deposit your 2 kidneys and 1 eye before you make any such requests.

    • @abavariannormiepleb9470
      @abavariannormiepleb9470 Před 16 dny +2

      My humble dream setup would be a “barebones” kit consisting of the PCIe AIC adapters for the normal “client” motherboard and the “server” board that offers four x16 slots. You’d have to get your own cases and PSU solution for the “server” side.

    • @mritunjaymusale
      @mritunjaymusale Před 16 dny +1

      @@marcogenovesi8570 You can, though. In terms of hardware it's just a PCIe switch; the hard part is the low-level code to match the right PCIe device to the right CPU, and on top of that the software that connects it to workflows that can understand this.

  • @wizpig64
    @wizpig64 Před 19 dny +79

    WOW! imagine having 6 different CPUs and 6 GPUs, rotating through all 36 combinations to hunt for regressions! Thank you for sharing this magic trick!

    • @joejane9977
      @joejane9977 Před 16 dny +4

      imagine if windows worked well

    • @onisama9589
      @onisama9589 Před 16 dny

      Most likely the Windows box would need to be shut down before you switch, or the OS will crash.

    • @jjaymick6265
      @jjaymick6265 Před 12 dny

      I do this daily in my lab: 16 different servers, 16 GPUs (4 groups of 4), and fully automated regressions for AI/ML models, GPU driver stacks, and CUDA version comparisons. Like I have said in other posts, once you stitch this together with Ansible / Digital Rebar things get really interesting (see the sketch after this thread). Now that everything is automated, I simply input a series of hardware and software combos to test and the system does all the work while I sleep. I just wake up, review the results, and input the next series of tests. There is no more cost-effective way for one person to test thousands of combinations.

    • @formes2388
      @formes2388 Před 4 dny

      @@joejane9977 It does. I mean, it works well enough that few people go through the hassle of consciously switching. It's more a default switch if people start using a tablet as a primary device, due to not needing a full-fat desktop for their day-to-day needs.
      For perspective on where I am coming from: I have a trio of Linux systems and a pair of Windows systems; one of the Windows systems is also dual-booted to 'nix. I used to have a macOS system but have no need of one, and better things to spend money on.
      For some stuff Linux is great; the thing is, I have better things to do with my time than tinker with configs to get things running, so sometimes a Windows system just works.
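
A toy version of the combination sweep described in this thread. compose_gpu_to_host(), release_gpu() and run_benchmark() are placeholders for whatever the Liqid management API or your Ansible / Digital Rebar playbooks actually expose, not real calls; the host and GPU names are made up:

```python
# Rotate a pool of GPUs across a set of test hosts and collect results for
# regression hunting. The fabric/benchmark hooks are injected as functions.
import itertools

HOSTS = ["host-intel", "host-amd", "host-xeon", "host-epyc", "host-tr", "host-x3d"]
GPUS  = ["gpu-a", "gpu-b", "gpu-c", "gpu-d", "gpu-e", "gpu-f"]

def test_all_combinations(compose_gpu_to_host, release_gpu, run_benchmark):
    results = {}
    for host, gpu in itertools.product(HOSTS, GPUS):    # 6 x 6 = 36 combinations
        compose_gpu_to_host(gpu, host)                  # fabric attach, no hands on hardware
        try:
            results[(host, gpu)] = run_benchmark(host)  # score for this CPU/GPU pairing
        finally:
            release_gpu(gpu, host)                      # return the GPU to the free pool
    return results
```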

  • @d0hanzibi
    @d0hanzibi Před 16 dny +20

    Hell yea, we need that consumerized!

  • @chaosfenix
    @chaosfenix Před 16 dny +11

    I would love this in a home setting. If it is hot-pluggable it is also programmable, which means you could upgrade GPUs periodically, but instead of throwing the old one away you would just push it down your priority list. Hubby and wifey could get priority on the fastest GPU, and if you have multiple kids they would be lower priority; if mom and dad aren't playing at the moment, though, the kids could just get the fastest GPU (see the sketch after this thread). You could centralize all of your hardware in a server in a closet and then have weaker terminal devices. They could have an amazing screen, keyboard, etc. but cheap out on the CPU, RAM, GPU, etc. because those would just be composed when they booted up. Similar to how computers switch between an integrated GPU and a dGPU now, you could use the cheap device's iGPU for the basics, but if you opened an application like a game it would dynamically mount a GPU from the rack. No more external GPUs for laptops, and no more insanely expensive laptops with hardware that is obsolete for its intended task in 2 years.

    • @christianhorn1999
      @christianhorn1999 Před 16 dny +2

      moooom?! why is my fortnite dropping fps lmao

    • @SK83RJOSH
      @SK83RJOSH Před 16 dny

      I would have concerns about cross talk and latency from like, signal amplifiers, in that scenario. I could not imagine trying to triage the issues this will introduce. 😂

    • @chaosfenix
      @chaosfenix Před 15 dny

      @@SK83RJOSH I think latency would be the biggest one. I am not sure what you mean by crosstalk, though. If you mean signal interference, I don't think that would apply here any more than it applies in any regular motherboard and network. If you mean crosstalk over Wi-Fi, then that really would not be how I would do it; I would use fiber for all of this. Even Wi-Fi 7 is nowhere near fast enough for this kind of connectivity and would have way too much interference. Maybe a 60 GHz link, but that is about it.
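
A toy sketch of the priority idea from this thread; the GPU pool, user tiers and policy are invented purely for illustration:

```python
# Household GPU allocation toy: fastest free GPU wins, parents can bump kids
# off a card, and anything not in the rack falls back to the local iGPU.
GPU_RANKING = ["rack-gpu-fast", "rack-gpu-mid", "rack-gpu-slow"]   # fastest first (hypothetical pool)
PRIORITY = {"dad": 0, "mom": 0, "kid1": 1, "kid2": 2}              # lower number = higher priority

def pick_gpu(free: set[str]) -> str:
    for gpu in GPU_RANKING:              # grab the fastest GPU nobody is using
        if gpu in free:
            return gpu
    return "igpu"                        # nothing free in the rack: stay on the local iGPU

def may_preempt(requester: str, current_user: str) -> bool:
    # Mom and dad can reclaim the fast card from a kid, but not the other way around.
    return PRIORITY[requester] < PRIORITY[current_user]

print(pick_gpu(free={"rack-gpu-mid", "rack-gpu-slow"}))   # -> "rack-gpu-mid"
print(may_preempt("dad", "kid2"))                         # -> True
```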

  • @cs7899
    @cs7899 Před 16 dny +6

    Love Wendell's off label videos

  • @seanunderscorepry
    @seanunderscorepry Před 16 dny +6

    I was skeptical that I'd find anything useful or interesting in this video since the use-case doesn't suit me personally, but Wendell could explain paint drying on a wall and make it entertaining / informative.

  • @nicknorthcutt7680
    @nicknorthcutt7680 Před 2 hodinami

    This is absolutely incredible! Wow, I didn't even realize how many possibilities this opens up. As always, another great video man.

  • @Maxjoker98
    @Maxjoker98 Před 16 dny +5

    I've been waiting for this video ever since Wendell first started talking about/with the Liqid people. Glad it's finally here!

  • @totallyuneekname
    @totallyuneekname Před 16 dny +110

    Can't wait for the Linus Tech Tips lab team to announce their use of Liqid in two years

    • @mritunjaymusale
      @mritunjaymusale Před 16 dny +13

      I mentioned this idea in his comments when Wendell was doing interviews with the liqid guys, but Linus being the dictator he is in his comments has banned me from commenting.

    • @krishal99
      @krishal99 Před 16 dny +20

      @@mritunjaymusale sure buddy

    • @janskala22
      @janskala22 Před 16 dny +10

      LTT does already use Liqid, just not this product. You can see in one of their videos that they have a 2U Liqid server in their main rack. It seemed like a rebranded Dell server, but still from Liqid.

    • @totallyuneekname
      @totallyuneekname Před 16 dny

      Ah TIL, thanks for the info @janskala22

    • @tim3172
      @tim3172 Před 16 dny

      Can't wait for you to type "ltt liqid" into YouTube search and realize LTT has videos from the last 3 years showcasing Liqid products.

  • @pyroslev
    @pyroslev Před 16 dny +7

    This is wickedly cool. Practical or usable for me? Naw, not really. But seeing that messy, lived-in workshop is as satisfying as the tech.

  • @TheFlatronify
    @TheFlatronify Před 16 dny +3

    This would come in so handy in my small three node Proxmox cluster, assigning GPUs to different servers / VMs when necessary. The image would be streamed using Sunshine / Moonlight (similar to Parsec). I wish there was a 2 PCIe Slot consumer tier available for a price that enthusiasts would be willing to spend!

    • @jjaymick6265
      @jjaymick6265 Před 16 dny

      I use this every day in my lab running Prox / XCP-NG / KVM. The Linux hot-plug PCIe drivers work like a champ to move GPUs in and out of hypervisors. If only virtio had reasonable support for hot-plugging PCIe into the VM, so I would not have to restart the VM every time I wanted to change GPUs to run a new test. Maybe someday.
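
For reference, the hypervisor-side hot-plug the comment above leans on is the kernel's standard sysfs interface; a minimal sketch (the sysfs paths are the stock Linux interface, the PCI address is an example, and it must run as root):

```python
# Detach a GPU before recomposing the fabric, then rescan so the newly
# composed device shows up and drivers bind.
from pathlib import Path

def pci_remove(bdf: str) -> None:
    """Hot-remove a PCIe device, e.g. '0000:41:00.0' (example address)."""
    Path(f"/sys/bus/pci/devices/{bdf}/remove").write_text("1")

def pci_rescan() -> None:
    """Ask the kernel to re-walk the PCIe bus after the fabric change."""
    Path("/sys/bus/pci/rescan").write_text("1")

# Typical flow on the hypervisor:
#   pci_remove("0000:41:00.0")   # hand the old GPU back to the fabric
#   ...compose a different GPU to this host via the Liqid software...
#   pci_rescan()                 # new GPU appears on the bus
```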

  • @ralmslb
    @ralmslb Před 16 dny +7

    I would love to see performance tests comparing the impact of the cable length, etc.
    Essentially, the PCIe speed impact, not only in terms of latency but also throughput: the native solution vs. the Liqid fabric.
    I have a hard time believing that this solution has zero downsides, so I wouldn't be surprised if the same GPU performs worse over the Liqid fabric.

    • @MiG82au
      @MiG82au Před 16 dny

      Cable length is a red herring. An 8 m electrical cable only takes ~38 ns to pass a signal and the redriver (not retimer) adds sub 1 ns, while normal PCIe whole link latency is on the order of hundreds of ns. However, the switching of the Liqid fabric will add latency as will redrivers.

    • @paulblair898
      @paulblair898 Před 15 dny

      There are most definitely downsides. Some PCIe device drivers will crash with the introduction of additional latency, because fundamental assumptions were made when writing them that don't handle the >100 ns latency the Liqid switch adds well. ~150 ns of additional latency is not trivial compared to the base latency of the device.
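
Putting the numbers from this thread together as back-of-the-envelope arithmetic; the figures are the ones quoted above plus the 100 ns-per-hop, HBA → switch → enclosure path described elsewhere in the comments, not measurements:

```python
# Cable propagation is tens of nanoseconds, each switch hop roughly 100 ns,
# so the fabric adds a few hundred nanoseconds on top of a native link.
CABLE_NS_PER_M = 38 / 8      # ~4.8 ns/m, from "8 m takes ~38 ns"
SWITCH_HOP_NS  = 100         # per-hop latency quoted for the fabric switches
REDRIVER_NS    = 1           # redrivers add well under a nanosecond each

def one_way_added_ns(cable_m: float, hops: int, redrivers: int) -> float:
    return cable_m * CABLE_NS_PER_M + hops * SWITCH_HOP_NS + redrivers * REDRIVER_NS

# Example: 5 m of cable, HBA -> switch -> enclosure (3 hops), 2 redrivers
extra = one_way_added_ns(5, hops=3, redrivers=2)
print(f"~{extra:.0f} ns one way, ~{2 * extra:.0f} ns round trip")
```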

  • @cem_kaya
    @cem_kaya Před 16 dny +8

    this might be very useful with CXL if it lives up to expectations.

    • @jjaymick6265
      @jjaymick6265 Před 16 dny +2

      Liqid already has demos of CXL memory pooling with their fabric. I would not expect it to reach production before mid 2025.

    • @hugevibez
      @hugevibez Před 16 dny +2

      CXL already goes far beyond this as it has cache coherency, so you can pool devices together much more easily. I see it as an evolution of this technology (and the NVSwitch stuff), which CXL 3.0 and beyond expands on even further with the extended fabric capabilities and PCIe Gen 6 speeds. I think that's where the holdup has been, since it's a relatively new technology and those extended capabilities are significant for hyperscaler adoption, which is what drives much of the industry, and especially the interconnect subsector, in the first place.

  • @MatMarrash
    @MatMarrash Před 9 dny

    If there's something you can cram into PCIe lanes, you bet Wendell's going to try it and then make an amazing video about it!

  • @scotthep
    @scotthep Před 16 dny

    For some reason this is one of the coolest things I've seen in a while.

  • @andypetrow4228
    @andypetrow4228 Před 16 dny

    I came for the magic.. I stayed for the soothing painting above the techbench

  • @shinythings7
    @shinythings7 Před 16 dny +1

    I was looking at the vfio stuff to have everything in a different part of the house. Now this seems like just as good of a solution. Having the larger heat generating components in a single box and having the mobo/cpu/os where you are would be a nice touch. Would be great for SFF mini pc's as well to REALLY lower your footprint on a desk or in an office/room.

  • @Ben79k
    @Ben79k Před 16 dny

    I had no idea something like this was possible. Very cool. It's not the subject of the video, but that iMac you were demoing on, is it rigged up to use as just a monitor? Or is it actually running? It looks funny with the glass removed.

  • @brandonhi3667
    @brandonhi3667 Před 16 dny +1

    fantastic video!

  • @DaxHamel
    @DaxHamel Před 16 dny

    Thanks Wendell. I'd like to see a video about network booting and imaging.

  • @reptilianaliengamedev3612

    Hey, if you have to record in that noisy environment again, you can leave about 15 or 30 seconds of silence at the beginning or end of the video to use as a noise profile. In Audacity, use the noise reduction effect to generate the noise profile, then run it on the whole audio track. It should sound about 10x better, with nearly all the noise gone.

    • @MartinRudat
      @MartinRudat Před 6 hodinami

      I'm surprised Wendell isn't using a pair of communication earmuffs; hearing protection coupled with a boom mic (or a bunch of mics and post-processing) possibly being fed directly to the camera.
      As far as I know a good, comfortable set of earmuffs, especially something like the Sensear brand (which allow you to have a casual conversation next to a diesel engine at full throttle) are more or less required equipment for someone that works in a data center all day.

  • @michaelsdailylife8563
    @michaelsdailylife8563 Před 16 dny

    This is really interesting and cool tech!

  • @ShankayLoveLadyL
    @ShankayLoveLadyL Před 13 dny

    Wow, this is truly amazing, impressive, I dunno... I usually expect smart stuff from this channel out of my list of tech channels, but this time what Wendell has done is in a different league entirely.
    I bet Linus was thinking about something similar for his tech lab, but now there is someone he could hire for his automated mass-testing project.

  • @annebokma4637
    @annebokma4637 Před 15 dny

    I don't want an expensive box in my basement. In my attic high and DRY. 😂

  • @chrismurphy2769
    @chrismurphy2769 Před 16 dny

    I've absolutely been wanting and dreaming of something like this

  • @christianhorn1999
    @christianhorn1999 Před 16 dny

    Cool. Is that the same thing notebooks do when they have a switchable iGPU and dedicated GPU?

  • @AzNcRzY85
    @AzNcRzY85 Před 16 dny +1

    Wendell, does it fit in the Minisforum MS-01?
    It would be a massive plus if it fits and works.
    The RTX A2000 12GB is already good, but this is a complete game changer for a lot of systems, mini or full desktop.

  • @misimik
    @misimik Před 16 dny +1

    Guys, can you help me gather Wendell's most used phrases? Like
    - coloring outside the lines
    - this is not what you would normally do
    - this is MADNESS
    - ...

    • @tim3172
      @tim3172 Před 16 dny

      He uses "RoCkInG" 19 times every video like he's a tween that needs extra time to take tests.

  • @solidreactor
    @solidreactor Před 16 dny +1

    I have been thinking about this use case for a year now, for UE5 development, testing and validation. Recently I have also thought about using image recognition with ML or "standard" computer vision (or a mix) for automatic validation.
    I can see this being valuable both for developers and for tech media benchmarking. I just need to allocate time to dive into this... or get it served "for free" by Wendell.
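
A minimal sketch of the automatic-validation idea, assuming OpenCV and NumPy are available; a real pipeline might swap the crude absdiff metric for SSIM or an ML classifier, and the file paths are examples:

```python
# Diff a captured frame against a known-good reference and flag a regression
# when the mean per-pixel error exceeds a threshold.
import cv2
import numpy as np

def frame_matches(reference_path: str, capture_path: str, max_mean_error: float = 2.0) -> bool:
    ref = cv2.imread(reference_path)
    cap = cv2.imread(capture_path)
    if ref is None or cap is None or ref.shape != cap.shape:
        return False                                # missing frame or resolution mismatch
    diff = cv2.absdiff(ref, cap)                    # per-pixel absolute difference
    return float(np.mean(diff)) <= max_mean_error   # small mean error = visually identical

# e.g. frame_matches("golden/scene01.png", "run_gpu_a/scene01.png")
```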

  • @mritunjaymusale
    @mritunjaymusale Před 16 dny

    I really wanted to do something similar on my uni's deep learning server; we had 2 GPU-based systems that each had multiple GPUs. Using this, we could have pooled those GPUs together into one 4-GPU system in one click.

  • @AGEAnimations
    @AGEAnimations Před 16 dny

    Could this use all the GPUs for 3D Rendering in Octane or Redshift 3D for a single PC or is it just one GPU at a time? I know Wendell mentions SLI briefly but to have a GPU render machine connected to a small desktop pc would be ideal for a home server setup.

  • @stamy
    @stamy Před 16 dny

    Wow, very interesting!
    Can you control power on those PCIe devices? I mean, let's say only one GPU is powered on at a time, the one that is currently being used remotely.
    Also, how do you send the video signal back to the monitor? Are you using an extra-long DisplayPort cable, or a fiber optic cable of some sort?
    Thank you.
    PS: What is the approximate price of such a piece of hardware?

  • @shodan6401
    @shodan6401 Před 12 dny

    Man, I'm not an IT guy. I know next to nothing. But I love this sht...

  • @Edsdrafts
    @Edsdrafts Před 16 dny

    How about power usage when you have all these GPUs running? Do the unused ones idle at reasonable wattage / temperature? It's also hard doing game testing due to thermals, as you are using a different enclosure from a standard PC, etc. There must be noticeable performance loss too.

    • @jjaymick6265
      @jjaymick6265 Před 16 dny

      I can't speak for client GPUs but enterprise GPUs have power saving features embedded into the cards. For instance an A100 at idle pulls around 50 watts. At full tilt it can pull close to 300'ish watts. The enclosure itself pulls about 80 watts empty (no GPUs). As far as performance loss. Based on my testing of AI/ML workloads on GPUs inside Liqid fabrics compared with published MLPerf results I would say performance loss is very minimal.
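
Quick power-budget arithmetic using the figures quoted above (they are the commenter's round numbers, not datasheet values):

```python
# Rough chassis power estimate: empty enclosure plus idle and busy GPUs.
ENCLOSURE_W = 80     # empty enclosure, as quoted above
GPU_IDLE_W  = 50     # A100 at idle, as quoted above
GPU_LOAD_W  = 300    # A100 at full tilt, as quoted above

def chassis_watts(total_gpus: int, busy_gpus: int) -> int:
    idle = total_gpus - busy_gpus
    return ENCLOSURE_W + busy_gpus * GPU_LOAD_W + idle * GPU_IDLE_W

print(chassis_watts(total_gpus=4, busy_gpus=1))   # 80 + 300 + 3*50 = 530 W
```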

  • @Ironic-Social-Phobia
    @Ironic-Social-Phobia Před 16 dny +1

    Now we know what really happened to Ryan this week, Wendell was practicing his magic trick!

  • @abavariannormiepleb9470

    Thought of another question: Can the box that houses all the PCIe AICs hard-power off/on the individual PCIe slots via the management software in case there is a crashed state? Or do you have to do something physically at the box?

    • @jjaymick6265
      @jjaymick6265 Před 16 dny +1

      There are no slot power control features... There is, however, a bus reset feature of the Liqid fabric to ensure that devices are reset and in a good state prior to being presented to a host. So if you have a device in a bad state you can simply remove it from the host and add it back in, and it will get a bus reset in the process. Per-slot power control is a feature being looked at in future enclosures.

    • @abavariannormiepleb9470
      @abavariannormiepleb9470 Před 16 dny

      Again, thanks for that clarification. Would definitely appreciate the per slot power on/off control, would be helpful for diagnosing maybe defective PCIe cards and of course also reduce power consumption with unused cards not just idling around.

  • @4megii
    @4megii Před 16 dny

    What sort of cable does this use? Could this be run over fibre instead?
    Also can you have a singular GPU Box with a few GPUs and then use those GPUs interchangeably with different hosts.
    My thought process is. GPU box in the basement with multiple PCs connected over a fibre cables so I can just switch GPU on any device connected to the fibre network.

    • @jjaymick6265
      @jjaymick6265 Před 16 dny

      The cable is an SFF-8644 (mini-SAS) cable using copper as the medium. There are companies that use optical media, but they are fairly pricey.

  • @ko260
    @ko260 Před 16 dny

    So instead of a disk shelf I could have one of those racks and fill it with HBAs instead of GPUs, or replace them all with M.2 cards. Would that work?!?!!? @Level1Techs

  • @dmytrokyrychuk7049
    @dmytrokyrychuk7049 Před 15 dny

    Can this work in an internet cafe or would the latency be too big for competitive gaming?

  • @dangerwr
    @dangerwr Před 15 dny

    I could see Steve and team at GamersNexus utilizing this for retesting older cards when new GPUs come out.

  • @jayprosser7349
    @jayprosser7349 Před 16 dny

    The Wizard at Techpowerup must be aware of this.

  • @stamy
    @stamy Před 16 dny

    Let's say you have a WS motherboard with 4 expansion slots PCIe x16.
    Can you dynamically activate/deactivate these PCIe slots in software so that the CPU can only see one at a time? Each of the slots is populated with a GPU, of course. This would then need to be combined with a KVM to switch the video output to the monitor.

  • @hugevibez
    @hugevibez Před 16 dny

    The real question is, does this support Looking Glass so you can do baremetal-to-baremetal video buffer sharing between hosts? I know it should technically be possible since PCIe devices on the same fabric/chassis can talk to one another. Yes, my mind goes to some wild places, I've also had dreams of Looking Glass over RDMA. Glad you've finally got one of these in your lab. Anxiously awaiting the CXL revolution which I might be able to afford in like a decade.

  • @cal2127
    @cal2127 Před 16 dny +2

    whats the price?

  • @PsiQ
    @PsiQ Před 16 dny

    i might have missed it.. But would/will/could there be an option to shove around gpus (or AI hardware) on a server running multiple VMs to the VMs that currently need it, and "unplug" it from idle ones ? .. Well, ok, you would need to run multiple uplinks at some point i guess.. Or have all gpus directly slotted in your vm server.

    • @jjaymick6265
      @jjaymick6265 Před 16 dny +1

      The ability of Liqid to expose a single or multiple PCIe devices to a single or multiple hypervisors is 100% a reality. As long as you are using a Linux-based hypervisor, hot-plug will just work. You can then expose those physical devices or vGPUs (if you can afford the license) to one or many virtual machines. The only gotcha is that to change GPU types in the VM you will have to power-cycle the VM, because I have not found any hypervisor (VMware / Prox / XCP-NG / KVM-qemu) that supports hot-plugging PCIe into a VM.

    • @PsiQ
      @PsiQ Před 15 dny

      @@jjaymick6265 thanks ;-) you seem to be going round answering questions here 🙂

    • @jjaymick6265
      @jjaymick6265 Před 12 dny

      @@PsiQ I have 16 servers (Dell MX blades and various other 2U servers) all attached to a Liqid fabric in my lab, with various GPUs/NICs/FPGAs/NVMe that I have been working with for the past 3 years, so I have a fair bit of experience with what it is capable of. Once you stitch it together with some CI/CD stuff via Digital Rebar or Ansible it becomes super powerful for testing and automation.

  • @Jimster481
    @Jimster481 Před 12 dny

    Wow this is so amazing, I bet the pricing is far out of the range of a small office like mine though

  • @abavariannormiepleb9470
    @abavariannormiepleb9470 Před 19 dny +2

    …could you hook up a second Liqid adapter in the same client system to a Gen5 x4 M.2 slot to not interfere with the 16 dGPU lanes?

    • @jjaymick6265
      @jjaymick6265 Před 16 dny +2

      Liqid does support having multiple HBAs in a single host. Each Fabric device provisioned on the fabric is directly provisioned to a specific HBA so your thinking of isolating disk IO from GPU IO would work.

    • @abavariannormiepleb9470
      @abavariannormiepleb9470 Před 16 dny +1

      Thanks for that clarification.

  • @chrisamon5762
    @chrisamon5762 Před 16 dny

    I might actually be able to use all my pc addiction parts now!!!!!

  • @smiththewright
    @smiththewright Před 16 dny

    Very cool!

  • @ryanw.9828
    @ryanw.9828 Před 16 dny +1

    Hardware unboxed! Steve!!!!

  • @brianmccullough4578
    @brianmccullough4578 Před 16 dny

    Woooooo! PCI-E fabrics baby!!!

  • @Jdmorris143
    @Jdmorris143 Před 16 dny

    Magic Wendell? Now I cannot get that image out of my head.

  • @sebmendez8248
    @sebmendez8248 Před 16 dny

    This could genuinely be useful for massive engineering firms. Most engineering firms nowadays use 3D modelling, so having a server-side GPU setup could mean every computer on site has access to a 4090 for model rendering and creation without buying and maintaining 100+ GPUs.

  • @_neon_light_
    @_neon_light_ Před 16 dny

    From where can one buy this hardware? I can't find any info on Liqid's website. Google didn't help either.

  • @shodan6401
    @shodan6401 Před 12 dny

    I know that GPU riser cables are common, but realistically, how much latency is introduced by having the GPU at such a physical distance compared to being directly in the PCIe slot on the board?

  • @LeminskiTankscor
    @LeminskiTankscor Před 16 dny

    Oh my. This is something special.

  • @_GntlStone_
    @_GntlStone_ Před 16 dny +27

    Looking forward to a L1T + GN collaboration video on building this into a working gaming test setup (Pretty Please ☺️)

    • @Mervinion
      @Mervinion Před 16 dny +6

      Throw Hardware Unboxed into the mix. I think both Steves would love it. If only you could do the same with CPUs...

  • @spicyandoriginal280
    @spicyandoriginal280 Před 16 dny

    Does the card support 2 gpus at 8x each?

  • @thepro08
    @thepro08 Před 16 dny

    so you saying i can do this with my 15 gbs internet, and connect my monitor or pc to a server game and ps5??? just have to pay 20 per month right like netflix?

  • @kirksteinklauber260
    @kirksteinklauber260 Před 16 dny +2

    How much does it cost??

  • @talon262
    @talon262 Před 16 dny

    My only question is how much latency does this add, even in a short run in the same rack?

  • @BestSpatula
    @BestSpatula Před 16 dny

    With SR-IOV, Could I attach different VFs of the same PCIe card to separate computers?

    • @jjaymick6265
      @jjaymick6265 Před 16 dny +1

      Liqid does support SRIOV, but the VFs are not composable. The way SRIOV is leveraged today is a single card that supports SRIOV is exposed to a host and the VFs and SRIOV bar space is then registered by that host. That host then can present each of those VFs to a VM just as if the card was physically installed into the host.
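
For context, enabling VFs on the host the card is composed to goes through the kernel's generic sysfs knobs; a minimal sketch (the sysfs paths are the stock Linux interface, the PCI address is an example, and whether a given GPU exposes SR-IOV VFs at all depends on the vendor and licensing):

```python
# Enable N virtual functions on a physical PCIe device; the VFs then appear
# as new PCI functions that can be passed to VMs.
from pathlib import Path

def enable_vfs(bdf: str, count: int) -> None:
    dev = Path(f"/sys/bus/pci/devices/{bdf}")
    total = int((dev / "sriov_totalvfs").read_text())   # how many VFs the card supports
    if count > total:
        raise ValueError(f"{bdf} supports at most {total} VFs")
    (dev / "sriov_numvfs").write_text("0")               # count must be reset before changing it
    (dev / "sriov_numvfs").write_text(str(count))        # VFs show up on the bus after this

# enable_vfs("0000:41:00.0", 4)   # example address; then pass each VF to a VM as usual
```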

  • @rojovision
    @rojovision Před 16 dny

    What are the performance implications in a gaming scenario? I assume there must be some amount of drop, but I'd like to know how significant it is.

    • @Mpdarkguy
      @Mpdarkguy Před 16 dny

      A few ms of latency I reckon

  • @dgo4490
    @dgo4490 Před 16 dny

    How's the latency? Every PHY jump induces latency, so considering all the hardware involved, this should have at least 3 additional points of extra latency. So maybe like 4-5 times the round trip of native pcie...

    • @jjaymick6265
      @jjaymick6265 Před 16 dny

      100ns per hop. In this specific setup that would mean 3 hops between the CPU and the GPU device. 1 hop at the HBA, 1 hop at the switch, 1 hop at the enclosure. so 600 nanoseconds round trip.

  • @bluefoxtv1566
    @bluefoxtv1566 Před 16 dny

    Such a good thing for cloud computing.

  • @georgec2932
    @georgec2932 Před 16 dny

    How much worse is performance in terms of timing/latency compared to the slot on the motherboard? I wonder if it would be noticeable for gaming...

  • @immortalityIMT
    @immortalityIMT Před 14 dny

    How would you do a cluster for training an LLM: first 4x 8 GB in one system, and then a second 4x 8 GB over LAN?

  • @GameCyborgCh
    @GameCyborgCh Před 16 dny

    your test bench has an optical drive?

  • @GorditoCrunchTime
    @GorditoCrunchTime Před 16 dny

    Wendell: “you may have noticed..”
    Me: “I noticed that Apple monitor!”

  • @mohammedgoder
    @mohammedgoder Před 16 dny +2

    Is there any PCIe rack-mount chassis that can allow this to be a rack-mounted solution?

    • @jjaymick6265
      @jjaymick6265 Před 16 dny

      Typical installation is rackmount. It is all standard 19 inch gear that gets deployed in datacenters around the world.

    • @mohammedgoder
      @mohammedgoder Před 16 dny

      @@jjaymick6265 can you post a model number that you'd recommend?

    • @mohammedgoder
      @mohammedgoder Před 16 dny

      @@jjaymick6265 Is there any particular model that you'd recommend.

    • @jjaymick6265
      @jjaymick6265 Před 11 dny

      @@mohammedgoder Somehow my previous comment got removed. If you are looking for supported systems / fabric device etc the best place to check is on Liqid's website. Under resources they publish a HCL of "Liqid Tested/Approved" devices.

    • @mohammedgoder
      @mohammedgoder Před 5 dny

      I found it. Wendell mentioned it in the video.
      Liqid makes rackmount PCIe enclosures.

  • @NickByers-og9cx
    @NickByers-og9cx Před 16 dny +1

    How do I buy one of these switches, I must have one

  • @arnox4554
    @arnox4554 Před 16 dny

    Maybe I'm misunderstanding this, but wouldn't the latency between the CPU and the GPU be really bad here? Especially with the setup Wendell has in the video?

  • @fanshaw
    @fanshaw Před 14 dny

    I just want this inside my workstation - a bank of x16 slots and I get to dynamically (or even statically, with dip switches) assign PCIE lanes to each slot or to the chipset.

  • @ThatKoukiZ31
    @ThatKoukiZ31 Před 16 dny

    Ah! He admits it, he is a wizard!

  • @gollenda7852
    @gollenda7852 Před 16 dny

    Where can I get a copy of that Wallpaper?

  • @daghtus
    @daghtus Před 16 dny

    What's the extra latency?

  • @felixspyrou
    @felixspyrou Před 16 dny

    Here, take my money, this is amazing. I have a lot of computers, and with this I would be able to use my best GPU on all of them!

  • @shadowarez1337
    @shadowarez1337 Před 13 dny

    Have we cracked the code to pass an iGPU to a VM in, say, TrueNAS?

  • @dansanger5340
    @dansanger5340 Před 16 dny +1

    I didn't even know this was possible. How long can the cable run be without degrading performance?

    • @jjaymick6265
      @jjaymick6265 Před 16 dny +1

      In a datacenter setting using copper cables it is limited to 3 meters between host and switch port and 2 meters between switches and enclosures. (he did not show the switches but yes there are PCIe switches involved also) Host --> 3m --> PCIe switch --> 2m --> enclosure

    • @dansanger5340
      @dansanger5340 Před 16 dny

      @@jjaymick6265 Thanks for the info!

  • @lemmonsinmyeyes
    @lemmonsinmyeyes Před 16 dny

    This could greatly cut down on hardware for render farms in VFX. Neat

  • @Dan-Simms
    @Dan-Simms Před 16 dny

    Very cool

  • @wobblysauce
    @wobblysauce Před 16 dny

    Cool as heck it is.

  • @OsX86H3AvY
    @OsX86H3AvY Před 16 dny

    I'd like to be able to hot-plug GPUs into my running VMs as well... how nice would it be to have, say, two or three VM boxes for CPU and RAM, one SSD box, one GPU box and one NIC box, so you could swap any NIC/GPU/disk to any VM in any of those boxes in any combination... that'd be sweet... I definitely don't need it, but that just makes me want it more.

    • @jjaymick6265
      @jjaymick6265 Před 12 dny

      Over the last couple days I have been working on this exact use case. In most environments this simply is not possible, however in KVM (libvirt) I have discovered the capability to hot-attach a PCIe device to a running VM like this... virsh attach-device VM-1 --file gpu1.xml --current . So with Liqid I can hot attach a GPU to the hypervisor and then hot attach said GPU all the way to the VM. The only thing I have not figured out how to get done is to get the BAR address space for GPU pre-allocated in the VM so the device is actually functional without a VM reboot. As of today the GPU will show up in the VM but drivers cannot bind to it because there is no bar space allocated for it so in lspci the device has a bunch of unregistered memory bars and drivers don't load. Once bar space can be pre-allocated in the VM I have confidence this will work. Baby steps.
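
A rough sketch of that virsh flow: build the libvirt <hostdev> XML for the GPU's PCI address and hot-attach it to the running VM. The VM name and PCI address are examples, and, as noted above, the guest still needs BAR space pre-allocated before the device is actually usable without a reboot:

```python
# Generate a libvirt <hostdev> snippet for a PCI address and attach it to a
# running domain with the same virsh command used in the comment above.
import subprocess, tempfile

HOSTDEV_XML = """<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x{dom}' bus='0x{bus}' slot='0x{slot}' function='0x{func}'/>
  </source>
</hostdev>
"""

def hot_attach_gpu(vm: str, bdf: str) -> None:
    dom, bus, slot_func = bdf.split(":")        # "0000:41:00.0" -> "0000", "41", "00.0"
    slot, func = slot_func.split(".")
    xml = HOSTDEV_XML.format(dom=dom, bus=bus, slot=slot, func=func)
    with tempfile.NamedTemporaryFile("w", suffix=".xml", delete=False) as f:
        f.write(xml)
    # --current applies the change to the running VM without persisting it.
    subprocess.run(["virsh", "attach-device", vm, "--file", f.name, "--current"], check=True)

# Example (hypothetical VM name and PCI address):
# hot_attach_gpu("VM-1", "0000:41:00.0")
```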

  • @AdmV0rl0n
    @AdmV0rl0n Před 16 dny

    I like some of this.
    But let me look at the far end: outside of Parsec or similar, how am I re-routing the video signal or playback? Perhaps there needs to be a wink-wink nudge-nudge Level One KVM solution. But outside of this, walking down to the basement to re-plumb the video cables old-school to the new or changed host kinda degrades the magic of the idea...

  • @philosoaper
    @philosoaper Před 16 dny

    Fun.. not sure it would be ideal for competitive gaming exactly.. but very very cool

  • @animalfort3183
    @animalfort3183 Před 16 dny

    I don't know how to thank you enough without being weird man....XOXO

  • @Orochimarufan1900
    @Orochimarufan1900 Před 5 dny

    This looks like it might also eventually enable migration of VMs with PCIe passthrough.

  • @Dr_b_
    @Dr_b_ Před 16 dny +1

    Do we want to know what this costs?

  • @japanskakaratemuva5309

    Nice ❤

  • @AzimsLives
    @AzimsLives Před 16 dny

    This is cool and all, but what's the CPU overhead?

    • @Level1Techs
      @Level1Techs  Před 16 dny +7

      Zero? The CPU doesn't see the PCIe bus shenanigans. It's outside any sort of virtualization. Literally just electrical PCIe bus routing.

    • @AzimsLives
      @AzimsLives Před 16 dny +1

      okay, this is game changing

    • @jjaymick6265
      @jjaymick6265 Před 16 dny +1

      Functionally Liqid software is not in the data path. The "overhead" if you wanna call it that would be the latency induced when installing a device behind a PCIe switch instead of directly connecting it to a CPU root bridge. That latency is advertised as 100ns per hop switch latency. To date... I have yet to have anyone reliably be able to show the measured difference.

  • @vdis
    @vdis Před 16 dny

    What's your monthly power bill?!

  • @MP_7
    @MP_7 Před 15 dny

    Hold on, is that an old iMac you're using for a screen?

  • @bignicnrg3856
    @bignicnrg3856 Před 16 dny

    Sounds like GN is getting a new setup

  • @Fenestron
    @Fenestron Před 16 dny

    Oh wow, that is neat, how does it work?
    Let's go to the basement and let me show you...
    nice try Level1, not gunna get me that easily.
    Jokes aside, this is pretty awesome tech.

  • @Gooberpatrol66
    @Gooberpatrol66 Před 16 dny

    This would be great for KVM. Plug USB cards into PCIE, and send your peripherals to all your computers.

  • @NdxtremePro
    @NdxtremePro Před 16 dny

    This seems tailor-made for all those single-slot consumer boards that get sold. It would make them much more useful.
    I can imagine it could someday reduce the cost of a recording studio with all of its specialized audio cards, if they could spend a fraction of the cost on the motherboard and share the cards across multiple pieces of equipment.
    I could see cryptominers using the best cards depending on the current pricing.
    I could see switching GPUs depending on which gives the best gaming performance.
    How about retro machines using older PCIe cards with VMs?
    I imagine the bandwidth of older GPUs wouldn't saturate the bus, so you could connect them to the system and pass them through to individual VMs?
    Or some PCIe 1.0 cards in CrossFire and SLI with a one-slot motherboard.
    Way overkill, but seriously cool tech.
    Speaking of that, you could get some PCIe to PCI-X audio equipment, pass that through to some Windows XP VMs, and get that latency goodness and unrestricted access audio engineers loved, in a modern one-slot solution.
    Enterprise side, I could see creating a massive networking VM setup with one of these cards in each of the main systems' slots, attached to a separate PCIe box, each set up with those multifunction cards. A custom bespoke network switch.

  • @HumblyNeil
    @HumblyNeil Před 16 dny

    The iMac bezel blew me away...