NVIDIA AI GPU Servers: PCIe vs. SXM | FiberMall

  • Published: March 9, 2024
  • NVIDIA’s GPU interconnect technology comes in two form factors, PCIe and SXM, designed for different performance and application needs. (www.fibermall.com/blog/nvidia...)
    PCIe (Peripheral Component Interconnect Express): a widely used, general-purpose interface that lets GPU cards be installed in standard PCIe slots on the motherboard, giving broad compatibility across systems. PCIe GPU cards communicate with the CPU and with other GPUs in the same server through the PCIe bus, and with devices on other server nodes through network cards. To speed up GPU-to-GPU traffic beyond what PCIe offers, an NVLink bridge can be added, although a bridge typically links only two GPU cards (the first sketch below shows how to query which GPU pairs can talk directly).
    SXM (NVIDIA’s proprietary form factor): SXM is designed specifically for high-performance GPU interconnection. It uses a dedicated socket and protocol and offers higher transfer speeds and better native NVLink support than PCIe cards. SXM GPUs are used in NVIDIA’s DGX and HGX systems, where NVSwitch chips integrated on the baseboard link up to 8 GPUs without relying on PCIe, achieving very high bandwidth: the full-spec (non-export-limited) A100 and H100 reach 600 GB/s and 900 GB/s of NVLink bandwidth per GPU, respectively (the second sketch below times a peer-to-peer copy to estimate the link rate in practice).
  • Science & Technology
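The original post carries no code, but a minimal sketch can make the PCIe-vs-NVLink distinction concrete. The CUDA program below (the file name p2p_topology.cu and the plain nvcc build line are my assumptions, not from the source) queries every GPU pair in the server for peer-to-peer support and for CUDA's relative performance rank; NVLink-connected pairs typically report P2P support with a better rank than PCIe-only pairs. Error checking is elided for brevity.

// p2p_topology.cu -- a sketch, not from the FiberMall post.
// Build (assumed): nvcc -o p2p_topology p2p_topology.cu
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    cudaGetDeviceCount(&n);
    printf("GPUs found: %d\n", n);

    for (int src = 0; src < n; ++src) {
        for (int dst = 0; dst < n; ++dst) {
            if (src == dst) continue;

            int canAccess = 0;
            // Can 'src' read/write 'dst' memory directly, without staging
            // through host RAM? True for NVLink pairs and many PCIe setups.
            cudaDeviceCanAccessPeer(&canAccess, src, dst);

            int perfRank = 0;
            // Relative link quality between the two devices; a lower rank
            // means a faster expected path (e.g. NVLink beats plain PCIe).
            cudaDeviceGetP2PAttribute(&perfRank, cudaDevP2PAttrPerformanceRank,
                                      src, dst);

            printf("GPU %d -> GPU %d : P2P %s (perf rank %d)\n",
                   src, dst, canAccess ? "yes" : "no", perfRank);
        }
    }
    return 0;
}

On an 8-GPU SXM system every pair should report P2P support through NVSwitch; on a PCIe workstation, typically only bridged pairs or GPUs under the same root complex do.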

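In the same spirit, a second sketch estimates the actual link bandwidth by timing a direct GPU-to-GPU copy. The file name, the choice of devices 0 and 1, and the 1 GiB payload are illustrative assumptions; on a PCIe Gen4 x16 system the result lands in the tens of GB/s, far below the 600/900 GB/s NVLink figures quoted above for SXM parts.

// p2p_bandwidth.cu -- a sketch, not from the FiberMall post.
// Build (assumed): nvcc -o p2p_bandwidth p2p_bandwidth.cu
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 1ull << 30;  // 1 GiB payload (arbitrary choice)
    void *src = nullptr, *dst = nullptr;

    cudaSetDevice(0);
    cudaMalloc(&src, bytes);
    cudaSetDevice(1);
    cudaMalloc(&dst, bytes);

    // Let GPU 0 address GPU 1's memory directly (skips host staging).
    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    cudaMemcpyPeerAsync(dst, 1, src, 0, bytes);  // GPU 0 memory -> GPU 1 memory
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("Copied 1 GiB in %.2f ms -> %.1f GB/s\n", ms, bytes / (ms * 1e6));

    cudaFree(src);
    cudaSetDevice(1);
    cudaFree(dst);
    return 0;
}

A single timed copy like this somewhat underestimates steady-state throughput; repeating the copy and averaging, as dedicated bandwidth benchmarks do, gives a steadier number.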