
Faster Interconnects and Switches to Help Relieve Data Bottlenecks 

Fifth-generation NVLink Switch (Image courtesy Nvidia)

Nvidia’s new Blackwell architecture may have stolen the show this week at the GPU Technology Conference in San Jose, California. But an emerging bottleneck at the network layer threatens to make bigger and brawnier processors moot for AI, HPC, and big data analytics workloads. The good news is Nvidia is addressing the bottleneck with new interconnects and switches, including the NVLink 5.0 system backbone as well as 800 Gb/s InfiniBand and Ethernet switches for storage connections.

Nvidia moved the ball forward on a system level with the latest iteration of its speedy NVLink technology. The fifth generation of the GPU-to-GPU and GPU-to-CPU bus moves data between processors at 100 gigabytes per second per link. With 18 NVLink connections per GPU, a Blackwell GPU will sport a total of 1.8 terabytes per second of bandwidth to other GPUs or a Grace CPU, which is twice the bandwidth of NVLink 4.0 and 14x the bandwidth of an industry-standard PCIe Gen5 bus (NVLink is based on Nvidia’s high-speed signaling interconnect, dubbed NVHS).
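
As a back-of-the-envelope check on those figures, here is a minimal sketch of the arithmetic; the PCIe Gen5 x16 baseline of roughly 128 GB/s bidirectional is our assumption for the comparison, not a number stated above.

```python
# Back-of-the-envelope check on the NVLink 5.0 figures quoted above.
# Assumption: the PCIe Gen5 x16 baseline (~128 GB/s bidirectional) is ours.

links_per_gpu = 18      # NVLink 5.0 links per Blackwell GPU
gb_per_link = 100       # GB/s per link

per_gpu_bandwidth = links_per_gpu * gb_per_link   # GB/s
pcie_gen5_x16 = 128                               # GB/s, ~64 GB/s each way (assumed)

print(f"Per-GPU NVLink bandwidth: {per_gpu_bandwidth / 1000:.1f} TB/s")    # 1.8 TB/s
print(f"vs. PCIe Gen5 x16: {per_gpu_bandwidth / pcie_gen5_x16:.0f}x")      # ~14x
```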

Nvidia is using NVLink 5.0 as a building block for truly massive GPU supercomputers built around its GB200 NVL72 frames. Each tray of the NVL72 is equipped with two GB200 Grace Blackwell Superchips, each of which sports one Grace CPU and two Blackwell GPUs. A fully loaded NVL72 frame will feature 36 Grace CPUs and 72 Blackwell GPUs occupying two 48-U racks (there’s also an NVL36 configuration with half the number of CPUs and GPUs in a single rack). Stack enough of these NVL72 frames together and you have yourself a DGX SuperPOD.
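
Those rack-level totals follow directly from the tray layout; the short sketch below walks through the multiplication, with the 18-tray count inferred from the totals rather than stated above.

```python
# Derive the NVL72 totals from the per-tray layout described above.
# Assumption: 18 compute trays per NVL72 frame (inferred from 36 superchips / 2 per tray).

superchips_per_tray = 2
cpus_per_superchip = 1    # one Grace CPU per GB200 Superchip
gpus_per_superchip = 2    # two Blackwell GPUs per GB200 Superchip

trays = 18
superchips = trays * superchips_per_tray                    # 36 GB200 Superchips
print(superchips * cpus_per_superchip, "Grace CPUs")        # 36
print(superchips * gpus_per_superchip, "Blackwell GPUs")    # 72
```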

A fifth-generation NVLink Interconnect (Image courtesy Nvidia)

All told, it will take nine NVLink switches to connect all the Grace Blackwell Superchips in the liquid-cooled NVL72 frame, according to an Nvidia blog post published today. “The Nvidia GB200 NVL72 introduces fifth-generation NVLink, which connects up to 576 GPUs in a single NVLink domain with over 1 PB/s total bandwidth and 240 TB of fast memory,” the Nvidia authors write.
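
That 1 PB/s figure is consistent with simply summing the per-GPU NVLink bandwidth across the domain; the sketch below is our arithmetic reading of the claim, not Nvidia’s stated methodology.

```python
# Sanity-check the 576-GPU NVLink domain figures quoted from Nvidia's blog post.
# Assumption: "total bandwidth" means the sum of per-GPU NVLink bandwidth (our reading).

gpus_in_domain = 576
per_gpu_tb_s = 1.8                       # TB/s per Blackwell GPU (NVLink 5.0)

total_pb_s = gpus_in_domain * per_gpu_tb_s / 1000
frames = gpus_in_domain / 72             # NVL72 frames needed to reach 576 GPUs

print(f"Aggregate NVLink bandwidth: ~{total_pb_s:.2f} PB/s")    # ~1.04 PB/s
print(f"NVL72 frames in the domain: {frames:.0f}")              # 8
```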

Nvidia CEO Jensen Huang marveled over the speed of the interconnects during his GTC keynote Monday. “We can have every single GPU talk to every other GPU at full speed at the same time. That’s insane,” Huang said. “This is an exaflop AI system in one single rack.”

Nvidia also released new NVLink switches to connect multiple NVL72 frames into a single NVLink domain for training large language models (LLMs) and executing other GPU-heavy workloads. These NVLink switches, which utilize the Mellanox-developed Scalable Hierarchical Aggregation and Reduction Protocol (SHARP) to provide optimization and acceleration, enable 130TB/s of GPU bandwidth each, the company says.

All that network and computational bandwidth will go to good use training LLMs. Because the latest LLMs reach into the trillions of parameters, they require massive amounts of compute and memory bandwidth to train. Multiple NVL72 systems are required to train one of these massive LLMs. According to Huang, the same 1.8-trillion-parameter LLM that took 8,000 Hopper GPUs 90 days to train could be trained in the same amount of time with just 2,000 Blackwell GPUs.
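
In GPU-count terms, Huang’s comparison works out to a 4x reduction in GPUs for the same 90-day run; the quick arithmetic below makes that explicit (the GPU-day framing is ours, not his).

```python
# Huang's training comparison for a 1.8-trillion-parameter model, restated in GPU-days.
# Assumption: the GPU-day framing is ours; the raw numbers come from the keynote.

hopper_gpus, blackwell_gpus, days = 8_000, 2_000, 90

hopper_gpu_days = hopper_gpus * days         # 720,000 GPU-days
blackwell_gpu_days = blackwell_gpus * days   # 180,000 GPU-days

print(f"GPU reduction: {hopper_gpus / blackwell_gpus:.0f}x")              # 4x fewer GPUs
print(f"GPU-days: {hopper_gpu_days:,} vs {blackwell_gpu_days:,}")
```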

GB200 compute tray featuring two Grace Blackwell Superchips (Image courtesy Nvidia)

With 30x the LLM inference throughput of the previous-generation HGX H100 kit, the new GB200 NVL72 systems will be able to generate up to 116 tokens per second per GPU, the company says. But all that horsepower will also be useful for things like big data analytics, with database join times improving by a factor of 18x, Nvidia says. It’s also useful for physics-based simulations and computational fluid dynamics, which will see improvements of 13x and 22x, respectively, compared to CPU-based approaches.
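
To put the per-GPU token rate in rack-level terms, the sketch below extrapolates it across a fully populated NVL72; the linear-scaling extrapolation is our assumption, not an Nvidia claim.

```python
# Extrapolate the quoted per-GPU token rate across one NVL72 frame.
# Assumption: linear scaling across all 72 GPUs is ours, not an Nvidia claim.

tokens_per_sec_per_gpu = 116
gpus_per_nvl72 = 72

print(f"~{tokens_per_sec_per_gpu * gpus_per_nvl72:,} tokens/s per NVL72 frame")  # ~8,352
```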

In addition to speeding up the flow of data within the GPU cluster with NVLink 5.0, Nvidia unveiled new switches this week that are designed to connect the GPU clusters with massive storage arrays holding the big data for AI training, HPC simulations, or analytics workloads. The company introduced its X800 line of switches, which will deliver 800 Gb/s of throughput in both Ethernet and InfiniBand flavors.

Deliverables in the X800 line will include the new InfiniBand Quantum Q3400 switch and the NVIDIA ConnectX-8 SuperNIC. The Q3400 switch will deliver a 5x increase in bandwidth capacity and a 9x increase in in-network computing capability with Nvidia’s SHARPv4, compared to the previous-generation 400 Gb/s switch. Meanwhile, the ConnectX-8 SuperNIC leverages PCI Express (PCIe) Gen6 technology supporting up to 48 lanes across a compute fabric. Together, the switches and NICs are designed to support the training of trillion-parameter AI models.
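
As a rough host-side feasibility check (our numbers, not Nvidia’s), an 800 Gb/s port moves about 100 GB/s, while a PCIe Gen6 lane carries roughly 8 GB/s per direction, so the 48 lanes the ConnectX-8 SuperNIC supports leave plenty of headroom; the sketch below shows that arithmetic.

```python
import math

# Rough host-side feasibility check for an 800 Gb/s SuperNIC port.
# Assumption: ~8 GB/s per PCIe Gen6 lane per direction is our approximate figure.

port_gbit_s = 800
port_gbyte_s = port_gbit_s / 8            # ~100 GB/s per port

pcie_gen6_lane_gbyte_s = 8                # per direction, approximate
lanes_needed = math.ceil(port_gbyte_s / pcie_gen6_lane_gbyte_s)

print(f"Lanes needed to feed one 800 Gb/s port: {lanes_needed} of 48 supported")  # 13
```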

The Nvidia X800 line of switches, along with their associated NICs (Image courtesy Nvidia)

For non-InfiniBand shops, the company’s new Spectrum-X800 Ethernet switches and BlueField-3 SuperNICs are designed to deliver the latest in industry-standard network connectivity. When equipped with 800GbE capability, the Spectrum-X SN5600 switch (already in production for 400GbE) will boast a 4x increase in capacity over the 400GbE version, delivering 51.2 terabits per second of switch capacity, which Nvidia claims is the fastest single-ASIC switch in production. The BlueField-3 SuperNICs, meanwhile, will help keep low-latency data flowing into GPUs using remote direct memory access (RDMA) technology.
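
That 51.2 Tb/s capacity figure lines up with a straightforward port count at 800 GbE per port; the derivation below is ours, assuming capacity divides evenly across ports.

```python
# Relate the SN5600's 51.2 Tb/s switch capacity to 800 GbE port counts.
# Assumption: capacity divides evenly across ports (our derivation, not Nvidia's).

switch_capacity_tb_s = 51.2
port_speed_gb_s = 800             # gigabits per second per port

ports = switch_capacity_tb_s * 1000 / port_speed_gb_s
print(f"Equivalent 800 GbE ports: {ports:.0f}")    # 64
```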

Nvidia’s new X800 tech is slated to become available in 2025. Cloud providers Microsoft Azure, Oracle Cloud, and CoreWeave have already committed to supporting it. Storage providers including Aivres, DDN, Dell Technologies, Eviden, Hitachi Vantara, Hewlett Packard Enterprise, Lenovo, Supermicro, and VAST Data have also committed to delivering storage systems based on the X800 line, Nvidia says.
