Covering Scientific & Technical AI | Wednesday, December 11, 2024

Google’s New AI-Focused ‘A3’ Supercomputer Has 26,000 GPUs 

Cloud providers are building armies of GPUs to provide more AI firepower. At its annual Google I/O developer conference today, Google announced an AI supercomputer with 26,000 GPUs. The Compute Engine A3 supercomputer is one more proof point that Google is pouring resources into an aggressive counteroffensive in its battle with Microsoft for AI supremacy.

An Nvidia DGX H100 system baseboard with 8 H100 Hopper GPUs, shown by Nvidia CEO Jensen Huang in April

The supercomputer has about 26,000 Nvidia H100 Hopper GPUs. For reference, the world’s fastest public supercomputer, Frontier, has 37,000 AMD Instinct MI250X GPUs.

“For our largest customers, we can build A3 supercomputers up to 26,000 GPUs in a single cluster and are working to build multiple clusters in our largest regions,” a Google spokeswoman said in an email, adding that “not all of our locations will be scaled to this large size.”

The system was announced at the Google I/O conference, which is being held in Mountain View, California. The developer conference has emerged as a showcase for many of Google’s AI software and hardware capabilities. Google has accelerated its AI development after Microsoft put technologies from OpenAI into Bing search and office productivity applications.

The supercomputer is targeted at customers looking to train large language models. Google announced the accompanying A3 virtual machine instances for companies looking to use the supercomputer. Many cloud providers are now deploying H100 GPUs, and Nvidia in March launched its own DGX cloud service, which is expensive compared to renting previous-generation A100 GPUs.

Google said that the A3 supercomputer is a significant upgrade over compute resources provided by existing A2 virtual machines with Nvidia’s A100 GPUs. Google is pooling all A3 computing instances, which are spread geographically, into a single supercomputer.

“The A3 supercomputer’s scale provides up to 26 exaflops of AI performance, which considerably improves the time and costs for training large ML models,” said Google’s Roy Kim, a director, and Chris Kleban, a product manager, in a blog entry.

The exaflops performance metric, which companies use to advertise the raw performance of an AI computer, is still viewed with a grain of salt by critics. In Google’s case, the flops are counted in training-oriented TF32 Tensor Core performance, which reaches “exaflops” about 30x faster than the double-precision (FP64) floating-point math that most classic HPC applications still require.
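As a back-of-the-envelope sanity check, the 26-exaflops figure is roughly consistent with Nvidia’s published peak numbers for the H100 (SXM): about 989 teraflops of TF32 Tensor Core throughput with structured sparsity, versus roughly 34 teraflops of FP64 vector math. The sketch below uses those public spec-sheet peaks as assumptions, not Google’s actual accounting:

```python
# Rough sanity check of the "26 exaflops" claim, using Nvidia's published
# peak H100 SXM numbers -- these are assumptions, not Google's own math.
GPUS = 26_000
TF32_TFLOPS = 989   # TF32 Tensor Core peak, with structured sparsity
FP64_TFLOPS = 34    # FP64 vector peak

tf32_exaflops = GPUS * TF32_TFLOPS / 1_000_000
print(f"Aggregate TF32: {tf32_exaflops:.1f} exaflops")            # ~25.7 EF, i.e. "up to 26"
print(f"TF32 vs FP64 gap: {TF32_TFLOPS / FP64_TFLOPS:.0f}x")      # ~29x, the ~30x in the text
```

Measured by FP64 instead, the same 26,000 GPUs would total well under one exaflop, which is why HPC observers treat AI “exaflops” claims cautiously.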

The number of GPUs has become an important calling card for cloud providers to promote their AI computing services. Microsoft’s AI supercomputer in Azure, built in collaboration with OpenAI, has 285,000 CPU cores and 10,000 GPUs. Microsoft has also announced its next-generation AI supercomputer with more GPUs. Oracle’s cloud service provides access to clusters of 512 GPUs, and is working on new technology to boost the speed at which GPUs communicate.

Google TPU v4. Image courtesy of Google.

Google has been hyping up its TPU v4 artificial intelligence chips, which are being used to run internal artificial intelligence applications with LLMs, such as Google’s Bard offering. Google’s AI subsidiary, DeepMind, has said that the fast TPUs are guiding AI development for general and scientific applications.

By comparison, Google’s A3 supercomputer is versatile, and can be tuned to a wide range of AI applications and LLMs. “Given the high demands of these workloads, a one-size-fits-all approach is not enough – you need infrastructure that’s purpose-built for AI,” said Kim and Kleban in the blog entry.

As much as Google loves its TPUs, Nvidia’s GPUs have become a necessity for cloud providers given customers are writing AI applications in CUDA, which is Nvidia’s proprietary parallel programming model. The software toolkit generates the fastest results based on acceleration provided by H100’s specialized AI and graphics cores.

Customers can run AI applications via the A3 VMs, and use Google’s AI development and management services available via Vertex AI, Google Kubernetes Engine, and Google Compute Engine services.

Companies can rent GPUs on the A3 supercomputer for one-off jobs to train large-scale models, including large-language models. The model can then be updated with new data – without the need for retraining from scratch.

Google’s A3 supercomputer is a mish-mash of various technologies to boost GPU-to-GPU communications and network performance. The A3 virtual machines are based on Intel’s fourth-generation Xeon chips (codenamed Sapphire Rapids), which are packaged alongside the H100 GPUs. It is not clear whether the virtual CPUs in the VMs will support the inferencing accelerators built into the Sapphire Rapids chips. The VMs come with DDR5 memory.

Training models on Nvidia’s H100 is faster and cheaper than on its previous-generation A100 GPUs, which are widely available in the cloud. A study by AI services company MosaicML found the H100 “to be 30% more cost-effective and 3x faster than the NVIDIA A100” on its seven-billion-parameter MosaicGPT large language model.
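MosaicML’s two numbers fit together arithmetically: a large speedup can more than offset a higher hourly price. The sketch below uses an assumed H100/A100 price ratio purely for illustration – it is not a figure from the MosaicML study:

```python
# Illustrative arithmetic only: the price ratio below is an assumption,
# not MosaicML's published pricing. It shows how "3x faster" can yield
# "~30% more cost-effective" despite a higher hourly rate.
speedup = 3.0       # H100 vs A100 training throughput (MosaicML's figure)
price_ratio = 2.1   # assumed H100/A100 hourly price ratio (hypothetical)

cost_ratio = price_ratio / speedup
print(f"H100 training cost vs A100: {cost_ratio:.0%}")  # 70%, i.e. ~30% savings
```

In other words, as long as the hourly premium for H100 capacity stays below the training speedup, total cost per training run falls.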

The H100 can also handle inferencing, though it may be overkill for that task given the processing power it provides. Google Cloud offers Nvidia’s L4 GPUs for inferencing, and Intel has inferencing accelerators in its Sapphire Rapids CPUs.

Nvidia’s L4 GPU. Image courtesy of Nvidia.

“A3 VMs are also a strong fit for inference workloads, seeing up to a 30x inference performance boost when compared to our A2 VM’s A100 GPUs,” Google’s Kim and Kleban said.

The A3 VMs are the first to connect GPU instances via the Mount Evans infrastructure processing unit, which was developed jointly by Google and Intel. The IPU lets the A3 virtual machines offload networking, storage management, and security features that were traditionally handled by virtual CPUs, and it supports data transfers at 200Gbps.

“A3 is the first GPU instance to use our custom-designed 200Gbps IPUs, with GPU-to-GPU data transfers bypassing the CPU host and flowing over separate interfaces from other VM networks and data traffic. This enables up to 10x more network bandwidth compared to our A2 VMs, with low tail latencies and high bandwidth stability,” the Google executives said in a blog entry.

The IPU’s throughput may be soon challenged by Microsoft, whose upcoming AI supercomputer with Nvidia’s H100 GPUs will have the chipmaker’s Quantum-2 400Gbps networking capabilities. Microsoft has not revealed the number of H100 GPUs in its next-generation AI supercomputer.

The A3 supercomputer is built on a spine derived from the company’s Jupiter datacenter networking fabric, which connects the geographically diverse GPU clusters via optical links.

“For almost every workload structure, we achieve workload bandwidth that is indistinguishable from more expensive off-the-shelf non-blocking network fabrics,” Google said.

Google also shared that the A3 supercomputer will have eight H100 GPU blocks that are interconnected using Nvidia’s proprietary switching and chip interconnect technology. The GPUs will be connected via the NVSwitch and the NVLink interconnect, which communicate at speeds of roughly 3.6TBps. The same speed is offered by Azure on its AI supercomputer, and both companies deploy Nvidia’s board designs.
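The 3.6TBps figure falls out of Nvidia’s published per-GPU NVLink spec for the H100 – 900GB/s of bidirectional bandwidth per GPU – when counted as bisection bandwidth across an eight-GPU server. A sketch, assuming those public spec-sheet numbers:

```python
# How 3.6 TB/s falls out of published specs: 900 GB/s of bidirectional
# NVLink bandwidth per H100, counted as bisection bandwidth across an
# 8-GPU NVSwitch-connected server. Spec-sheet assumptions, not Google data.
GPUS_PER_SERVER = 8
NVLINK_GBPS_PER_GPU = 900  # GB/s bidirectional per H100

# Bisection bandwidth: half the GPUs exchanging data with the other half
bisection_tbps = (GPUS_PER_SERVER // 2) * NVLINK_GBPS_PER_GPU / 1000
print(f"NVSwitch bisection bandwidth: {bisection_tbps:.1f} TB/s")  # 3.6 TB/s
```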

“Each server uses NVLink and NVSwitch inside the server to inter-connect the 8 GPUs together. For GPU servers to communicate to each other, we use multiple IPUs on our Jupiter DC network fabrics,” a Google spokeswoman said.

The setup is somewhat similar to Nvidia’s DGX SuperPOD, which has 127 nodes, each DGX node equipped with eight H100 GPUs.

This story originally appeared on sister site HPCwire.

About the author: Tiffany Trader

With over a decade’s experience covering the HPC space, Tiffany Trader is one of the preeminent voices reporting on advanced scale computing today.

AIwire