CoreWeave Introduces Nvidia’s GB200 NVL72 Systems for AI Training
Cloud services provider CoreWeave has announced that it now offers Nvidia's GB200 NVL72 "Grace Blackwell" systems for customers running intensive AI training workloads.
CoreWeave says its cloud services have been specifically optimized for the GB200 NVL72, including its Kubernetes Service, Slurm on Kubernetes (SUNK), Mission Control, and related offerings. The company's Blackwell instances can scale up to 110,000 Blackwell GPUs, connected with Nvidia Quantum-2 InfiniBand networking.
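Because SUNK exposes a standard Slurm interface on top of Kubernetes, a multi-node training job on these instances would typically be submitted with an ordinary Slurm batch script. The sketch below is illustrative only: the node and GPU counts, job name, and `train.py` entry point are assumptions for demonstration, not values documented by CoreWeave.

```shell
#!/bin/bash
# Hypothetical multi-node training job for a Slurm cluster such as SUNK.
# Node count, GPUs per node, and the training script are assumptions.
#SBATCH --job-name=llm-train
#SBATCH --nodes=4                 # number of compute nodes (assumed)
#SBATCH --gres=gpu:4              # GPUs requested per node (assumed)
#SBATCH --ntasks-per-node=4       # one task per GPU
#SBATCH --time=24:00:00           # wall-clock limit

# Launch one training process per GPU across all allocated nodes.
srun python train.py --precision bf16
```

In practice, the partition names, GPU resource strings, and container or module setup would come from the provider's cluster configuration rather than this generic template.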
The GB200 NVL72 itself is a formidable system: 36 Grace CPUs and 72 Blackwell GPUs interconnected to act as a single, unified processor. It is aimed primarily at training advanced large language models.