CoreWeave Achieves New Record-Breaking AI Inferencing Benchmark with NVIDIA GB200 Grace Blackwell Superchips
CoreWeave, the AI Hyperscaler™, today announced its MLPerf v5.0 results, setting a new industry benchmark in AI inference with NVIDIA GB200 Grace Blackwell Superchips. Using a CoreWeave instance with NVIDIA GB200, featuring two NVIDIA Grace CPUs and four NVIDIA Blackwell GPUs, CoreWeave delivered 800 tokens per second (TPS) on the Llama 3.1 405B model,1 one of the largest open-source models.
“CoreWeave is committed to delivering cutting-edge infrastructure optimized for large-model inference through our purpose-built cloud platform,” said Peter Salanki, Chief Technology Officer at CoreWeave. “These benchmark MLPerf results reinforce CoreWeave’s position as a preferred cloud provider for leading AI labs and enterprises.”
CoreWeave also submitted new results for NVIDIA H200 GPU instances, achieving 33,000 TPS on the Llama 2 70B model, a 40 percent improvement in throughput over NVIDIA H100 instances.2
These results further establish CoreWeave as an industry-leading cloud infrastructure services provider. This year, the company became the first to offer general availability of NVIDIA GB200 NVL72-based instances. Last year, it was among the first to offer NVIDIA H100 and H200 GPUs, and one of the first to demo NVIDIA GB200 NVL72.
MLPerf Inference is an industry-standard benchmark suite for measuring machine learning performance across realistic deployment scenarios. How quickly a system can process inputs and produce results from a trained model has a direct impact on user experience.