NVIDIA’s Blackwell Platform: A New Chapter in Computing
NVIDIA has announced the arrival of its Blackwell platform, a significant development in the field of computing. The platform is designed to enable organizations to build and run real-time generative AI on large language models at the trillion-parameter scale, at up to 25 times lower cost and energy consumption than its predecessor. The Blackwell GPU architecture incorporates six transformative technologies for accelerated computing, which are expected to catalyze breakthroughs across data processing, engineering simulation, electronic design automation, computer-aided drug design, quantum computing, and generative AI. The platform is named after David Harold Blackwell, a mathematician who specialized in game theory and statistics.
Blackwell’s Adoption and Impact Across Industries
Several organizations are expected to adopt the Blackwell platform, including Amazon Web Services, Dell Technologies, Google, Meta, Microsoft, OpenAI, Oracle, Tesla, and xAI. These organizations see the Blackwell GPU as a way to manage their compute infrastructure as the industry navigates the shift to AI platforms. The Blackwell GPU is expected to bring breakthrough capabilities, accelerating future discoveries and innovations across industries.
Blackwell’s Revolutionary Technologies
The Blackwell platform is equipped with six revolutionary technologies that enable AI training and real-time large language model inference for models scaling up to 10 trillion parameters. These include what NVIDIA describes as the world’s most powerful chip, a second-generation Transformer Engine, fifth-generation NVLink, a RAS (reliability, availability, and serviceability) engine, secure AI capabilities, and a decompression engine. Together, these technologies are designed to maximize system uptime, improve resiliency for massive-scale AI deployments, protect AI models and customer data, and accelerate database queries, among other functions.
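To give a rough sense of why memory and numeric precision matter at this scale, the back-of-envelope sketch below estimates the storage needed just to hold model weights at several bit widths. The parameter counts and formats used here are illustrative assumptions for the arithmetic, not Blackwell specifications.

```python
# Back-of-envelope estimate of weight-storage requirements for very large models
# at different (hypothetical) numeric precisions. Illustrative only: real
# deployments also need memory for activations, KV caches, and other state,
# and the bit widths shown are assumptions, not Blackwell specifications.

def weight_memory_tb(num_params: float, bits_per_param: int) -> float:
    """Return terabytes needed to store `num_params` weights at the given bit width."""
    bytes_total = num_params * bits_per_param / 8
    return bytes_total / 1e12  # decimal terabytes

if __name__ == "__main__":
    for params in (1e12, 10e12):   # 1 trillion and 10 trillion parameters
        for bits in (16, 8, 4):    # example formats: 16-bit, 8-bit, 4-bit weights
            tb = weight_memory_tb(params, bits)
            print(f"{params / 1e12:>4.0f}T params @ {bits:>2}-bit: {tb:6.1f} TB of weights")
```

The takeaway from the arithmetic is simply that halving the bit width halves the weight footprint, which is why lower-precision inference is central to serving models of this size.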
The NVIDIA GB200 Grace Blackwell Superchip
The NVIDIA GB200 Grace Blackwell Superchip is a key component of the Blackwell platform. It connects two NVIDIA B200 Tensor Core GPUs to the NVIDIA Grace CPU over a 900 GB/s ultra-low-power NVLink chip-to-chip interconnect. The GB200 is also the building block of the NVIDIA GB200 NVL72, a multi-node, liquid-cooled, rack-scale system designed for the most compute-intensive workloads. For large language model inference workloads, the GB200 NVL72 provides up to a 30x performance increase over the same number of NVIDIA H100 Tensor Core GPUs, while reducing cost and energy consumption by up to 25x.
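To put the 900 GB/s chip-to-chip figure in perspective, the short sketch below computes the idealized time to move a payload across a link of that bandwidth. The payload sizes are arbitrary examples chosen for illustration, not measurements of the GB200.

```python
# Simple bandwidth arithmetic: idealized time to move a payload across a link.
# The 900 GB/s figure is the NVLink chip-to-chip bandwidth quoted above;
# the payload sizes are arbitrary examples, and real transfers would see
# protocol overhead and contention.

NVLINK_C2C_GB_PER_S = 900  # GB/s, as quoted for the Grace-to-Blackwell link

def transfer_time_s(payload_gb: float, bandwidth_gb_per_s: float = NVLINK_C2C_GB_PER_S) -> float:
    """Idealized transfer time in seconds for `payload_gb` gigabytes."""
    return payload_gb / bandwidth_gb_per_s

if __name__ == "__main__":
    for payload in (100, 500, 2000):  # GB
        t = transfer_time_s(payload)
        print(f"{payload:>5} GB over a {NVLINK_C2C_GB_PER_S} GB/s link: ~{t:.2f} s (idealized)")
```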
Global Network of Blackwell Partners
Blackwell-based products are expected to be available from partners later this year. Cloud service providers including AWS, Google Cloud, Microsoft Azure, and Oracle Cloud Infrastructure will be among the first to offer Blackwell-powered instances. Additionally, a growing network of software makers, including Ansys, Cadence, and Synopsys, will use Blackwell-based processors to accelerate their software for designing and simulating electrical, mechanical, and manufacturing systems and parts, enabling their customers to bring products to market faster, at lower cost, and with higher energy efficiency.
