NVIDIA Contributes Blackwell Platform to Open Compute Project Accelerating AI

NVIDIA has made a significant contribution to the Open Compute Project (OCP), an open hardware community, by sharing its Blackwell platform design. This move is expected to accelerate innovation in artificial intelligence infrastructure. At the OCP Global Summit, NVIDIA announced that it will share key elements of its GB200 NVL72 system electro-mechanical design with the OCP community, including rack architecture and thermal environment specifications.

This builds on NVIDIA’s earlier contributions to OCP, including its HGX H100 baseboard design specification. Jensen Huang, founder and CEO of NVIDIA, emphasized the importance of advancing open standards to help organizations worldwide take advantage of accelerated computing and create the AI factories of the future.

The Blackwell platform is designed to power a new era of AI, featuring 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell GPUs in a rack-scale design. Meta, a key partner, plans to contribute its Catalina AI rack architecture based on GB200 NVL72 to OCP, providing computer makers with flexible options to build high compute density systems.

Accelerating AI Infrastructure Innovation through Open Hardware Ecosystems

NVIDIA has made a significant contribution to the Open Compute Project (OCP) by sharing foundational elements of its NVIDIA Blackwell accelerated computing platform design. This move aims to drive the development of open, efficient, and scalable data center technologies. The company has also broadened its NVIDIA Spectrum-X support for OCP standards, enabling companies to unlock the performance potential of AI factories deploying OCP-recognized equipment while preserving their investments and maintaining software consistency.

The contribution includes key portions of the NVIDIA GB200 NVL72 system electro-mechanical design, such as rack architecture, compute and switch tray mechanicals, liquid-cooling and thermal environment specifications, and NVIDIA NVLink cable cartridge volumetrics. This will support higher compute density and networking bandwidth in data centers. NVIDIA has already made several official contributions to OCP across multiple hardware generations, including its NVIDIA HGX H100 baseboard design specification.

The open hardware ecosystem approach is crucial for driving innovation in AI infrastructure. By advancing open standards, organizations worldwide can realize the full potential of accelerated computing and create the AI factories of the future. This collaboration between industry leaders will shape specifications and designs that can be widely adopted across the entire data center.

The NVIDIA Blackwell Accelerated Computing Platform

The NVIDIA Blackwell accelerated computing platform was designed to power a new era of AI. The GB200 NVL72 system is based on the NVIDIA MGX modular architecture, which enables computer makers to quickly and cost-effectively build a vast array of data center infrastructure designs. This liquid-cooled system connects 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell GPUs in a rack-scale design.

With a 72-GPU NVIDIA NVLink domain, it acts as a single, massive GPU and delivers 30x faster real-time trillion-parameter large language model inference than the NVIDIA H100 Tensor Core GPU. This platform is poised to revolutionize AI computing by providing unprecedented performance and scalability.

Critical Infrastructure for Data Centers

As the world transitions from general-purpose to accelerated and AI computing, data center infrastructure is becoming increasingly complex. To simplify the development process, NVIDIA is working closely with over 40 global electronics makers that provide key components to create AI factories. This collaboration will enable organizations to build highly flexible networks and meet the growing performance and energy efficiency needs of data centers.

Additionally, a broad array of partners are innovating and building on top of the Blackwell platform, including Meta, which plans to contribute its Catalina AI rack architecture based on GB200 NVL72 to OCP. This provides computer makers with flexible options to build high compute density systems and meet the growing demands of data centers.

The Role of Open Standards in Accelerating AI Innovation

The adoption of open standards is critical for accelerating AI innovation. NVIDIA’s contributions to OCP will enable companies deploying OCP-recognized equipment to unlock the performance potential of AI factories while preserving their existing investments and maintaining software consistency.

The company’s Spectrum-X Ethernet networking platform, which now includes the next-generation NVIDIA ConnectX-8 SuperNIC, supports OCP’s Switch Abstraction Interface (SAI) and Software for Open Networking in the Cloud (SONiC) standards. This allows customers to use Spectrum-X’s adaptive routing and telemetry-based congestion control to accelerate Ethernet performance for scale-out AI workloads.

The availability of ConnectX-8 SuperNICs for OCP 3.0 next year will equip organizations to build highly flexible networks that can meet the growing demands of data centers. The collaboration between industry leaders, such as NVIDIA and Meta, will shape specifications and designs that can be widely adopted across the entire data center, driving innovation in AI infrastructure.

Ivy Delaney


We've seen AI rise to prominence over the last few short years with the emergence of the LLM and companies such as OpenAI with its ChatGPT service. Ivy has been working with neural networks, machine learning and AI since the mid-1990s and talks about the latest exciting developments in the field.
