Hammerspace Sets New Benchmark in MLPerf 1.0

Hammerspace has achieved groundbreaking results in the MLPerf 1.0 benchmark with its Tier 0 technology, a new tier of ultra-fast shared storage that utilizes local NVMe storage in GPU servers. This innovation eliminates storage bottlenecks and maximizes GPU performance, transforming GPU computing infrastructure by improving resource utilization and reducing costs for AI, HPC, and other data-intensive workloads.

The MLPerf 1.0 benchmark, initially released by MLCommons in September 2024, was used to validate the performance benefits of Tier 0 architecture. The tests were run on Supermicro servers using ScaleFlux NVMe drives, demonstrating that Tier 0 enabled GPU servers to achieve 32% greater GPU utilization and 28% higher aggregate throughput compared to external storage accessed via 400GbE networking.

David Flynn, Founder and CEO of Hammerspace, stated that the MLPerf 1.0 benchmark results are a testament to Tier 0’s ability to unlock the full potential of GPU infrastructure, eliminating network constraints, scaling performance linearly, and delivering unparalleled financial benefits.

Hammerspace Sets New Records in MLPerf 1.0 Benchmark with Tier 0 Storage

Hammerspace, a company revolutionizing the way unstructured data is used and preserved, has announced groundbreaking results in the MLPerf 1.0 benchmark using its innovative Tier 0 technology. This new tier of ultra-fast shared storage leverages local NVMe storage within GPU servers to eliminate storage bottlenecks and maximize GPU performance.

The MLCommons MLPerf 1.0 benchmark was initially released in September 2024, and Hammerspace used it to validate the performance benefits of its Tier 0 architecture. The tests ran the bandwidth-intensive 3D-UNet workload on Supermicro servers with ScaleFlux NVMe drives, and the results were compared with previously submitted benchmarks from other vendors.

Hammerspace’s Tier 0 technology achieved unmatched capabilities, demonstrating a significant improvement in resource utilization and cost reduction for AI, HPC, and other data-intensive workloads. By leveraging local NVMe storage inside GPU servers, Tier 0 makes existing deployed NVMe storage available as shared storage, delivering the performance benefits of a major network upgrade without the cost or disruption.

Virtually Zero CPU Overhead, No Network Bandwidth Constraints

Hammerspace’s software leverages the Linux kernel for protocol services and communication with Anvil metadata servers, using only a tiny fraction of the CPU. This leaves server resources available for their intended tasks. The benchmark demonstrated that network speed is critical to maintaining GPU efficiency, and traditional setups using 2x100GbE interfaces struggled under load. In contrast, Tier 0 local storage eliminates the network dependency entirely.
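A rough back-of-the-envelope comparison shows why the network interface becomes the ceiling. The figures below are illustrative round numbers (link speeds and an assumed per-drive NVMe bandwidth), not Hammerspace benchmark results:

```python
# Illustrative sketch: why 2x100GbE caps a network-fed GPU server while
# GPU-local NVMe does not. All figures are assumed round numbers.

GBE_100_BYTES_PER_S = 100e9 / 8          # one 100GbE link ~= 12.5 GB/s
network_bw = 2 * GBE_100_BYTES_PER_S     # 2x100GbE ~= 25 GB/s per server

NVME_BW = 7e9                            # assumed ~7 GB/s per PCIe Gen4 NVMe drive
drives_per_server = 8                    # assumed drive count
local_bw = drives_per_server * NVME_BW   # ~= 56 GB/s of GPU-local bandwidth

print(f"network-fed ceiling: {network_bw / 1e9:.0f} GB/s")
print(f"local NVMe ceiling:  {local_bw / 1e9:.0f} GB/s")
```

Under these assumptions, a server reading from its own NVMe drives has more than twice the bandwidth available than one fed over 2x100GbE, which is consistent with the benchmark's finding that the network-attached configuration struggled under load.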

Linearly Scalable Performance with CapEx and OpEx Benefits

Tier 0 achieves linear performance scaling by processing data directly on GPU-local storage, bypassing traditional bottlenecks. Hammerspace’s data orchestration delivers data to local NVMe, protects it, and seamlessly offloads checkpointing and computation results. Extrapolated results from the benchmark confirm that scaling GPU servers with Tier 0 storage multiplies both throughput and GPU utilization linearly, ensuring consistent, predictable performance gains as clusters expand.
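The linear-scaling claim can be sketched numerically: with GPU-local storage, each added server brings its own bandwidth, while servers sharing a fixed external array eventually saturate it. Both the per-server and array bandwidths below are assumptions for illustration, not figures from the benchmark:

```python
# Linear-scaling sketch (assumed numbers): GPU-local bandwidth adds up per
# server, while a fixed external array caps aggregate throughput.
per_server_local_gbs = 50      # assumed GPU-local read bandwidth per server
per_server_net_gbs = 25        # assumed 2x100GbE per server
external_array_gbs = 200       # assumed total bandwidth of a shared external array

for n in (1, 4, 16, 64):
    local = n * per_server_local_gbs                      # grows linearly
    external = min(n * per_server_net_gbs, external_array_gbs)  # plateaus
    print(f"{n:>3} servers: local {local} GB/s vs external {external} GB/s")
```

The local-storage line keeps multiplying with server count, while the external-array line flattens once the array's total bandwidth is consumed.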

The integration of GPU-local NVMe into a global shared file system delivers measurable financial and operational benefits. These include reduced external storage costs, faster deployment, and enhanced GPU efficiency. With checkpointing durations reduced from minutes to seconds, Tier 0 unlocks significant compute capacity, accelerating job completion without additional hardware investments.
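The "minutes to seconds" checkpointing claim follows directly from the bandwidth difference. A hypothetical worked example (checkpoint size and both bandwidths are assumptions, chosen only to show the order-of-magnitude shift):

```python
# Hypothetical checkpoint-duration math; all values are assumptions.
checkpoint_gb = 500              # assumed cluster checkpoint size, GB
network_bw_gbs = 5               # assumed effective per-job share of a congested network
local_bw_gbs = 50                # assumed aggregate GPU-local NVMe write bandwidth

t_network = checkpoint_gb / network_bw_gbs  # seconds over the network
t_local = checkpoint_gb / local_bw_gbs      # seconds to GPU-local NVMe

print(f"network checkpoint: {t_network:.0f} s (~minutes)")
print(f"local checkpoint:   {t_local:.0f} s (seconds)")
```

With these assumed numbers, a checkpoint drops from roughly 100 seconds to roughly 10, and every second saved is returned to the job as compute time.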

About Hammerspace

Hammerspace is radically changing how unstructured data is used and preserved. Their Global Data Platform unifies unstructured data across edge, data centers, and clouds, providing extreme parallel performance for AI, GPUs, and high-speed data analytics. The platform orchestrates data to any location and any storage system, eliminating data silos and making data an instantly accessible resource for users, applications, and compute clusters, regardless of their location.

Hammerspace’s innovative approach has earned them recognition in the industry, including being honored as one of the “Editors’ Choice: Top 5 Vendors to Watch” in the 2024 HPCwire Readers’ and Editors’ Choice Awards.
