Generating entangled states is crucial for quantum technologies, but creating them on today’s noisy intermediate-scale quantum (NISQ) devices is challenging, particularly because of the limited connectivity between qubits. S. Siddardha Chelluri from the Institute of Physics, Johannes Gutenberg University Mainz, Stephan Schuster and Sumeet from the Department of Physics, Friedrich-Alexander-Universität Erlangen-Nürnberg, and Riccardo Roma and colleagues investigate a method for generating GHZ states, a specific type of entangled state, that accounts for the restricted connectivity of current quantum hardware. Their research demonstrates a measurement-based approach to GHZ state generation and benchmarks its performance against standard methods, utilising an IBM Eagle processor and extensive simulations to assess scalability. The team’s findings reveal a performance trade-off between the two protocols: current architectures favour standard methods, but the measurement-based approach holds promise for future, more robust quantum devices thanks to its reduced circuit complexity and potentially faster execution times.
In this work, researchers focus on GHZ state generation under the practical constraint of limited qubit connectivity, a hallmark of current NISQ hardware. The study investigates GHZ state preparation across different connectivity graphs, inspired by IBM and Google chip architectures, as well as random graphs that reflect distributed quantum systems. Their approach is a measurement-based protocol designed to work within these connectivity constraints when generating GHZ states on NISQ devices.
Merging Smaller Clusters to Build GHZ States
Researchers are exploring methods to create large GHZ states on quantum computers, focusing on a technique called the merging protocol and comparing it to a more traditional growing protocol. The core challenge lies in building these states efficiently, given the limitations of current hardware, where not every qubit can directly interact with every other. The merging protocol constructs large GHZ states by combining smaller, entangled clusters, offering an alternative to the growing protocol, which builds the state sequentially. The team used simulations to evaluate both approaches, representing qubit connectivity as a graph and focusing on the ‘star size’, the number of qubits within each merged cluster.
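A minimal sketch of one such merging step, assuming a standard GHZ fusion rule (a CNOT between a qubit of each cluster, a mid-circuit measurement of the target qubit, then a classically controlled correction on the rest of its cluster). This NumPy simulation is purely illustrative and is not the authors’ implementation:

```python
import numpy as np

def ghz(n):
    """Statevector of the n-qubit GHZ state (|0...0> + |1...1>)/sqrt(2)."""
    psi = np.zeros(2**n)
    psi[0] = psi[-1] = 1 / np.sqrt(2)
    return psi

def apply_cnot(psi, control, target, n):
    """CNOT on a statevector; qubit 0 is the most significant bit."""
    out = psi.copy()
    cmask, tmask = 1 << (n - 1 - control), 1 << (n - 1 - target)
    for i in range(2**n):
        if i & cmask:
            out[i ^ tmask] = psi[i]
    return out

def apply_x(psi, qubit, n):
    """Pauli-X (bit flip) on one qubit."""
    mask = 1 << (n - 1 - qubit)
    out = np.empty_like(psi)
    for i in range(2**n):
        out[i ^ mask] = psi[i]
    return out

def measure(psi, qubit, n, rng):
    """Projective Z-basis measurement; returns (outcome, collapsed state)."""
    mask = 1 << (n - 1 - qubit)
    p1 = sum(abs(psi[i])**2 for i in range(2**n) if i & mask)
    outcome = int(rng.random() < p1)
    out = np.array([a if ((i & mask) != 0) == bool(outcome) else 0.0
                    for i, a in enumerate(psi)])
    return outcome, out / np.linalg.norm(out)

def fuse(psi_a, na, psi_b, nb, rng):
    """Merge GHZ_A and GHZ_B: CNOT from the last A qubit onto the first
    B qubit, measure that B qubit, and flip the rest of B on outcome 1."""
    n = na + nb
    psi = apply_cnot(np.kron(psi_a, psi_b), na - 1, na, n)
    outcome, psi = measure(psi, na, n, rng)
    if outcome:                          # classically controlled correction
        for q in range(na + 1, n):
            psi = apply_x(psi, q, n)
    return outcome, psi

def remove_qubit(psi, qubit, value, n):
    """Slice out a qubit known to be in |value>, leaving n-1 qubits."""
    mask = 1 << (n - 1 - qubit)
    return np.array([a for i, a in enumerate(psi)
                     if ((i & mask) != 0) == bool(value)])

# Merge two 3-qubit "stars" into one 5-qubit GHZ state
# (the measured qubit is consumed by the fusion).
rng = np.random.default_rng(0)
outcome, psi = fuse(ghz(3), 3, ghz(3), 3, rng)
reduced = remove_qubit(psi, 3, outcome, 6)
print(round(abs(np.dot(ghz(5), reduced)), 6))  # prints 1.0
```

Either measurement outcome yields a perfect larger GHZ state after the correction, which is why only the classical outcome record, not a post-selection step, is needed.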
Simulations were performed on graphs mirroring the connectivity of IBM and Google quantum computers, as well as randomly connected graphs. Researchers tested the merging protocol with different scaling factors, relating the target cluster size to the average size in the initial graph. The results reveal that the optimal strategy depends on the graph’s structure. On structured layouts like those found in IBM and Google computers, maximizing the cluster size during merging consistently yielded the best results. However, on randomly connected graphs, a trade-off emerged between circuit depth and the number of mid-circuit measurements required.
The research suggests that the best strategy for random graphs depends on the graph’s specific parameters. This work demonstrates that the optimal strategy for building GHZ states is heavily influenced by the underlying connectivity of the qubits. For structured layouts, maximizing cluster size during merging is most effective, while random layouts require a balance between circuit complexity and measurement requirements. The merging protocol can be competitive with the growing protocol, particularly in structured layouts, offering valuable insights for designing efficient quantum circuits.
Merging Protocol Efficiently Creates Large Entangled States
Researchers have developed and compared two distinct methods for creating large, entangled states known as GHZ states, crucial for advanced quantum computing applications. One approach, termed the merging protocol, builds large GHZ states by repeatedly combining smaller ones. This involves identifying groups of well-connected qubits, called “stars”, and constructing a GHZ state within each star. These smaller states are then merged via measurements and corrective operations, gradually building a larger entangled state. The alternative, a growing protocol, takes a different tack, constructing the GHZ state in a step-by-step manner using only standard quantum operations.
Starting with a small GHZ state built around a highly connected qubit, the method expands the entangled state by sequentially adding neighboring qubits via controlled-X gates. This approach avoids mid-circuit measurements, which can introduce errors, and is well-suited for current quantum hardware where stability is a primary concern. Comparative analysis reveals a trade-off between the two methods. The merging protocol excels when devices become more stable, as it requires fewer operations overall, potentially leading to faster execution times. The growing protocol, while more suitable for current technology, demands a greater number of operations. Importantly, both methods demonstrate an ability to effectively utilize the available connections between qubits, maximizing performance within the constraints of the hardware.
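The sequential growth just described can be sketched as follows: an illustrative NumPy statevector simulation (not the authors’ code) in which a GHZ state is seeded at a root qubit with a Hadamard and then extended along the edges of a connectivity graph with controlled-X gates. The five-qubit graph below is a hypothetical example topology:

```python
import numpy as np

def apply_h(psi, qubit, n):
    """Hadamard on one qubit (qubit 0 = most significant bit)."""
    mask = 1 << (n - 1 - qubit)
    out = np.empty_like(psi)
    for i in range(2**n):
        if i & mask:
            continue
        a, b = psi[i], psi[i | mask]
        out[i] = (a + b) / np.sqrt(2)
        out[i | mask] = (a - b) / np.sqrt(2)
    return out

def apply_cnot(psi, control, target, n):
    """CNOT on a statevector."""
    out = psi.copy()
    cmask, tmask = 1 << (n - 1 - control), 1 << (n - 1 - target)
    for i in range(2**n):
        if i & cmask:
            out[i ^ tmask] = psi[i]
    return out

def grow_ghz(graph, root, n):
    """Grow a GHZ state: Hadamard on the root, then a breadth-first walk
    over the connectivity graph, entangling each newly reached qubit with
    a CNOT from the already-entangled neighbour that reached it."""
    psi = np.zeros(2**n)
    psi[0] = 1.0
    psi = apply_h(psi, root, n)
    visited, frontier = {root}, [root]
    while frontier:
        nxt = []
        for parent in frontier:
            for child in graph[parent]:
                if child not in visited:
                    visited.add(child)
                    psi = apply_cnot(psi, parent, child, n)
                    nxt.append(child)
        frontier = nxt
    return psi

# Hypothetical 5-qubit connectivity graph (a tree rooted at qubit 0).
graph = {0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1], 4: [2]}
psi = grow_ghz(graph, root=0, n=5)
# All weight sits on |00000> and |11111>, as expected for a GHZ state.
print(round(abs(psi[0])**2 + abs(psi[-1])**2, 6))  # prints 1.0
```

Because each CNOT only crosses an existing edge of the graph, the circuit depth is governed by the graph’s radius from the chosen root, which is why starting from a highly connected qubit keeps the circuit shallow.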
Unitary Protocol Excels with Current Hardware
This research explores methods for generating GHZ states on current and near-future quantum computers. The team investigated two distinct approaches: a measurement-based protocol that ‘merges’ smaller entangled states, and a fully unitary protocol that ‘grows’ entanglement step-by-step. Both methods were implemented on IBM quantum hardware and extensively simulated across various network topologies, mirroring architectures from IBM, Google, and distributed quantum systems. The results demonstrate that the unitary protocol currently performs better on available hardware. However, the measurement-based protocol offers potential advantages as quantum devices improve, specifically with higher fidelity two-qubit gates and more accurate readout capabilities, due to its shorter circuit depth.
The study also highlights a discrepancy between simulated and measured fidelities, suggesting that error mitigation techniques could further enhance performance. The authors acknowledge limitations stemming from experiments conducted solely on IBM hardware and the use of simplified noise models. Future research directions include extending the study to other quantum computing platforms, such as photonic, trapped-ion, and neutral-atom systems. The team also proposes applying reinforcement learning to optimise the measurement-based protocol and developing more accurate noise models for different hardware systems.
👉 More information
🗞 Shallow-depth GHZ state generation on NISQ devices
🧠 DOI: https://doi.org/10.48550/arXiv.2507.19145
