Quantum annealing holds considerable promise for tackling complex optimisation problems, yet its performance is fundamentally limited by the architecture of the quantum hardware itself. Mario Bifulco and Luca Roversi, both from the University of Turin, investigate how the physical connections between qubits affect the efficiency of solving these problems. Their work centres on ‘minor embedding’, the process that translates a problem into a form the quantum hardware can execute, and shows how certain hardware topologies introduce vulnerabilities to noise. By comparing minor embedding on the ‘Zephyr’ graphs used in current hardware with a class of ‘Havel-Hakimi’ graphs, the researchers demonstrate that the latter require shorter qubit chains and scale more gracefully to larger problem sizes, paving the way for more robust and powerful quantum annealers.
The connectivity graph defines how qubits are linked within a quantum annealing-based quantum processing unit (QPU). Minor embedding maps a problem onto this hardware by associating each logical variable with a chain of connected qubits, so that the relationships in the problem are preserved as physical connections. Because current QPU designs have fixed qubit arrangements rather than fully connected networks, they often force inefficient mappings, prompting the researchers to propose a methodology and criteria for evaluating how hardware topology affects problem embedding.
Topology Impacts Quantum Annealing Minor Embedding
This research investigates how QPU topology influences the performance of minor embedding, a critical step in quantum annealing. The team evaluated Zephyr graphs, currently used in D-Wave systems, and Havel-Hakimi graphs, which allow controlled variation of qubit connectivity. This approach makes it possible to study how the ratio of nodes to connections affects the success of mapping a problem graph onto the QPU graph. The findings, obtained through minor embedding simulations, suggest that Havel-Hakimi-based topologies require shorter qubit chains on average and show smoother scaling of the largest embeddable graph as the QPU size increases, indicating their potential as alternative designs for quantum annealing-based QPUs.
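As a rough illustration of that comparison, the sketch below builds a Zephyr graph with dwave-networkx and a Havel-Hakimi graph of the same size and average degree with networkx, so their node-to-connection ratios can be put side by side. The size parameter m=3 and the uniform degree sequence are illustrative assumptions, not the paper's exact experimental settings.

```python
# Minimal sketch, assuming the standard networkx and dwave-networkx packages:
# build a Zephyr hardware graph and a Havel-Hakimi graph matched in node count
# and average degree, so their connectivity can be compared directly.
import networkx as nx
import dwave_networkx as dnx

def zephyr_and_havel_hakimi(m=3):
    zephyr = dnx.zephyr_graph(m)               # D-Wave Zephyr topology Z_m
    n = zephyr.number_of_nodes()
    # Match Zephyr's average degree with a uniform degree sequence (assumption).
    degree = round(2 * zephyr.number_of_edges() / n)
    havel_hakimi = nx.havel_hakimi_graph([degree] * n)
    return zephyr, havel_hakimi

zephyr, hh = zephyr_and_havel_hakimi()
for name, graph in (("Zephyr", zephyr), ("Havel-Hakimi", hh)):
    avg_deg = 2 * graph.number_of_edges() / graph.number_of_nodes()
    print(f"{name}: {graph.number_of_nodes()} qubits, average degree {avg_deg:.2f}")
```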
The workflow for quantum annealing involves formulating the optimization problem, embedding it into the QPU’s hardware graph, and executing the annealing process. Minor embedding, the hardware-aware compilation step, effectively routes problem connections across the physical QPU topology, minimizing the total qubit chain length required while preserving all logical connections. A graph is considered a minor of another if it can be obtained through edge contractions and vertex or edge deletions. Recent D-Wave annealers demonstrate a trade-off between connectivity and scalability, with the Pegasus topology allowing embeddings of up to roughly 100-variable cliques on a QPU with thousands of qubits, and the newer Zephyr topology offering increased connectivity.
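In practice the minor-embedding step is performed with a heuristic such as minorminer, the tool used in the D-Wave software stack. The hedged sketch below embeds a small clique into a Zephyr graph and reports the resulting chain lengths; the clique size, Zephyr size, and random seed are arbitrary choices for illustration.

```python
# Minimal sketch, assuming networkx, dwave-networkx, and minorminer are installed:
# embed an 8-variable clique into a Zephyr target graph and inspect the chains.
import networkx as nx
import dwave_networkx as dnx
import minorminer

problem = nx.complete_graph(8)    # logical problem graph: the clique K_8
hardware = dnx.zephyr_graph(2)    # target QPU connectivity graph (illustrative size)

# find_embedding returns a dict mapping each logical variable to its chain,
# i.e. a list of physical qubits; an empty dict means the heuristic failed.
embedding = minorminer.find_embedding(problem.edges, hardware.edges, random_seed=0)

if embedding:
    chain_lengths = [len(chain) for chain in embedding.values()]
    print("qubits used:", sum(chain_lengths))
    print("longest chain:", max(chain_lengths))
```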
The researchers generated both Zephyr and Havel-Hakimi graphs and, for each QPU, determined the largest embeddable clique, collecting descriptive statistics of the qubit chain lengths along with the size of that clique. The results show that Havel-Hakimi graphs scale almost linearly: the size of the embeddable clique grows proportionally with increasing connectivity. Zephyr topologies, by contrast, follow a sublinear trend and become less expressive at higher degrees, while embeddings on Havel-Hakimi graphs also yield shorter qubit chains and scale more smoothly. These findings suggest that synthetic topologies with controlled degree distributions can enhance embedding capacity without excessive chain length, potentially improving the robustness and scalability of quantum annealing. Future work will extend this comparative analysis to a broader set of QPU topologies and investigate correlations between structural indicators and practical embedding quality metrics. The researchers also plan to explore new candidate topologies derived from graph-theoretical principles, focusing on structures that minimize qubit chain length while maintaining feasible connectivity and manufacturability, supporting the co-design of hardware architectures and embedding heuristics.
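One possible shape for that experiment is sketched below: grow the clique size until the embedding heuristic fails, then record chain-length statistics for the largest success. The incremental stopping rule, retry budget, and timeout are assumptions for illustration, not necessarily the procedure used in the paper.

```python
# Minimal sketch, assuming networkx and minorminer: find the largest clique the
# heuristic can embed in a given hardware graph and summarise its chain lengths.
import statistics
import networkx as nx
import minorminer

def largest_embeddable_clique(hardware, start=2, tries=5, timeout=30):
    # Assumes the hardware graph can embed at least K_start.
    best_size, best_embedding = None, None
    size = start
    while True:
        embedding = minorminer.find_embedding(
            nx.complete_graph(size).edges, hardware.edges,
            tries=tries, timeout=timeout)
        if not embedding:          # heuristic failed: keep the previous success
            break
        best_size, best_embedding = size, embedding
        size += 1
    chain_lengths = [len(chain) for chain in best_embedding.values()]
    return best_size, {
        "median_chain": statistics.median(chain_lengths),
        "mean_chain": statistics.mean(chain_lengths),
        "max_chain": max(chain_lengths),
    }
```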
Havel-Hakimi Graphs Improve Embedding Capacity
This work presents a detailed evaluation of how QPU topology impacts the performance of quantum annealing, specifically focusing on the minor embedding process. Researchers investigated Zephyr graphs, currently used in D-Wave systems, and Havel-Hakimi graphs, which allow for controlled variation in connectivity. The core of the study involved assessing the maximum size of a clique that could be successfully embedded within each topology and measuring the length of the resulting qubit chains. The results show a clear advantage for Havel-Hakimi graphs in both embedding capacity and chain compactness: they scale almost linearly and embed larger cliques, whereas Zephyr topologies display a sublinear trend and saturate at higher connectivities.
Crucially, embeddings on Havel-Hakimi graphs consistently yielded shorter qubit chains, indicating a more efficient use of qubits and potentially improved robustness against noise. To compare topologies directly, the team normalized the average degree of the graphs and the size of the embedded clique, removing the dependence on QPU size. The measurements confirm that Havel-Hakimi graphs maintain smoother scaling of the largest embeddable clique as the QPU size increases, suggesting a pathway to more scalable quantum annealing systems; the median chain length for the largest embedded clique is also consistently lower on Havel-Hakimi graphs, indicating a more compact representation of the problem on the hardware.
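One plausible form of that normalization is sketched below, under the assumption that both quantities are rescaled by their maxima for an n-node graph; the paper may use a different convention.

```python
# Hypothetical normalization sketch: rescale average degree and embedded clique
# size so that 1.0 corresponds to a fully connected n-node QPU, making
# topologies of different sizes comparable on a single scale.
def normalized_metrics(graph, largest_clique_size):
    n = graph.number_of_nodes()
    avg_degree = 2 * graph.number_of_edges() / n
    return {
        "normalized_avg_degree": avg_degree / (n - 1),       # 1.0 = complete graph
        "normalized_clique_size": largest_clique_size / n,   # 1.0 = K_n embeds
    }
```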
Havel-Hakimi Graphs Enhance Quantum Annealing
This research presents a detailed comparative analysis of QPU topologies, specifically examining how hardware connectivity impacts the efficiency of quantum annealing. The scientists systematically evaluated the performance of Zephyr and Havel-Hakimi graphs when used as the basis for embedding optimization problems, a crucial step in using quantum annealers. While the study focused on these two topologies, the methodology developed provides a framework for evaluating a wider range of designs. The authors acknowledge that further research is needed to explore additional topologies and to correlate structural characteristics with practical embedding quality, including chain stability and noise susceptibility. Future work will also investigate novel topologies derived from graph theory, aiming to balance physical feasibility, connectivity, and embedding efficiency, ultimately supporting the co-design of hardware and embedding heuristics.
👉 More information
🗞 Exploring Topologies in Quantum Annealing: A Hardware-Aware Perspective
🧠 ArXiv: https://arxiv.org/abs/2511.03327
