A new method for reducing unfair solution sampling in quantum annealing has been developed by Shunta Ide and Shu Tanaka at Keio University. Adjusting the penalty coefficient within weighted graph bipartitioning problems reduces this unfairness, according to numerical simulations and experiments utilising the D-Wave Advantage2 system. Increasing the penalty coefficient generally improves sampling fairness, although it reduces the probability of finding ground states. Scaling analysis of instances with up to 12 spins revealed that over 70% exhibited improved fairness with increasing penalty coefficients, suggesting a key pathway towards more reliable quantum optimisation.
Penalty coefficient adjustments enhance solution balance in quantum annealing
Sampling fairness in quantum annealing can be improved substantially. In over 70% of randomly generated weighted graph bipartitioning instances, fairness increased monotonically as the penalty coefficient was raised. This marks a meaningful step towards reliable optimisation, as it offers a systematic way to balance how solutions are sampled across multiple ground states. The penalty method, a standard technique for handling constraints, directly influences this fairness: its coefficient dictates how strongly infeasible solutions are penalised. Quantum annealing is a metaheuristic for finding the global minimum of an objective function over a set of candidate solutions, and its effectiveness hinges on exploring the solution space efficiently. However, physical implementations, particularly systems like the D-Wave machines, are prone to unfair sampling, in which degenerate ground states, multiple solutions sharing the same minimum energy, are not visited with equal probability. This bias can severely limit the utility of quantum annealing in applications that require a representative sample of optimal solutions.
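The penalty encoding described above can be made concrete. In the standard Ising formulation of balanced graph bipartitioning (the paper's exact formulation may differ), each node carries a spin s_i ∈ {±1}, the cut weight is the objective, and a term λ(Σ_i s_i)² penalises unbalanced partitions. The 4-node graph and weights below are purely illustrative:

```python
import itertools
import numpy as np

def bipartition_energy(s, w, lam):
    """Ising energy: cut weight plus penalty for unbalanced partitions."""
    s = np.asarray(s)
    # each undirected edge (i, j) contributes w_ij * (1 - s_i s_j) / 2;
    # summing over the full symmetric matrix double-counts, hence 0.25
    cut = 0.25 * np.sum(w * (1 - np.outer(s, s)))
    penalty = lam * np.sum(s) ** 2
    return cut + penalty

# toy symmetric weight matrix (hypothetical values, zero diagonal)
w = np.array([[0, 2, 1, 0],
              [2, 0, 0, 1],
              [1, 0, 0, 2],
              [0, 1, 2, 0]], dtype=float)

lam = 1.0  # penalty coefficient
states = list(itertools.product([-1, 1], repeat=4))
energies = [bipartition_energy(s, w, lam) for s in states]
e0 = min(energies)
ground = [s for s, e in zip(states, energies) if np.isclose(e, e0)]
print("ground energy:", e0, "degeneracy:", len(ground))
```

Here the two degenerate ground states are the same balanced cut read in both spin orientations; fair sampling would return each with probability 1/2.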
Numerical simulations and experiments on the D-Wave Advantage2 system confirm the improvement extends to actual quantum hardware, suggesting practical applications are within reach. Analysis of 12-spin instances also revealed that increasing the penalty coefficient reduced the energy gap between the ground and first excited states. This effect was particularly pronounced in instances where the penalty method successfully enforced feasibility. The energy gap is a crucial parameter in quantum annealing: by the adiabatic theorem, the annealing time required to remain in the ground state grows roughly as the inverse square of the minimum gap, so a smaller gap raises the chance of diabatic transitions into excited states. The shrinking gap is therefore consistent with the observed trade-off, in which stronger penalties improve sampling fairness but lower the probability of reaching a ground state at all. The D-Wave Advantage2 system employs superconducting qubits arranged in the Zephyr topology, and the penalty coefficient enters the problem Hamiltonian, shaping the energy landscape presented to the qubits.
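For tiny instances, the gap's dependence on the penalty coefficient can be probed directly by diagonalising the full transverse-field Hamiltonian H(s) = -(1-s) Σ_i X_i + s H_z along the schedule. The sketch below is a brute-force numerical probe, not the authors' method, and assumes the standard penalty encoding and the same hypothetical 4-node graph; because the final ground state is doubly degenerate, the gap is taken to the first level above that manifold:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.diag([1.0, -1.0])

def op_on(site, op, n):
    """Embed a single-qubit operator at `site` in an n-qubit Hilbert space."""
    return reduce(np.kron, [op if i == site else I2 for i in range(n)])

def anneal_min_gap(J, n, deg, steps=50):
    """Minimum gap of H(s) = -(1-s) sum_i X_i + s H_z over the schedule.
    `deg` is the degeneracy of the classical ground manifold; the gap is
    measured to the first eigenvalue above it."""
    Hx = -sum(op_on(i, sx, n) for i in range(n))
    Zs = [op_on(i, sz, n) for i in range(n)]
    Hz = sum(J[i, j] * Zs[i] @ Zs[j]
             for i in range(n) for j in range(i + 1, n))
    gaps = []
    for s in np.linspace(0.05, 0.95, steps):
        evals = np.linalg.eigvalsh((1 - s) * Hx + s * Hz)
        gaps.append(evals[deg] - evals[0])
    return min(gaps)

# toy weighted graph (hypothetical); couplings from the penalty encoding:
# cut term contributes -w_ij/2, the penalty lam*(sum s)^2 contributes +2*lam
w = np.array([[0, 2, 1, 0],
              [2, 0, 0, 1],
              [1, 0, 0, 2],
              [0, 1, 2, 0]], dtype=float)
for lam in (0.5, 2.0):
    J = 2 * lam - w / 2.0  # only the i < j entries are used
    print("lam =", lam, "min gap =", anneal_min_gap(J, 4, deg=2))
```

Sweeping lam and watching the minimum gap is the kind of scaling diagnostic the 12-spin analysis performs, here at toy scale.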
Consistent reduction in unfair sampling across a range of penalty settings on the D-Wave Advantage2 system further validated these results, confirming they were not solely a product of the simulation environment. However, these improvements were accompanied by a decrease in the probability of finding any ground-state solution, indicating a trade-off between balanced sampling and overall solution quality. Further refinement is needed to optimise the penalty coefficient for specific problem instances and to maximise both fairness and solution quality before widespread practical use. The researchers systematically varied the penalty coefficient, from lower values that permit constraint violations to higher values that strictly enforce feasibility, and analysed the resulting samples with metrics designed to quantify how probability is distributed across the degenerate ground states.
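The article does not spell out which unfairness metric was used; a common choice for this kind of analysis is the total-variation distance between the empirical distribution over degenerate ground states and the uniform distribution. A minimal sketch under that assumption, with hypothetical sample counts:

```python
import numpy as np

def sampling_unfairness(counts):
    """Total-variation distance between the empirical distribution over
    degenerate ground states and uniform (0 = perfectly fair sampling)."""
    counts = np.asarray(counts, dtype=float)
    p = counts / counts.sum()
    u = np.full_like(p, 1.0 / len(p))
    return 0.5 * np.abs(p - u).sum()

# hypothetical ground-state hit counts at two penalty settings
print(sampling_unfairness([900, 100]))  # strongly biased sampler
print(sampling_unfairness([520, 480]))  # nearly fair sampler
```

A value of 0 means every degenerate ground state was hit equally often; 1 - 1/k (for k ground states) is the worst case where all mass lands on one state.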
Mitigating biased solutions enhances potential for quantum optimisation applications
Quantum optimisation is edging closer to reliability with a method that lessens the frustrating issue of unfair sampling in complex problems. This matters because quantum annealing promises speed-ups for tasks like logistics, finance, and machine learning, but only if it delivers a truly diverse set of potential solutions. Logistics firms and financial institutions are keenly watching these developments, hoping for practical advantages. Boosting the penalty applied to problem constraints is not a guaranteed fix, however; it improved fairness in most, but not all, tested scenarios. The significance of fair sampling extends beyond simply finding a solution; it is crucial for applications requiring a comprehensive picture of the solution space. In combinatorial counting problems, for example, accurately determining the number of optimal solutions is essential, and unfair sampling can lead to significant errors. Similarly, in risk assessment or portfolio optimisation, a biased sample of solutions may fail to represent the full range of possible outcomes.
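The counting point can be made concrete: a naive estimator that counts the distinct ground states seen within a fixed sample budget badly undercounts when sampling is biased. The samplers and probabilities below are purely hypothetical stand-ins for a fair and an unfair annealer:

```python
import random

def estimate_num_solutions(sampler, n_samples, rng):
    """Naive ground-state count estimate: distinct solutions observed."""
    return len({sampler(rng) for _ in range(n_samples)})

K = 8  # true number of degenerate ground states (hypothetical)

def fair_sampler(rng):
    return rng.randrange(K)  # every ground state equally likely

def biased_sampler(rng):
    # 99.3% of the probability mass lands on ground state 0
    return 0 if rng.random() < 0.993 else rng.randrange(1, K)

rng = random.Random(0)
fair_est = estimate_num_solutions(fair_sampler, 200, rng)
biased_est = estimate_num_solutions(biased_sampler, 200, rng)
print(fair_est, biased_est)  # the biased sampler sees far fewer distinct states
```

With 200 samples the fair sampler almost surely observes all 8 solutions, while the biased one typically reports only a handful, which is exactly the failure mode that matters for counting applications.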
Improving solution diversity represents genuine progress for quantum annealing, bringing nearer the day when these machines might outperform conventional computers on complex tasks. The work focused on weighted graph bipartitioning, a common type of constrained optimisation problem, and showed that adjusting a key parameter demonstrably improves the reliability of solution sampling. A reduction in unfair sampling, where the quantum computer favours certain solutions over others, was achieved by increasing the penalty coefficient used to enforce problem constraints, tackling a key hurdle for real-world applications. Weighted graph bipartitioning involves dividing the nodes of a graph into two equal-sized sets while minimising the total weight of the edges that cross between the sets; the penalty term enforces the balance constraint. The problem is NP-hard, meaning that finding the optimal solution becomes computationally intractable as the graph grows, making it a suitable candidate for quantum annealing.
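For context on the classical side, small instances of the constrained problem can be brute-forced by enumerating all balanced bipartitions; the C(n, n/2) growth of that enumeration is what makes larger graphs intractable. The weight matrix below is illustrative only:

```python
import itertools
import math

def best_balanced_cut(w):
    """Brute-force the minimum-weight cut over balanced bipartitions of an
    n-node weighted graph; only viable for small n (C(n, n/2) candidates)."""
    n = len(w)
    nodes = range(n)
    best = (math.inf, None)
    for left in itertools.combinations(nodes, n // 2):
        left_set = set(left)
        cut = sum(w[i][j] for i in left for j in nodes if j not in left_set)
        best = min(best, (cut, left))
    return best

# hypothetical symmetric weight matrix on 4 nodes
w = [[0, 2, 1, 0],
     [2, 0, 0, 1],
     [1, 0, 0, 2],
     [0, 1, 2, 0]]
print(best_balanced_cut(w))  # → (2, (0, 1))
```

For n = 12 (the largest instances in the scaling analysis) this loop still visits only 924 candidate partitions, which is why exact degeneracies and fairness baselines can be computed classically at that size.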
The implications of this research extend to other constrained optimisation problems beyond weighted graph bipartitioning. The principle of adjusting penalty coefficients to improve sampling fairness is likely applicable to a wide range of applications, including scheduling, resource allocation, and machine learning model training. Future work could explore the use of adaptive penalty adjustment strategies, where the coefficient is dynamically adjusted during the annealing process to optimise both fairness and solution quality. Furthermore, investigating the interplay between the penalty coefficient and other parameters of the quantum annealing process, such as the annealing time and the strength of the transverse field, could lead to further improvements in performance and reliability. The ultimate goal is to develop quantum annealing systems that can consistently deliver high-quality, unbiased solutions to complex optimisation problems, unlocking their full potential for real-world impact.
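In its simplest, static form, such an adaptive strategy might amount to sweeping the penalty coefficient, measuring ground-state probability and unfairness at each setting, and selecting the best trade-off. The scoring rule and the measurement values below are hypothetical placeholders, not the authors' procedure:

```python
def select_penalty(results):
    """Pick the penalty coefficient with the best fairness/quality trade-off.
    The product score p_ground * (1 - unfairness) is a hypothetical choice."""
    return max(results, key=lambda r: r["p_ground"] * (1.0 - r["unfairness"]))["lam"]

# hypothetical measurements from a penalty sweep on one problem instance
sweep = [
    {"lam": 0.5, "p_ground": 0.60, "unfairness": 0.35},
    {"lam": 1.0, "p_ground": 0.50, "unfairness": 0.10},
    {"lam": 2.0, "p_ground": 0.25, "unfairness": 0.05},
]
print(select_penalty(sweep))  # → 1.0
```

A dynamic variant, as the article suggests, would adjust the coefficient during annealing rather than between runs, but the same trade-off score could serve as its objective.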
Researchers demonstrated that increasing the penalty coefficient improves sampling fairness in weighted graph bipartitioning problems solved using quantum annealing. This is important because unfair sampling, where some solutions are favoured over others, limits the reliability of results from these systems. Analysis of instances with up to 12 spins showed that over 70% exhibited improved fairness with higher penalty coefficients, although this came at a cost to ground-state probability. The authors suggest future work could explore dynamically adjusting this coefficient during the annealing process to further optimise performance.
👉 More information
🗞 Unfair Sampling of Quantum Annealing in Weighted Graph Bipartitioning Problems
🧠 ArXiv: https://arxiv.org/abs/2604.11449
