Algorithms Now Learn from Limited Data to Avoid Repeating Past Mistakes

Scientists at Tohoku University, in collaboration with the Institute of Science Tokyo, the Japan Research and Education Institute, Kumamoto University, and Sigma-i Co., have developed a new Bayesian optimisation method for efficiently identifying optimal solutions within complex, discrete search spaces. This is a significant challenge in fields such as machine learning and quantum computing, where the number of possible solutions grows combinatorially. Reo Shikanai and Masayuki Ohzeki detail their approach, which overcomes limitations in existing techniques by addressing the issue of search stagnation through adaptive selection of acquisition functions, dynamically balancing exploitation of known promising solutions with exploration of new possibilities. Numerical experiments utilising Quadratic Unconstrained Binary Optimisation and Higher-order Unconstrained Binary Optimisation problems demonstrate that their hybrid approach consistently outperforms conventional methods, identifying solutions with improved objective values and highlighting the crucial role of representational capacity in sparse surrogate models for quantum annealing applications.

Dynamic Bayesian optimisation overcomes stagnation in combinatorial problem-solving

Objective values improved by 17.3% compared with a random-point addition strategy on Quadratic Unconstrained Binary Optimisation (QUBO) and Higher-order Unconstrained Binary Optimisation (HUBO) problems. This advancement arises from a novel hybrid method that combines Bayesian Optimisation of Combinatorial Structures (BOCS) with a Gaussian process, activated dynamically when BOCS encounters search stagnation. The core issue addressed is ‘revisit behaviour’: the tendency of Bayesian optimisation algorithms to repeatedly propose solutions that have already been evaluated, particularly as the number of observations increases. This is a common problem in combinatorial optimisation, where the search space is discrete and vast. BOCS, while effective with limited data, suffers from this stagnation as it accumulates information. The team’s intervention prevents these repetitive evaluations, allowing more efficient exploration of the solution space: the Gaussian process acts as a ‘rescue’ mechanism, generating diverse candidate solutions when BOCS’s performance plateaus.
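The paper does not publish its stagnation criterion, but the ‘revisit behaviour’ it describes can be made concrete with a small, purely illustrative sketch: count consecutive proposals that land on already-evaluated points, and flag stagnation once a threshold is crossed (the function name and threshold value here are assumptions, not taken from the paper).

```python
def detect_revisit_stagnation(proposals, threshold=3):
    """Return True once `threshold` consecutive proposals revisit
    points that have already been evaluated.

    `proposals` is an iterable of binary vectors (tuples/lists of 0/1).
    The threshold is illustrative; a real implementation would tune it.
    """
    seen = set()          # binary vectors evaluated so far
    consecutive = 0       # run length of repeated proposals
    for x in proposals:
        key = tuple(x)
        if key in seen:
            consecutive += 1
            if consecutive >= threshold:
                return True   # search has stalled: time to 'rescue'
        else:
            consecutive = 0
            seen.add(key)     # genuinely new point, run resets
    return False
```

In a hybrid loop of the kind the paper describes, a True result would be the trigger for handing candidate generation over to the Gaussian-process component.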

Adaptively selecting among multiple Lower Confidence Bound (LCB) acquisition functions balances exploration of novel solutions with exploitation of promising areas, progressing searches within ‘Hamming-distance neighbourhoods’ rather than simply identifying low-energy points. Acquisition functions are critical components of Bayesian optimisation, guiding the search towards areas likely to yield improved solutions. The LCB function, in particular, prioritises solutions with high uncertainty, encouraging exploration; by employing multiple LCB functions with varying parameters, the method achieves a more nuanced balance between exploration and exploitation. The concept of Hamming-distance neighbourhoods is also important: it refers to solutions that differ in only a few variables, representing a local region around a given solution. Progressing searches within these neighbourhoods allows incremental improvements and avoids getting trapped in local optima; this subtler strategy, focused on nearby improvements rather than solely on the lowest-energy points, could be particularly valuable on highly complex and computationally expensive optimisation landscapes. Further analyses stress the importance of representational capacity in sparse surrogate models, particularly for applications in quantum annealing, where efficient optimisation is critical. Quantum annealing is a metaheuristic for finding the global minimum of an objective function by exploiting quantum fluctuations, and sparse surrogate models are used to approximate that function, reducing the computational cost of evaluation. The representational capacity of these models, their ability to accurately capture the underlying function, is crucial to the success of the optimisation process.
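As a rough illustration of how these two ideas fit together (this is not the authors’ code), one can enumerate the Hamming-distance-1 neighbours of a binary vector and, for several exploration weights, pick the neighbour minimising the LCB score mu − kappa·sigma. Here `mu` and `sigma` stand in for a Gaussian-process posterior mean and standard deviation, and the `kappas` values are arbitrary.

```python
import itertools

def hamming_neighbours(x, radius=1):
    """Yield all binary vectors at exactly `radius` bit-flips from x."""
    n = len(x)
    for idxs in itertools.combinations(range(n), radius):
        y = list(x)
        for i in idxs:
            y[i] = 1 - y[i]       # flip one chosen bit
        yield tuple(y)

def lcb_candidates(x, mu, sigma, kappas=(0.5, 1.0, 2.0)):
    """For each exploration weight kappa, return the neighbour of x
    minimising the Lower Confidence Bound mu(y) - kappa * sigma(y)
    (minimisation setting). Larger kappa favours uncertain points."""
    picks = []
    for k in kappas:
        best = min(hamming_neighbours(x), key=lambda y: mu(y) - k * sigma(y))
        picks.append(best)
    return picks
```

Because each kappa weights uncertainty differently, the candidates can disagree: a small kappa exploits the lowest predicted mean, while a large kappa chases high-variance neighbours, which is exactly the exploration–exploitation spread the adaptive selection draws on.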

Addressing revisit behaviour in Bayesian Optimisation for enhanced combinatorial exploration

Efficient optimisation techniques are increasingly vital for navigating complex problems across diverse fields, from materials design and drug discovery to machine learning algorithm refinement and financial modelling. The computational demands of these problems necessitate algorithms that can efficiently identify optimal or near-optimal solutions within a reasonable timeframe. This work offers a refinement to Bayesian Optimisation of Combinatorial Structures, a method previously hampered by a tendency to revisit previously explored solutions as data accumulates. This revisit behaviour reduces the efficiency of the search, as resources are wasted evaluating solutions that are already known to be suboptimal. Reo Shikanai and Masayuki Ohzeki led the development of this new approach, building upon existing Bayesian optimisation frameworks to address this specific limitation. The underlying principle is to dynamically adapt the search strategy based on the observed performance of the algorithm, switching to a different approach when stagnation is detected.

Although the team successfully demonstrated improvement over a simple random-point addition strategy, a baseline approach where new solutions are generated randomly, the paper acknowledges a key gap in its benchmarking against more sophisticated optimisation algorithms currently available, such as differential evolution or particle swarm optimisation. While the 17.3% improvement is significant, a more comprehensive comparison would provide a clearer understanding of the method’s relative performance. In these problems, the number of potential solutions expands rapidly, making efficient search strategies essential. For instance, a problem with 100 binary variables has 2^100 possible solutions, making exhaustive search impractical. The method activates the Gaussian process specifically when the initial Bayesian approach shows signs of stagnation, preventing repetitive searches and improving efficiency. This dynamic activation is a key feature, ensuring that the more computationally expensive Gaussian process is only used when it is likely to provide a significant benefit. Multiple acquisition functions are adaptively selected, balancing exploration of entirely new solutions with refinement of those already identified as promising, and progressing searches within close solution neighbourhoods. This adaptive selection process allows the algorithm to tailor its search strategy to the specific characteristics of the problem, further enhancing its efficiency and effectiveness. The use of Hamming-distance neighbourhoods promotes a more localised search, focusing on incremental improvements and reducing the risk of getting stuck in local optima.
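To ground the combinatorial scale: a QUBO instance assigns each binary vector x the energy x^T Q x, and the search space doubles with every added variable, so brute force is only feasible for tiny instances. The sketch below (with a made-up 2×2 matrix; not drawn from the paper) shows a QUBO objective and an exhaustive minimiser that is exactly what becomes impossible at 100 variables.

```python
def qubo_energy(x, Q):
    """Energy of binary vector x under QUBO matrix Q: sum_ij Q[i][j]*x_i*x_j."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def brute_force_minimum(Q):
    """Exhaustively search all 2**n binary vectors. Only viable for small n:
    at n = 100 the space already exceeds 10**30 candidates."""
    n = len(Q)
    best = min(range(2 ** n),
               key=lambda b: qubo_energy([(b >> i) & 1 for i in range(n)], Q))
    return [(best >> i) & 1 for i in range(n)]
```

HUBO problems generalise this by allowing products of three or more variables in the objective, which enlarges the model class the surrogate must represent.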

The researchers developed a hybrid Bayesian optimisation method that improves the efficiency of searching for solutions to complex problems. This new approach dynamically switches between standard Bayesian optimisation and a Gaussian process when the search becomes repetitive, preventing stagnation and allowing for more effective exploration of potential solutions. Experiments using Quadratic Unconstrained Binary Optimisation and Higher-order Unconstrained Binary Optimisation demonstrated that this method found solutions with better objective values than a random-point addition strategy, achieving a 17.3% improvement. The authors note that further benchmarking against other optimisation algorithms would provide a more complete understanding of its performance.

👉 More information
🗞 Improving search efficiency via adaptive acquisition function selection in discrete black-box optimization
🧠 arXiv: https://arxiv.org/abs/2605.10856

The Neuron

With a keen intuition for emerging technologies, The Neuron brings over five years of deep expertise to the AI conversation. Coming from roots in software engineering, they have witnessed firsthand the transformation from traditional computing paradigms to today's ML-powered landscape. Their hands-on experience implementing neural networks and deep learning systems for Fortune 500 companies has provided insights few tech writers possess. From developing recommendation engines that drive billions in revenue to optimising computer vision systems for manufacturing giants, The Neuron doesn't just write about machine learning—they have shaped its real-world applications across industries. Having built systems used across the globe by millions of users, that deep technological base informs their writing on current and future technologies, whether AI or quantum computing.
