Researchers are tackling the complex challenge of multi-objective optimisation, where competing goals must be balanced under limited resources, often with only 'black-box' access to the problem itself. Leonard Papenmeier from the University of Münster and Petru Tighineanu from Robert Bosch GmbH, along with colleagues, present a novel approach called SMOG (Scalable Meta-Learning for Multi-Objective Bayesian Optimisation), which leverages historical data from similar tasks to accelerate the process. This work is significant because it uniquely combines meta-learning with Bayesian optimisation for multi-objective problems, offering a scalable solution that explicitly models correlations between objectives and propagates uncertainty in a principled manner, potentially leading to substantial improvements in efficiency and performance across a range of applications.
This innovative approach propagates uncertainty from historical data into the surrogate model in a principled manner, improving the reliability of predictions. After conditioning on metadata from related tasks, SMOG yields a closed-form target-task prior, augmented by a flexible residual multi-output kernel, which captures complex relationships between objectives. Consequently, the resulting surrogate model seamlessly integrates with standard multi-objective Bayesian optimisation acquisition functions, streamlining the optimisation process.
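The mechanics behind a closed-form target-task prior can be illustrated with standard Gaussian process conditioning. The sketch below is a minimal single-objective numpy illustration, not the paper's implementation: the kernel choice, data, and the weight of the residual term are all assumptions made for the example. Conditioning a GP on meta-task observations yields a closed-form mean and covariance that then serve as the prior for the target task, with a residual kernel added on top for target-specific deviations.

```python
import numpy as np

def rbf(A, B, ls=0.5):
    """Squared-exponential kernel (illustrative choice)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

# Hypothetical observations from a related (meta) task.
X_meta = np.array([[0.1], [0.4], [0.7], [0.9]])
y_meta = np.sin(6 * X_meta[:, 0])

# Candidate inputs on the target task.
X_tgt = np.linspace(0, 1, 5)[:, None]

noise = 1e-4
K_mm = rbf(X_meta, X_meta) + noise * np.eye(len(X_meta))
K_tm = rbf(X_tgt, X_meta)
K_tt = rbf(X_tgt, X_tgt)

# Closed-form GP conditioning: the posterior given the meta-data
# becomes the prior for the target task.
alpha = np.linalg.solve(K_mm, y_meta)
prior_mean = K_tm @ alpha
prior_cov = K_tt - K_tm @ np.linalg.solve(K_mm, K_tm.T)

# A residual kernel adds capacity for target-specific behaviour, so
# metadata uncertainty is propagated rather than collapsed to a point.
prior_cov += 0.1 * rbf(X_tgt, X_tgt, ls=0.2)
print(prior_mean.shape, prior_cov.shape)  # (5,) (5, 5)
```

Because both the conditioning and the residual term stay within the GP family, the resulting prior remains closed-form and plugs directly into a standard Bayesian optimisation loop.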
This study unveils a probabilistic framework that closes a gap in existing methods by meta-learning a correlated multi-objective Gaussian process surrogate while maintaining principled Bayesian uncertainty propagation. The model constructs a structured joint Gaussian process prior across both the meta-tasks and the target task, so metadata uncertainty flows coherently into the target surrogate. The resulting target-task surrogate can be readily incorporated into standard MOBO pipelines, such as those using hypervolume-based acquisition optimisation. Experiments demonstrate that SMOG effectively leverages historical data to accelerate multi-objective optimisation in scenarios where tasks are related but heterogeneous, and the method efficiently exploits cross-objective correlations, which is particularly valuable when evaluations are expensive and data is scarce. This work opens up new possibilities for applications in industrial process tuning, materials discovery, and machine-learning system design, where balancing competing objectives is crucial for achieving optimal performance and resource utilisation. To enhance scalability, the study employed a hierarchical, parallel training strategy whose caching of learned meta-task information allows that information to be reused efficiently, reducing the computational burden of subsequent optimisation tasks.
Scientists harnessed this surrogate model within standard multi-objective Bayesian optimisation acquisition functions, seamlessly integrating the meta-learned prior into existing optimisation pipelines. SMOG's core innovation lies in its ability to model the relationships between multiple objectives simultaneously: the team engineered a flexible residual multi-output kernel that augments the target-task prior, allowing a more nuanced treatment of objective correlations. This construction avoids the pitfalls of treating objectives independently, which can waste information when evaluations are costly.
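One common construction for correlating objectives in a multi-output GP, and a plausible way to read "multi-output kernel" here, is the intrinsic coregionalisation model (ICM): a positive semi-definite task-covariance matrix is Kronecker-multiplied with an input kernel. The numbers below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def rbf(A, B, ls=0.5):
    """Squared-exponential input kernel (illustrative choice)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

X = np.random.default_rng(0).random((4, 2))  # 4 inputs in 2 dimensions
K_x = rbf(X, X)                              # input covariance, (4, 4)

# Task covariance for 2 objectives: B = L @ L.T guarantees PSD.
L = np.array([[1.0, 0.0],
              [0.8, 0.6]])                   # illustrative correlation structure
B = L @ L.T

# ICM covariance over all (objective, input) pairs.
K_multi = np.kron(B, K_x)                    # shape (8, 8)

# The off-diagonal blocks B[0, 1] * K_x couple the two objectives, so an
# observation of one objective informs predictions of the other.
print(K_multi.shape)  # (8, 8)
```

In the independent-objectives baseline, B would be diagonal and the off-diagonal blocks vanish, which is exactly the information loss the residual multi-output construction is meant to avoid.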
Experiments demonstrate that the approach effectively integrates historical data, propagating uncertainty appropriately and enabling faster convergence towards Pareto-optimal solutions. Furthermore, the research prioritised Bayesian uncertainty awareness, ensuring that the model accounts for the inherent uncertainty in both the historical data and the target task. This is achieved through a fully Bayesian treatment of all task data, providing a robust and reliable framework for meta-learning in low-data regimes. The resulting surrogate model delivers a principled way to balance exploration and exploitation, crucial for efficient multi-objective Bayesian optimisation and unlocking significant performance gains across diverse applications.
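The article does not spell out how the surrogate's uncertainty drives candidate selection, but one standard mechanism is a confidence-bound acquisition over a random scalarisation of the objectives: the posterior mean supplies exploitation, the posterior standard deviation supplies exploration. Everything below (function name, beta, the random data) is an assumption for illustration, not SMOG's acquisition function.

```python
import numpy as np

def scalarized_lcb(means, stds, weights, beta=2.0):
    """Lower confidence bound on a weighted sum of objectives (minimisation).

    means, stds: (n_candidates, n_objectives) surrogate predictions.
    weights: (n_objectives,) scalarisation weights summing to one.
    """
    # Optimism under uncertainty: subtract beta * std so that uncertain
    # candidates look attractive, balancing exploration and exploitation.
    return means @ weights - beta * (stds @ weights)

rng = np.random.default_rng(1)
means = rng.random((6, 2))        # hypothetical surrogate posterior means
stds = 0.1 * rng.random((6, 2))   # hypothetical posterior standard deviations
w = rng.dirichlet(np.ones(2))     # random scalarisation direction
acq = scalarized_lcb(means, stds, w)
best = int(np.argmin(acq))        # candidate to evaluate next
print("next candidate index:", best)
```

Drawing a fresh scalarisation direction each iteration sweeps out different regions of the Pareto front over the course of the optimisation.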
SMOG boosts initial multi-objective Bayesian optimisation performance
Scientists developed SMOG, a scalable meta-learning algorithm for multi-objective black-box optimisation problems, demonstrating improved sample efficiency in initial Bayesian optimisation iterations. The research focused on leveraging observations from related tasks and modelling inter-task correlations to construct an informative target-task posterior. Experiments on the two-objective Hartmann benchmark revealed that methods not utilising metadata initially struggled, while SMOG achieved a significant initial speedup and outperformed Ind.-ScaML-GP, which lacks the ability to model task correlations. Specifically, SMOG attained a normalised hypervolume of approximately 0.4 at 30 BO iterations, exceeding the performance of Ind.-GP and MO-GP, which remained below 0.3 at the same iteration count.
Results demonstrate that SMOG consistently performs well across various benchmarks, including the four-objective Hartmann benchmark where it maintained competitive aggregated performance. On the Terrain benchmark with two objectives, averaged over three target tasks and 50 random restarts, SMOG achieved a normalised hypervolume of approximately 0.6 at 30 BO iterations, comparable to Ind.-ABLR. Further analysis on the four-objective Terrain benchmark showed SMOG's robust performance, maintaining competitiveness despite Ind.-ABLR's decline in effectiveness. The team measured hypervolume (HV) as a key metric, evaluating the Pareto front built from solutions found at each BO iteration.
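The hypervolume metric is easy to compute exactly in the two-objective case: take the non-dominated points, sort them, and sum the rectangles they dominate relative to a reference point. A minimal illustrative implementation, assuming both objectives are minimised and using a made-up front:

```python
import numpy as np

def pareto_front(points):
    """Non-dominated subset, assuming every objective is minimised."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(pts) if j != i
        )
        if not dominated:
            keep.append(p)
    return np.array(keep)

def hypervolume_2d(front, ref):
    """Area dominated by a 2-objective front relative to reference point `ref`."""
    front = front[np.argsort(front[:, 0])]   # sort by first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in front:                     # sum non-overlapping rectangles
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

pts = [(0.2, 0.8), (0.5, 0.5), (0.8, 0.2), (0.7, 0.7)]  # (0.7, 0.7) is dominated
front = pareto_front(pts)
hv = hypervolume_2d(front, ref=(1.0, 1.0))
print(hv)  # 0.37 up to floating-point rounding
```

Tracking this quantity after each BO iteration, normalised against a known or best-found front, yields exactly the "normalised hypervolume" curves reported above.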
The study on the Protein Structure task of HPOBench showed that SMOG, alongside Ind.-ABLR, significantly outperformed Ind.-GP and MO-GP, achieving higher initial solution quality. Data shows that SMOG and Ind.-ABLR exhibited the strongest performance on this benchmark, indicating the effectiveness of metadata utilisation. Measurements confirm that SMOG's ability to model task correlations contributes to its superior performance, particularly in scenarios where related tasks provide valuable information. The research recorded that methods leveraging metadata consistently outperformed those that did not, highlighting the importance of incorporating prior knowledge into the optimisation process. The team observed that MO-TPE, while initially competitive, did not maintain its performance against SMOG and other metadata-leveraging methods. This work introduces a robust algorithm competitive on every benchmark while other methods struggled on at least one problem, demonstrating its versatility and potential for real-world applications.
👉 More information
🗞 SMOG: Scalable Meta-Learning for Multi-Objective Bayesian Optimization
🧠 ArXiv: https://arxiv.org/abs/2601.22131
