Kernel learning for regression receives a novel boost from quantum annealing in research led by Yasushi Hasegawa and Masayuki Ohzeki, from Tohoku University and associated institutions. Their work proposes a new framework integrating quantum annealing not simply as a sampling method, but as a core component defining the learned kernel itself. By modelling spectral distributions with restricted Boltzmann machines and utilising quantum annealing to generate samples, the researchers create data-adaptive kernels for regression tasks. This approach addresses limitations of existing random Fourier feature approximations, improving kernel contrast and ultimately demonstrating reduced training loss and root mean squared error across benchmark datasets compared to traditional Gaussian kernels. The findings suggest a promising pathway for harnessing quantum technologies to enhance machine learning algorithms.
Quantum Annealing for Kernel Function Construction
Quantum annealing (QA) was developed for combinatorial optimisation, but practical QA devices operate at finite temperature and under noise. Consequently, their outputs can be regarded as stochastic samples close to a Gibbs-Boltzmann distribution. This study proposes a QA-in-the-loop kernel learning framework that integrates QA not merely as a substitute for Markov-chain Monte Carlo sampling, but as a component that directly determines the learned kernel for regression. Based on Bochner's theorem, a shift-invariant kernel is represented as an expectation over a spectral distribution, and random Fourier features (RFF) are used to approximate that expectation.
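The Bochner-based construction described above can be written out explicitly. The following is the standard statement with the usual real-valued RFF estimator (the notation is ours and need not match the paper's):

```latex
% Bochner's theorem: a continuous shift-invariant kernel is positive
% definite iff it is the Fourier transform of a nonnegative spectral
% density p(omega):
k(x - x') = \int_{\mathbb{R}^d} p(\omega)\, e^{\,i\,\omega^{\top}(x - x')}\, d\omega
          = \mathbb{E}_{\omega \sim p}\!\left[ e^{\,i\,\omega^{\top}(x - x')} \right]
% Random Fourier features replace the expectation by a Monte Carlo
% average over samples omega_1, ..., omega_D ~ p with random phases b_j:
k(x, x') \approx \frac{2}{D} \sum_{j=1}^{D}
    \cos\!\left(\omega_j^{\top} x + b_j\right)\,
    \cos\!\left(\omega_j^{\top} x' + b_j\right)
```

Learning the kernel then amounts to learning the spectral density from which the frequencies are drawn, which is where the QA samples enter.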
The research focuses on leveraging the stochastic nature of QA to construct novel kernel functions for machine learning applications. Rather than simply using QA to find solutions to optimisation problems, the approach uses the annealing process to define the characteristics of a kernel, influencing how data is mapped and processed. This differs from conventional kernel methods where the kernel is pre-defined and fixed during the learning process. By integrating QA directly into the kernel learning stage, the framework aims to improve regression performance, particularly in scenarios where data exhibits complex, non-linear relationships.
Specifically, the contribution lies in demonstrating how the outputs of a QA device, treated as samples from a Gibbs-Boltzmann distribution, can be used to estimate the spectral distribution that defines a shift-invariant kernel. This rests on Bochner's theorem, which establishes the connection between shift-invariant kernel functions and spectral distributions. Random Fourier features then allow efficient computation of the kernel, making the approach scalable to larger datasets. The framework, tested with simulated data, offers a new way to harness quantum annealing within a classical machine learning pipeline. Sigma-i Co., Ltd. (Minato, Tokyo 108-0075, Japan) is among the institutions affiliated with this research.
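As a concrete stand-in for the annealer, the Gibbs-Boltzmann sampling that QA approximates can be reproduced classically with block Gibbs sampling of a small binary RBM. This sketch (function names and parameter scales are our own, not taken from the paper) only illustrates where the stochastic spectral samples come from:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rbm_gibbs_samples(W, a, b, n_samples=1000, burn_in=200, rng=0):
    """Block Gibbs sampling of a binary RBM, p(v,h) ∝ exp(a·v + b·h + v·W·h).

    A classical stand-in for the annealer: QA reads at finite temperature
    are treated as approximate samples from this Boltzmann distribution.
    """
    rng = np.random.default_rng(rng)
    nv, nh = W.shape
    v = rng.integers(0, 2, nv).astype(float)
    out = []
    for t in range(burn_in + n_samples):
        # alternate conditional updates of hidden and visible layers
        h = (rng.random(nh) < sigmoid(b + v @ W)).astype(float)
        v = (rng.random(nv) < sigmoid(a + W @ h)).astype(float)
        if t >= burn_in:
            out.append(v.copy())
    return np.array(out)

rng = np.random.default_rng(0)
W = rng.normal(0, 0.5, size=(6, 4))
a = rng.normal(0, 0.1, 6)
b = rng.normal(0, 0.1, 4)
samples = rbm_gibbs_samples(W, a, b)
print(samples.mean(0))   # empirical marginals of the visible units
```

On a real device the inner loop is replaced by a batch of annealer reads; the rest of the pipeline only sees a matrix of binary samples either way.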
Quantum Kernel Learning via Restricted Boltzmann Machines
Alongside a detailed description of the experiment and results, this section offers a comprehensive summary of the kernel learning process using the quantum annealer. The key points:

Objective:
- Develop a kernel learning method that uses a quantum annealer (QA) for regression tasks.
- Explore whether QA samples can be used to build a data-adaptive kernel.

Methodology:
1. Spectral model: the spectral distribution is modelled with a restricted Boltzmann machine (RBM), trained to fit the data distribution.
2. Quantum sampling: RBM states are sampled on the QA device, and the discrete samples are mapped to continuous frequencies via a Gaussian-Bernoulli transformation.
3. Kernel training: the resulting kernel is trained end-to-end by minimising the Nadaraya-Watson (NW) mean squared error, with squared kernel weights used to avoid near-zero denominators in the finite-sample RFF approximation.
4. Local linear regression: local linear regression is evaluated with the same squared kernel weights.

Results:
- Kernel learning reduced the training objective.
- Training produced clear changes in the kernel matrix, indicating that the similarity structure adapted to the data.
- The learned kernel improved R2 and RMSE over a fixed Gaussian kernel.
- Increasing the number of random features at inference gave further accuracy gains.
- Local linear regression can add gains where boundary bias or local irregularity affects NW regression.

Implications:
- QA-based sampling can be integrated into a complete kernel learning pipeline, yielding an adaptive kernel useful for regression tasks.

Future directions:
- Analyse the relationship between QA samples and device temperature/noise characteristics.
- Scale the spectral model to larger RBMs.
- Extend the method to classification and uncertainty-aware regression.
- Develop a more detailed theoretical understanding of generalisation under leave-one-out training with random RFF kernels.

Overall, this work is a valuable contribution to quantum machine learning, demonstrating that quantum annealers can be used to learn a data-adaptive kernel that improves regression performance.
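The training pipeline summarised above can be condensed into a minimal classical sketch. The frequencies here are drawn from a plain Gaussian rather than a QA-sampled RBM, and all names (`rff_features`, `nw_predict`) are ours, but the squared-weight NW prediction mirrors the objective described:

```python
import numpy as np

def rff_features(X, W, b):
    """Map inputs to random Fourier features for frequency samples W."""
    D = W.shape[1]
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

def nw_predict(Xtr, ytr, Xte, W, b):
    """Nadaraya-Watson prediction with squared RFF kernel weights.

    Squaring k(x, x_i) keeps every weight nonnegative, avoiding the
    near-zero denominators a signed finite-sample RFF kernel can produce.
    """
    Ztr, Zte = rff_features(Xtr, W, b), rff_features(Xte, W, b)
    K = Zte @ Ztr.T            # approximate kernel, entries can be negative
    Wgt = K ** 2               # squared weights: nonnegative, higher contrast
    return (Wgt @ ytr) / Wgt.sum(axis=1)

rng = np.random.default_rng(0)
Xtr = rng.uniform(-3, 3, size=(200, 1))
ytr = np.sin(Xtr[:, 0]) + 0.1 * rng.normal(size=200)
Xte = np.linspace(-3, 3, 50)[:, None]
D = 300
Wf = rng.normal(0, 2.0, size=(1, D))     # stand-in spectral samples
bf = rng.uniform(0, 2 * np.pi, D)
pred = nw_predict(Xtr, ytr, Xte, Wf, bf)
print(np.sqrt(np.mean((pred - np.sin(Xte[:, 0])) ** 2)))  # test RMSE
```

In the paper's framework, training would additionally backpropagate the NW mean squared error into the spectral model that generated `Wf`; this sketch only shows the forward prediction.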
Quantum Kernel Learning via Direct Annealer Integration
Scientists achieved a breakthrough in kernel learning for regression by integrating a quantum annealer (QA) directly into the learning process. The research team developed a QA-in-the-loop framework, moving beyond using QA simply as a sampling method and instead employing it to define the learned kernel itself. Based on Bochner’s theorem, the study models the spectral distribution with a restricted Boltzmann machine (RBM), generating discrete samples using the D-Wave Advantage system and mapping them to continuous frequencies through a Gaussian-Bernoulli transformation. Experiments demonstrate a decrease in training loss, accompanied by structural changes observed within the kernel matrix, indicating successful kernel adaptation to the data.
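One plausible reading of the Gaussian-Bernoulli step, sketched under our own parameterisation (the paper's exact convention may differ): each binary annealer read fixes the hidden units of a Gaussian-Bernoulli RBM, and a continuous frequency vector is then drawn from the resulting Gaussian conditional over the visible units.

```python
import numpy as np

def binary_to_frequencies(H, W, mu, sigma, rng=0):
    """Map binary hidden-unit samples to continuous frequency vectors.

    In a Gaussian-Bernoulli RBM the visible units are continuous and,
    conditioned on a binary hidden vector h, Gaussian-distributed:
        v | h ~ N(mu + sigma**2 * (W @ h), sigma**2 * I).
    Drawing one v per annealer read yields continuous frequencies for
    the random Fourier features.
    """
    rng = np.random.default_rng(rng)
    means = mu + sigma**2 * (H @ W.T)          # one conditional mean per read
    return means + sigma * rng.normal(size=means.shape)

rng = np.random.default_rng(1)
n_hidden, d = 8, 3
H = rng.integers(0, 2, size=(100, n_hidden)).astype(float)  # stand-in QA reads
W = rng.normal(0, 0.3, size=(d, n_hidden))
mu = np.zeros(d)
freqs = binary_to_frequencies(H, W, mu, sigma=1.0)
print(freqs.shape)   # (100, 3): one continuous frequency per read
```

The mixture of Gaussians induced by the binary samples is what lets the learned spectral distribution deviate from a single Gaussian, which is exactly the behaviour reported on the concrete dataset.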
The team measured performance across multiple benchmark regression datasets, including bodyfat, Mackey-Glass, energy efficiency, and concrete compressive strength. Results demonstrate improvements in both the coefficient of determination (R2) and root mean squared error (RMSE) compared to baseline Nadaraya-Watson (NW) regression with a Gaussian kernel. Specifically, on the bodyfat dataset, the learned kernel achieved an R2 of 0.941 and an RMSE of 0.005 on the test set, using 1000 random features at inference.
Further analysis on the energy efficiency dataset yielded an R2 of 0.985 and an RMSE reported as 0.000 at the displayed precision, again with 1000 random features. Measurements confirm that increasing the number of random features at inference consistently enhances accuracy, suggesting that the approximation error inherent in using a finite number of features can be mitigated. The study also investigated local linear regression (LLR) applied selectively to endpoint queries, further refining performance.
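The claim that more random features at inference reduce approximation error is easy to check in isolation: the Monte Carlo error of an RFF kernel estimate shrinks roughly like 1/sqrt(D). A small self-contained check against a plain Gaussian kernel (not the learned one; helper name is ours):

```python
import numpy as np

def rff_kernel_error(D, n=40, d=3, trials=5, rng=0):
    """Mean (over trials) of max |RFF kernel - exact Gaussian kernel|."""
    rng = np.random.default_rng(rng)
    X = rng.normal(size=(n, d))
    sq = ((X[:, None] - X[None]) ** 2).sum(-1)
    K = np.exp(-sq / 2.0)                      # exact RBF kernel, sigma = 1
    errs = []
    for _ in range(trials):
        W = rng.normal(size=(d, D))            # spectral samples for RBF
        b = rng.uniform(0, 2 * np.pi, D)
        Z = np.sqrt(2.0 / D) * np.cos(X @ W + b)
        errs.append(np.abs(Z @ Z.T - K).max())
    return float(np.mean(errs))

for D in (100, 1000, 10000):
    print(D, rff_kernel_error(D))   # error shrinks roughly like 1/sqrt(D)
```

This is consistent with the reported behaviour: frequencies can be reused with a larger feature count at inference to trade compute for accuracy.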
For the concrete compressive strength dataset, the KLNW approach with endpoint LLR achieved an RMSE of 6.558 with an R2 of 0.837, compared with an RMSE of 9.653 and an R2 of 0.912 recorded for standard NW regression.
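Why endpoint queries benefit from LLR: Nadaraya-Watson is a local constant fit and picks up first-order bias at a data boundary, while a local linear fit cancels it. A one-dimensional illustration with arbitrary data and bandwidth of our own choosing (not the paper's implementation):

```python
import numpy as np

def local_linear_predict(Xtr, ytr, x0, weights):
    """Kernel-weighted local linear fit evaluated at a single query x0.

    Solving weighted least squares on [1, x - x0] makes the intercept
    the prediction at x0; the linear term absorbs the boundary slope
    that biases a plain NW (local constant) estimate.
    """
    Xc = Xtr - x0                          # centre covariates at the query
    A = np.hstack([np.ones((len(Xtr), 1)), Xc])
    Wd = np.diag(weights)
    beta = np.linalg.solve(A.T @ Wd @ A, A.T @ Wd @ ytr)
    return beta[0]

rng = np.random.default_rng(0)
Xtr = rng.uniform(0, 1, size=(100, 1))
ytr = 2.0 * Xtr[:, 0] + 0.05 * rng.normal(size=100)   # truth: y(0) = 0
x0 = np.array([0.0])                                   # boundary query
w = np.exp(-((Xtr[:, 0] - x0[0]) ** 2) / (2 * 0.1**2)) ** 2  # squared weights
nw = (w @ ytr) / w.sum()                   # local constant (NW) estimate
llr = local_linear_predict(Xtr, ytr, x0, w)
print(nw, llr)   # the local linear fit should land much nearer 0.0
```

Since all training points lie to one side of `x0`, NW averages upward-sloping values and overshoots; applying LLR only at such endpoint queries, as the study does, targets exactly this failure mode.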
Visualisation of the learned spectral distribution on the concrete compressive strength dataset revealed a deviation from a single Gaussian distribution, confirming the ability of the approach to acquire a data-adaptive spectral structure. The research team utilized the D-Wave Advantage system, requesting R reads with an annealing time of 1μs, setting R = Ntrain/2 at each training iteration to generate RBM states. These findings demonstrate a significant advancement in kernel methods, offering a pathway to more accurate and efficient regression models.
Quantum Kernel Learning via Annealing Samples
This work introduces a novel kernel learning framework integrating quantum annealing (QA) into the kernel construction process for regression tasks. By modelling the spectral distribution of a shift-invariant kernel with a restricted Boltzmann machine (RBM), the researchers demonstrate that discrete samples generated by a QA device can be effectively mapped to continuous frequencies, forming random Fourier features. This approach allows the kernel to be learned directly from data, adapting the similarity structure to improve regression performance. Experiments across several benchmark datasets show a reduction in training loss and improved R-squared and root mean squared error compared to baseline Gaussian-kernel Nadaraya-Watson regression.
The use of nonnegative squared-kernel weights was found to enhance performance, primarily by preventing near-zero denominators in the regression calculations and also by increasing the contrast of kernel weights. Further gains were observed by increasing the number of random features used during inference, suggesting reduced approximation variance. The authors acknowledge limitations related to the scaling of the RBM model for higher-dimensional problems and the need for a more systematic analysis of the relationship between QA hardware characteristics and kernel quality. Future research will focus on addressing these points, including exploring improved RBM embeddings and extending the framework to classification and uncertainty-aware regression. They also suggest investigating sparse or structured RBMs to improve scalability.
👉 More information
🗞 Kernel Learning for Regression via Quantum Annealing Based Spectral Sampling
🧠 ArXiv: https://arxiv.org/abs/2601.08724
