Wladimir Silva and colleagues at North Carolina State University have developed an adaptive quantum algorithm for matrix multiplication that reduces the complexity of inner product calculations to O(log N) by using Quantum Random Access Memory (QRAM). Their new “Adaptive Stacking” framework dynamically adjusts the algorithm’s execution, enabling compatibility with both near-term and fault-tolerant quantum systems and offering a flexible range of time complexities. Validation through a Quantum Machine Learning (QML) simulation on the MNIST dataset shows 96% accuracy and suggests a pathway towards substantially more efficient high-dimensional linear algebra operations.
Demonstrated numerical stability and MNIST classification accuracy with a dynamically adjusted execution pattern
AQ-Stacker, a new hybrid quantum-classical algorithm, achieves 96% accuracy on the MNIST dataset, exceeding the performance of previous methods with improved numerical stability in a practical quantum machine learning simulation. Realising super-classical efficiency in high-dimensional linear algebra has long been hindered by the difficulty of reconciling quantum speedups with the limitations of current quantum hardware. AQ-Stacker addresses this challenge by dynamically adjusting its execution pattern to optimise performance. This dynamic adjustment is crucial because the performance of quantum algorithms is heavily influenced by the number and quality of available qubits, as well as the coherence time: the duration for which qubits maintain their quantum state. The MNIST dataset, comprising 70,000 labelled images of handwritten digits, serves as a standard benchmark for evaluating machine learning algorithms, particularly those dealing with image recognition and classification; achieving high accuracy on it demonstrates that the algorithm can handle complex data representations and perform meaningful computations.
Quantum Random Access Memory (QRAM) reduces the complexity of computing vector inner products to O(log N), enabling a tunable time-complexity range potentially reaching O(N²) on fault-tolerant systems. The use of QRAM allows quantum states to be prepared in O(log N) time, significantly reducing the data-loading bottleneck compared with traditional methods, which require O(N) time for similar operations. QRAM is a theoretical model of quantum memory that allows data to be accessed in superposition, so a single query can address many memory cells in parallel, a significant advantage over classical RAM, which returns one address per query. However, building practical QRAM remains a substantial technological hurdle. Moreover, AQ-Stacker’s “Adaptive Stacking” framework dynamically adjusts computations, switching between sequential and parallel processing based on available qubit resources. This adaptive approach allows the algorithm to function effectively even with limited qubit availability, a common constraint on current quantum computing platforms. The algorithm balances the trade-off between computational speed and resource utilisation, maximising performance within the constraints of the hardware.
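The O(log N) state preparation attributed to QRAM amounts to amplitude encoding: a length-N vector is stored in the amplitudes of log₂ N qubits, with its norm kept on the classical side. A minimal numpy sketch of the resulting state (simulated classically, since QRAM itself is a theoretical model; the function name is illustrative, not from the paper):

```python
import numpy as np

def amplitude_encode(x):
    """Encode a classical vector as a normalised quantum state vector.

    A length-N vector (N a power of two) fits in log2(N) qubits;
    the norm is kept classically so magnitudes can be restored later.
    """
    x = np.asarray(x, dtype=float)
    norm = np.linalg.norm(x)
    if norm == 0:
        raise ValueError("cannot encode the zero vector")
    return x / norm, norm  # quantum amplitudes, classical norm

state, norm = amplitude_encode([3.0, 4.0, 0.0, 0.0])
# state = [0.6, 0.8, 0.0, 0.0] on 2 qubits; norm = 5.0
```

The returned norm is exactly the classical norm-tracking mentioned later: the quantum state only carries the vector's direction, so its magnitude must travel alongside it classically.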
AQ-Stacker’s logarithmic complexity and adaptive resource utilisation for matrix multiplication
The core innovation of AQ-Stacker is its “Adaptive Stacking” framework, which dynamically reconfigures execution from sequential to parallel processing based on available qubit resources. This adaptability theoretically enables a tunable time complexity, potentially reaching O(N²) on fault-tolerant systems, while accommodating the limitations of near-term hardware. The algorithm relies on the QRAM Input Model, assuming its ability to provide state preparation in O(log N) time, thereby decoupling data loading from computational logic. An O(N²) cost would be notable because the product of two N×N matrices itself contains N² entries, whereas naive classical multiplication requires O(N³) operations, so matching or beating classical costs on a quantum computer would represent a significant advancement. The decoupling of data loading is essential because data access is often the bottleneck in quantum algorithms; by assuming efficient QRAM, the algorithm focuses on optimising the computational steps themselves.
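The sequential-versus-parallel trade-off can be illustrated with a toy dispatcher that batches as many inner-product estimations as the qubit budget allows. This is only a sketch of the adaptive idea; the paper's actual control mechanism and parameter names are not specified here:

```python
import math

def choose_execution_mode(n_inner_products, qubits_available, qubits_per_test):
    """Toy resource dispatcher illustrating adaptive stacking:
    run as many inner-product estimations in parallel as the qubit
    budget allows, falling back to fully sequential execution when
    qubits are scarce. (Illustrative only, not the paper's mechanism.)
    """
    parallel_slots = max(1, qubits_available // qubits_per_test)
    rounds = math.ceil(n_inner_products / parallel_slots)
    mode = "sequential" if parallel_slots == 1 else f"parallel x{parallel_slots}"
    return mode, rounds

# 16 inner products, each estimation needing 5 qubits:
print(choose_execution_mode(16, 5, 5))    # scarce budget: ('sequential', 16)
print(choose_execution_mode(16, 40, 5))   # ample budget:  ('parallel x8', 2)
```

The same workload thus spans a range of wall-clock costs depending on hardware, which is the sense in which the framework offers a tunable time-complexity range.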
A detailed mathematical mapping transforms classical vectors into quantum states, using the Hadamard test to estimate inner products and preserving magnitude information through classical norm-tracking. The Hadamard test is a quantum algorithm for estimating the overlap between two quantum states; in AQ-Stacker it is employed to calculate the inner product of vectors represented as quantum states efficiently. Because quantum states are necessarily normalised, classical norm-tracking records the original vector magnitudes so they can be restored after each overlap estimate, preserving the integrity of the results. A QML simulation assessed numerical stability, achieving 96% accuracy on the MNIST handwritten digit dataset and confirming the algorithm’s capacity to maintain expressive power. This advancement addresses the computational bottleneck of classical matrix multiplication in machine learning applications, reducing the complexity of computing the inner product of two vectors to O(log N). The simulation was conducted using established quantum machine learning frameworks, allowing a rigorous evaluation of the algorithm’s performance and scalability. The study focuses on the algorithmic framework and simulation results, without exploring comparisons to alternative quantum or classical matrix multiplication methods.
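For real-valued states, a Hadamard test yields an ancilla that reads 0 with probability (1 + ⟨ψ|φ⟩)/2, so repeated shots estimate the overlap, and the tracked norms rescale it to the raw inner product. A classical numpy simulation of that measurement statistics (a sketch of the standard Hadamard test, not the paper's exact circuit):

```python
import numpy as np

rng = np.random.default_rng(0)

def hadamard_test_inner_product(a, b, shots=100_000):
    """Estimate <a, b> for real vectors via a simulated Hadamard test.

    The vectors are amplitude-encoded as unit states; the ancilla's
    probability of reading 0 is (1 + <psi|phi>) / 2, and classical
    norm-tracking rescales the overlap back to the raw inner product.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    overlap = np.dot(a / na, b / nb)           # Re<psi|phi>
    p0 = (1.0 + overlap) / 2.0                 # ancilla-0 probability
    zeros = rng.binomial(shots, p0)            # simulated shot outcomes
    est_overlap = 2.0 * zeros / shots - 1.0    # invert p0 formula
    return na * nb * est_overlap               # restore magnitudes

a, b = [1.0, 2.0, 3.0, 4.0], [4.0, 3.0, 2.0, 1.0]
print(hadamard_test_inner_product(a, b))  # ≈ np.dot(a, b) = 20
```

The estimate carries shot noise that shrinks as 1/√shots, which is why numerical stability under finite sampling is worth validating, as the MNIST simulation does.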
Adaptive quantum algorithm accelerates matrix multiplication via dynamic qubit reconfiguration
To accelerate matrix multiplication, a fundamental operation in machine learning, researchers have developed a hybrid quantum-classical algorithm called AQ-Stacker. Matrix multiplication is at the heart of many machine learning algorithms, including deep neural networks, and its computational cost can significantly limit the scalability of these models. The algorithm introduces an “Adaptive Stacking” framework, dynamically reconfiguring its execution pattern to suit available qubit resources, allowing for a tunable time-complexity range theoretically reaching O(N²) on fault-tolerant systems. This dynamic reconfiguration is achieved through a sophisticated control mechanism that monitors the quantum hardware and adjusts the algorithm’s parameters accordingly. A QML simulation validated the algorithm’s numerical stability, achieving 96% accuracy on the MNIST handwritten digit dataset, demonstrating its ability to maintain precision while processing complex data. The simulation involved encoding the MNIST images as quantum states and performing matrix multiplications using AQ-Stacker, followed by a classical readout to determine the classification accuracy.
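The encode-then-readout pipeline can be caricatured in a few lines: encode an image as a unit state, score it against per-class states by inner product, and read out the best-scoring label. This is a hypothetical simplification for illustration; the overlaps are computed exactly here, whereas in the paper's pipeline each one would come from a Hadamard test:

```python
import numpy as np

def classify_by_overlap(image, prototypes):
    """Toy readout loop: amplitude-encode an image and score it against
    per-class prototype states by inner product. Hypothetical stand-in
    for the MNIST simulation's classical readout, not the paper's model.
    """
    psi = image / np.linalg.norm(image)
    scores = {label: float(np.dot(psi, p / np.linalg.norm(p)))
              for label, p in prototypes.items()}
    return max(scores, key=scores.get)

# Two hypothetical 4-pixel "digit" prototypes:
prototypes = {0: np.array([1.0, 1.0, 0.0, 0.0]),
              1: np.array([0.0, 0.0, 1.0, 1.0])}
print(classify_by_overlap(np.array([0.9, 1.1, 0.1, 0.0]), prototypes))  # 0
```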
Optimising the algorithm for near-term quantum devices and developing scalable QRAM technology will be the focus of future work, as QRAM availability currently presents a significant practical challenge. Near-term quantum devices, also known as Noisy Intermediate-Scale Quantum (NISQ) devices, are characterised by limited qubit counts and high error rates. Adapting AQ-Stacker to these devices requires careful consideration of error mitigation techniques and resource allocation strategies. Further research will explore the potential of adaptive quantum MatMul to deliver super-classical efficiency in high-dimensional linear algebra operations. This could unlock new possibilities in areas such as drug discovery, materials science, and financial modelling, where high-dimensional data analysis is crucial. This newly developed AQ-Stacker algorithm dynamically adjusts its processing technique, switching between different configurations based on the quantum computer’s available resources. This flexibility is key to bridging the gap between theoretical quantum speedups and practical hardware limitations, offering an alternative to conventional matrix computation as quantum technology advances. The development of AQ-Stacker represents a significant step towards realising the full potential of quantum computing for machine learning and beyond.
The researchers demonstrated a new quantum-classical algorithm, AQ-Stacker, which reduced the complexity of computing the inner product of vectors to O(log N). This matters because matrix multiplication is a fundamental but computationally intensive process in machine learning, and improving its efficiency could accelerate data processing. Validation using the MNIST dataset showed 96% accuracy, indicating the algorithm maintains precision with complex data. The authors intend to focus on optimising the algorithm for current quantum devices and developing scalable Quantum Random Access Memory.
👉 More information
🗞 AQ-Stacker: An Adaptive Quantum Matrix Multiplication Algorithm with Scaling via Parallel Hadamard Stacking
🧠 arXiv: https://arxiv.org/abs/2604.02530
