Predicting how long quantum computations will take is a critical challenge in building practical quantum computers, and a team led by Lucy Xing, Sanjay Vishwakarma, and David Kremer, all from IBM Quantum at the IBM T. J. Watson Research Center, now demonstrates a machine-learning solution. Their research introduces predictive models that accurately forecast quantum processing unit (QPU) job completion times, trained on a dataset of over 150,000 completed jobs. The work, which also includes contributions from Francisco Martin-Fernandez, Ismael Faro, and Juan Cruz-Benito, significantly improves the ability to manage and schedule quantum resources, paving the way for more efficient quantum computing operations. It also establishes a foundation for integrating artificial intelligence into the core infrastructure of advanced computing systems, promising substantial gains in performance and usability.
The research targets operational efficiency in quantum computing systems. QPU time, defined as the duration a QPU is occupied executing a quantum job, is a crucial metric for assessing overall system performance and efficiency, and accurate forecasts of it feed directly into resource management and scheduling. The researchers prepared the data by categorizing input features into numerical and categorical types for model training. From this job metadata, including details about the backend, primitive ID, shot count, and error correction methods, the model learned patterns that let it forecast processing times.
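As a rough sketch of what that preparation step might look like in practice, categorical columns can be cast to pandas' category dtype, which LightGBM consumes natively without one-hot encoding. The column names below (backend, primitive_id, shots, num_circuits, qpu_seconds) are illustrative assumptions, not the paper's actual schema:

```python
import pandas as pd

# Hypothetical job-metadata table; column names are illustrative, not taken from the paper.
jobs = pd.DataFrame({
    "backend": ["ibm_brisbane", "ibm_kyiv", "ibm_brisbane"],
    "primitive_id": ["sampler", "estimator", "sampler"],
    "shots": [4096, 1024, 8192],
    "num_circuits": [10, 2, 50],
    "qpu_seconds": [3.2, 1.1, 12.7],   # target: measured QPU time per job
})

# Split features into categorical and numerical groups, as the paper describes.
categorical_cols = ["backend", "primitive_id"]
numerical_cols = ["shots", "num_circuits"]

# LightGBM handles pandas 'category' columns directly, so no one-hot encoding is needed.
for col in categorical_cols:
    jobs[col] = jobs[col].astype("category")

X = jobs[categorical_cols + numerical_cols]
y = jobs["qpu_seconds"]
```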
Experiments confirmed the effectiveness of this approach in anticipating job behavior. The team used metadata such as the total number of executions and batches as a proxy for job size, which proved to be a strong predictor of QPU time. Employing a gradient-boosting model, specifically LightGBM, trained on the dataset of over 166,000 quantum jobs, the team accurately forecasts job completion times from metadata including the QPU type, the number of shots requested, and circuit depth. This predictive capability offers a pathway towards AI-driven optimization of quantum computing systems, improving resource management and scheduling within quantum computing frameworks.
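A minimal training sketch along these lines, assuming a feature matrix X and target y of measured QPU seconds like those prepared above (at real dataset scale), could look like the following; the hyperparameters are placeholders, not the authors' settings:

```python
from lightgbm import LGBMRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hold out part of the job history to check how well the model generalizes.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Gradient-boosted decision trees over the mixed categorical/numerical job metadata.
model = LGBMRegressor(n_estimators=500, learning_rate=0.05, num_leaves=63)
model.fit(X_train, y_train)

# Evaluate the prediction error in QPU seconds on unseen jobs.
pred = model.predict(X_test)
print(f"MAE: {mean_absolute_error(y_test, pred):.2f} QPU seconds")
```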
Feature importance analysis revealed that the QPU type, the total number of shots, and circuit depth are key determinants of processing time. While the study demonstrates strong predictive capability, the authors acknowledge that the model’s performance is tied to the specific dataset used and may require refinement with additional data. Future work could integrate this predictive capability into existing quantum computing systems to optimize job scheduling and enhance overall performance. The team suggests that continued data collection and model retraining will be crucial for maintaining accuracy and adapting to evolving quantum hardware.
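For readers who want to run a similar feature-importance analysis on their own job history, LightGBM exposes gain-based importances on the trained booster. The snippet below is a generic LightGBM pattern, not the authors' exact analysis, and assumes the model object from the previous sketch:

```python
import pandas as pd

# Gain-based importance: how much each feature contributed to reducing the training loss.
importance = pd.Series(
    model.booster_.feature_importance(importance_type="gain"),
    index=model.booster_.feature_name(),
).sort_values(ascending=False)
print(importance)
```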
👉 More information
🗞 Quantum Processing Unit (QPU) processing time Prediction with Machine Learning
🧠 ArXiv: https://arxiv.org/abs/2510.20630
