The quest to build powerful quantum computers demands software capable of harnessing their potential, and researchers are now delivering tools for exploring increasingly complex quantum circuits. Oliver Knitter, Jonathan Mei, Masako Yamada, and Martin Roetteler, all from IonQ, have developed TorchQuantumDistributed, a new software library built on the widely used PyTorch platform. The library lets scientists study the behavior of quantum circuits with large numbers of qubits, independent of the classical accelerator hardware on which the simulation runs, and, crucially, supports circuits whose parameters can be adjusted and learned, paving the way for more adaptable and powerful quantum algorithms. The development is a significant step towards realizing the full potential of near-term and fault-tolerant quantum computers, providing a flexible and scalable software foundation for exploring their capabilities.
Differentiable programming, accelerator independence, and scalable simulation are all vital for advancing quantum machine learning. While many popular frameworks support these features individually, combining them in a single tool is crucial for developing and implementing complex quantum algorithms and for exploring the potential of quantum computing.
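To make the idea of differentiable, accelerator-independent statevector simulation concrete, here is a minimal sketch in plain PyTorch (not the tqd API): it applies a parameterized rotation gate to a small statevector and uses autograd to differentiate an expectation value with respect to the gate angle. The two-qubit circuit, the RY gate, and the Z observable are illustrative choices, and the same tensor code runs unchanged on CPU or GPU.

```python
# Minimal sketch of differentiable statevector simulation in plain PyTorch.
# Illustrative only; this is not the tqd API.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

def ry(theta: torch.Tensor) -> torch.Tensor:
    """Single-qubit RY(theta) as a differentiable 2x2 complex matrix."""
    c, s = torch.cos(theta / 2), torch.sin(theta / 2)
    real = torch.stack([torch.stack([c, -s]), torch.stack([s, c])])
    return torch.complex(real, torch.zeros_like(real))

theta = torch.tensor(0.3, requires_grad=True, device=device)

# Two-qubit |00> statevector; apply RY(theta) to qubit 0 and identity to qubit 1.
state = torch.zeros(4, dtype=torch.complex64, device=device)
state[0] = 1.0
gate = torch.kron(ry(theta), torch.eye(2, dtype=torch.complex64, device=device))
state = gate @ state

# Differentiable expectation value of Z on qubit 0: <psi| Z (x) I |psi>.
z = torch.diag(torch.tensor([1.0, -1.0], dtype=torch.complex64, device=device))
observable = torch.kron(z, torch.eye(2, dtype=torch.complex64, device=device))
expval = (state.conj() @ (observable @ state)).real

expval.backward()
print(expval.item(), theta.grad.item())  # approx cos(0.3) and -sin(0.3)
```

This gradient is what allows circuit parameters to be trained with standard PyTorch optimizers; a distributed simulator like tqd aims to preserve the same differentiability while sharding much larger statevectors across many devices.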
Scalable Quantum Simulation with TorchQuantumDistributed
Researchers have developed TorchQuantumDistributed, or tqd, a new PyTorch-based library for scalable, differentiable quantum statevector simulation. The tool operates independently of specific hardware accelerators and enables the study of complex, parameterized quantum circuits at large qubit counts, a capability essential for progress in quantum machine learning research. Demonstrations on circuits inspired by common quantum machine learning approaches reveal promising scaling behavior, suggesting the library’s potential for integration into future research pipelines. The team conducted both strong and weak scaling tests, evaluating performance as computational resources increased.
The results show favorable power-law trends between the number of processing units and key simulation benchmarks, including walltime, total communication time, and memory usage, indicating that additional computational power effectively reduces simulation time without substantial communication overhead between processing units. While acknowledging the need for further optimization, the researchers highlight the library’s extensibility and its potential to accelerate quantum algorithm development. Future work will focus on detailed profiling of resource usage and on techniques that further reduce communication costs, improving the simulator’s efficiency and scalability.
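A common way to quantify such trends is to fit a straight line to the benchmark data in log-log space: the fitted slope is the power-law exponent, and a walltime exponent near -1 corresponds to near-ideal strong scaling. The sketch below uses made-up numbers purely to illustrate the fit; they are not measurements from the paper.

```python
# Illustrative power-law fit for a strong-scaling benchmark (hypothetical data).
import numpy as np

devices = np.array([1, 4, 16, 64, 256, 1024], dtype=float)   # processing units
walltime = np.array([512.0, 140.0, 38.0, 11.0, 3.2, 1.1])    # seconds (made up)

# Fit log(t) ~ slope * log(p) + intercept; the slope is the scaling exponent.
slope, intercept = np.polyfit(np.log(devices), np.log(walltime), deg=1)
print(f"fitted exponent: {slope:.2f}")  # close to -1 means walltime ~ 1/devices
```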
Strong and Weak Scaling Experiments
The experiments assessed the scaling behavior of tqd on circuit simulations inspired by common quantum machine learning approaches. The researchers conducted both “strong” and “weak” scaling tests, varying the number of processing units from 1 to 1024.
The strong scaling test used a fixed 24-qubit system, while the weak scaling test grew the problem size from 18 to 28 qubits as processing units were added. Presented on a log-log scale, the results show favorable power-law trends between the number of processing units and the key benchmarks of walltime, communication time, and memory usage. In particular, the strong scaling test showed that communication overhead did not substantially erode the expected improvement in walltime as more processing units were added. Together, these measurements confirm that tqd handles increasingly complex quantum simulations efficiently as computational resources grow, supporting its integration into future quantum machine learning research pipelines and marking a significant step towards scalable quantum simulation and faster development of quantum machine learning algorithms.
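The weak scaling range is consistent with simple statevector arithmetic: each additional qubit doubles the number of amplitudes, and 1024 = 2^10 processing units absorb exactly ten doublings, so growing from 18 to 28 qubits can hold per-device memory roughly constant. The sketch below works through that arithmetic; the bytes-per-amplitude figure and the specific qubit-to-device pairings are illustrative assumptions, not details taken from the paper.

```python
# Back-of-the-envelope memory arithmetic for a distributed statevector.
# Assumes complex64 amplitudes (8 bytes each); complex128 would double the figures.
BYTES_PER_AMPLITUDE = 8

def per_device_bytes(num_qubits: int, num_devices: int) -> float:
    """Memory each device needs to hold its slice of the full statevector."""
    return (2 ** num_qubits) * BYTES_PER_AMPLITUDE / num_devices

# Strong scaling: fixed 24-qubit problem, more devices -> less memory per device.
for p in (1, 64, 1024):
    print(f"strong, 24 qubits on {p:5d} devices: {per_device_bytes(24, p) / 2**20:8.2f} MiB")

# Weak scaling: problem grows with device count -> per-device memory stays flat
# (hypothetical pairings chosen so each doubling of devices adds one qubit).
for n, p in ((18, 1), (23, 32), (28, 1024)):
    print(f"weak, {n} qubits on {p:5d} devices: {per_device_bytes(n, p) / 2**20:8.2f} MiB")
```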
👉 More information
🗞 TorchQuantumDistributed
🧠 ArXiv: https://arxiv.org/abs/2511.19291
