Quantum software development presents significant challenges, including a complex technological landscape and a shortage of skilled programmers, and researchers are now exploring how artificial intelligence can help bridge this gap. Nazanin Siavash and Armin Moin, both from the University of Colorado Colorado Springs, investigate a new approach to automatically generating quantum code using large language models, enhanced with a technique called retrieval-augmented generation. Their work demonstrates that carefully designed prompts can dramatically improve the accuracy and consistency of generated code, achieving up to four times better results than standard methods. This research paves the way for model-driven quantum software development, potentially reducing costs and accelerating innovation in this rapidly evolving field by automating code creation from high-level system models.
Large Language Models (LLMs) offer potential enhancements when integrated with Retrieval-Augmented Generation (RAG) pipelines. This research focuses specifically on quantum and hybrid quantum-classical software systems, where model-driven approaches can help reduce costs and mitigate risks associated with heterogeneous platforms and a shortage of skilled developers. The team validates a proposed method for generating code from software system models, producing Python code that utilizes the well-established Qiskit library for execution on quantum computers. Experimental results demonstrate the feasibility of this approach.
LLM Code Generation for Quantum Computing
Researchers pioneered a novel approach to quantum software development by leveraging large language models (LLMs) enhanced with retrieval-augmented generation (RAG) pipelines, addressing the challenges of limited developer skills and complex platforms. The core of this methodology is automated Python code generation, specifically targeting IBM’s Qiskit quantum software library and taking textual model instances as the LLM’s input. This process moves beyond traditional rule-based methods by harnessing the LLM’s ability to learn patterns from extensive code corpora, eliminating the need for manually crafted transformation rules and enabling a more flexible system for quantum code creation. To mitigate the inherent limitations of LLMs, the team deployed a RAG pipeline that couples a retrieval system with the generative model, allowing it to dynamically access external knowledge during code generation for greater reliability and accuracy.
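As a rough illustration of the retrieval half of such a pipeline, relevant code examples can be ranked against the textual model instance by lexical similarity before being handed to the LLM as context. The corpus, snippets, and token-overlap scoring below are hypothetical sketches, not the authors' implementation:

```python
import re

# Minimal sketch of RAG retrieval: rank a small (hypothetical)
# corpus of Qiskit snippets against a textual model instance by
# Jaccard token overlap, and return the top-k as LLM context.

def tokens(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def jaccard(a, b):
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def retrieve(model_instance, corpus, k=1):
    """Return the k corpus snippets most similar to the model instance."""
    ranked = sorted(corpus, key=lambda s: jaccard(model_instance, s), reverse=True)
    return ranked[:k]

# Hypothetical examples harvested from public code repositories.
corpus = [
    "from qiskit import QuantumCircuit\nqc = QuantumCircuit(2)\nqc.h(0)\nqc.cx(0, 1)",
    "from qiskit import QuantumCircuit\nqc = QuantumCircuit(1)\nqc.x(0)",
]

model_instance = "two qubit circuit with hadamard h and cx entangling gate"
context = retrieve(model_instance, corpus, k=1)
print(context[0])
```

A production retriever would likely use embedding similarity rather than token overlap, but the shape of the step is the same: score, rank, and prepend the best matches to the generation prompt.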
The current configuration of this pipeline focuses on providing contextual information to the LLM, enhancing its ability to produce correct and consistent quantum code based on the input model instance. Further refinement of the system involves careful prompt engineering, a process of crafting and refining the input provided to the LLM to clearly express the desired outcome. Researchers examined both general and specific prompts, incorporating detailed instructions to improve performance. This iterative process of prompt design allows for precise control over the LLM’s behavior, maximizing its ability to generate accurate and efficient quantum code. Experiments demonstrate that well-engineered prompts can improve code quality and consistency, highlighting the significant impact of this technique.
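To make the contrast between general and specific prompts concrete, the sketch below builds both kinds of LLM input from a textual model instance. The prompt wording, the model-instance syntax, and the helper names are illustrative assumptions, not the authors' actual prompts:

```python
# Illustrative general vs. specific prompt construction for
# model-to-code generation (wording is hypothetical).

def general_prompt(model_instance):
    # Minimal instruction: just ask for code from the model.
    return f"Generate Python Qiskit code for this model:\n{model_instance}"

def specific_prompt(model_instance, examples=()):
    # Detailed instruction: spell out the library, the output format,
    # and constraints, optionally prepending retrieved code examples.
    parts = [
        "You are a quantum software engineer.",
        "Generate Python code using the Qiskit library only.",
        "Return a single runnable script that builds the circuit",
        "described by the following textual model instance.",
    ]
    for ex in examples:
        parts.append("Reference example:\n" + ex)
    parts.append("Model instance:\n" + model_instance)
    return "\n".join(parts)

model = "qubits: 2; gates: H(q0), CX(q0, q1); measure: all"
prompt = specific_prompt(model, examples=["qc.h(0)\nqc.cx(0, 1)"])
print(prompt)
```

The specific variant constrains the output space (library, format, scope), which is the kind of detailed instruction the study found to improve code quality and consistency.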
LLMs Automate Quantum Code Generation from Models
Researchers are pioneering a new approach to software development for quantum computers by combining the power of Large Language Models (LLMs) with model-driven engineering techniques. This work addresses the growing complexity of quantum software and a shortage of skilled developers capable of navigating heterogeneous platforms. The team proposes leveraging LLMs, such as the GPT family of models, to automatically generate quantum code directly from software system models, effectively bridging the gap between high-level design and executable programs. This innovative method aims to reduce development costs and mitigate risks associated with the rapidly evolving quantum computing landscape.
The core of this approach lies in a Retrieval-Augmented Generation (RAG) pipeline, which enhances the LLM’s ability to produce accurate and consistent quantum code. Rather than relying solely on the LLM’s pre-existing knowledge, the RAG pipeline incorporates relevant code examples from public repositories, grounding the generated code in established practice. Experiments demonstrate that well-engineered prompts can improve the accuracy of generated quantum code by a factor of four, a substantial step forward in the automation of quantum software development.
This research builds upon existing model-driven software engineering techniques and extends them to the quantum realm. By utilizing software system models as the source of information for the LLM, the team moves beyond simple code completion and towards a more comprehensive automated code generation process. The team validated their approach using model instances developed by others, demonstrating compatibility with existing tools and workflows. Future work will explore deploying LLMs for more complex code transformations, such as transpilation, further expanding the potential of this innovative approach and paving the way for more accessible and efficient quantum software development.
Automated Quantum Code Generation with Language Models
This research introduces a new approach to generating quantum code by leveraging large language models and retrieval-augmented generation techniques. The work focuses on automating code creation for quantum and hybrid quantum-classical systems, aiming to reduce the need for specialized expertise and the costs associated with developing software for these complex platforms. Researchers demonstrated that well-crafted prompts can significantly improve the accuracy of generated code, achieving up to a fourfold increase in code quality metrics. The study validated this approach by generating Python code, utilizing the Qiskit library, from models of software systems.
While the incorporation of external code repositories as contextual knowledge did not demonstrably improve performance in this instance, the results indicate the potential of this technique for automating quantum code development. The authors acknowledge that current performance metrics, while improved through prompt engineering, do not yet match those achieved by existing methods when generating code for entire models, as the focus was specifically on the quantum circuit portion. Future work will concentrate on refining the retrieval-augmented generation pipeline by incorporating more relevant datasets and improving query formulation. The team also plans to evaluate alternative language models and broaden the scope of evaluation metrics. The source code and research data are publicly available to facilitate further research and development in this emerging field.
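One cheap sanity check that could sit alongside broader evaluation metrics is verifying that generated candidates are at least syntactically valid Python before any deeper correctness analysis. This is a sketch of an assumed workflow, not the paper's actual metric; the candidate snippet is held as text, so the check needs no quantum libraries installed:

```python
import ast

def is_valid_python(source):
    """First-pass filter: does the generated code even parse?"""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

# A candidate in the style the pipeline targets: Qiskit code
# generated from a model instance (hypothetical output).
candidate = (
    "from qiskit import QuantumCircuit\n"
    "qc = QuantumCircuit(2, 2)\n"
    "qc.h(0)\n"
    "qc.cx(0, 1)\n"
    "qc.measure([0, 1], [0, 1])\n"
)
broken = "qc = QuantumCircuit(2,\n"  # e.g. a truncated LLM response

print(is_valid_python(candidate), is_valid_python(broken))
```

Passing this filter says nothing about semantic correctness, which is why richer metrics, and the planned broadening of them, still matter.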
👉 More information
🗞 Model-Driven Quantum Code Generation Using Large Language Models and Retrieval-Augmented Generation
🧠 ArXiv: https://arxiv.org/abs/2508.21097
