Alex Krasnok and colleagues at Florida International University present a practical framework for procuring quantum capability, moving beyond simply selecting a specific hardware platform. The optimal choice, whether cloud access, dedicated hardware, or a full strategic installation, depends heavily on an institution’s purpose and resources. By distinguishing five capability layers and comparing superconducting circuits, trapped ions, neutral atoms, quantum annealing, and photonics, the study advocates for a phased approach, prioritising repeatable near-term value and strategic flexibility over immediate investment in large on-premises systems. It establishes that a key consideration is the alignment of quantum resources with institutional goals and available funding. The framework enables institutions to build quantum capacity incrementally, using a set of tools to assess their needs and evaluate different procurement options.
Diverse institutional needs challenge standardised quantum procurement
A university may require a teaching platform supporting numerous students at a low cost. A research centre could desire faster and more predictable access than a shared public queue offers. A national laboratory may need local control, secure data handling, and close coupling to high-performance computing (HPC) systems. Industrial groups often focus solely on whether a benchmark workload benefits from quantum resources. These represent distinct purchases, even when buyers use the same phrase, “buy a quantum computer”. The market complicates decisions because quantum hardware is heterogeneous, performance metrics are not directly comparable across vendors, and public roadmaps often outpace institutional capital cycles.
A qubit count does not reveal queue quality, software maturity, facilities burden, service model, or upgrade path. A promising research result does not guarantee a supported commercial product. A bold roadmap does not prove delivered capability. Consequently, procurement mistakes in quantum computing are often organisational before they are technical. Public discussion frequently begins with qubit counts, vendor rankings, and predictions about long-term platform dominance.
This framing weakens procurement guidance by neglecting the operating model. A campus needing broad student access should not begin with a flagship local installation. A laboratory requiring secure, repeatable, low-latency hybrid workflows should not rely on a free cloud tier. The initial question is not which device appears strongest on paper. Instead, the first question is what layer of quantum capability the institution can use, support, and justify.
Therefore, this article frames procurement as quantum capability acquisition, defining five capability layers hidden within the phrase “buy a quantum computer”, ranging from preparation and cloud access to strategic local ownership. It also separates four evidence classes that buyers often conflate: peer-reviewed science, customer-visible commercial offerings, public price anchors, and public roadmaps. Furthermore, it compares the major commercial platform families in terms relevant to a buyer, including access, operational burden, and upgrade risk, and interprets public roadmaps as signals of refresh pressure rather than proof of present-day performance.
The goal is not to rank vendors or predict a technological winner. It is to help institutions avoid the wrong initial purchase. A disciplined procurement process begins by defining what the institution is trying to acquire. In quantum computing, the answer is rarely “the most qubits”. It is usually one of a limited number of operational goals: broad access for students, repeated hands-on use for hardware research, secure and low-latency integration with classical infrastructure, or evidence that a target workload can benefit from quantum resources.
These goals are organised into five capability layers, each corresponding to a different operating model and institutional burden. The distinction matters because each layer solves a different problem. Preparation or deferral is a valid procurement choice, not a failure to choose; it buys time to define benchmark tasks, train staff, and establish classical baselines. Multivendor cloud access is an exploration layer, offering breadth, low initial cost, and exposure to multiple software stacks.
Reserved or premium access is a throughput layer, providing predictable queue quality, user support, and a stable path for courses, grant milestones, and early production-style workflows. A modest local instrument is a methods-and-culture layer, enabling repeated hands-on use, control experience, calibration practice, and local organisational learning. A strategic local installation is infrastructure, providing sovereignty, security, and continuous availability, but also committing the institution to facilities work, staffing, service contracts, and lifecycle planning.
The mission typically aligns with institutional type. A teaching-focused campus prioritises users served per dollar, course reliability, and simple tooling. A research-intensive university may value methods development, graduate training, and the ability to experiment repeatedly on local hardware. A national laboratory or sovereign HPC centre may prioritise data handling, co-scheduling with classical resources, and operational control. An industrial team may care almost exclusively about whether a benchmark workflow improves relative to the classical baseline.
These missions overlap, but should not be forced into the same purchase logic. The framework formalises this as a stage-gated process. Stage 1 defines the mission and primary users. Stage 2 separates evidence types, preventing papers, product pages, prices, and roadmaps from being treated as interchangeable. Stage 3 scores capability layers without assuming more knowledge than the buyer possesses.
Stage 4 rejects blocked options, including systems the site cannot support or platforms that do not fit the workload. Stage 5 tests the low-budget path. Stage 6 makes the final decision only after operational assumptions are explicit. This structure is important because many quantum procurements fail before hardware arrival: the workload is vague, the user base is unprepared, or the support model is missing. The evidence split is the most important control.
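The later stages can be read as a simple filter-then-decide pipeline. The sketch below is illustrative only: the option fields (site_supportable, workload_fit, low_budget_ok, annual_cost) are hypothetical labels invented for this example, not criteria or code from the study.

```python
# Illustrative sketch of the stage-gated procurement process described above.
# All field names and numbers are hypothetical placeholders.

def stage_gate(options):
    """Stages 4-6 as filters: reject options the site cannot support or
    that do not fit the workload, then test whether a low-budget path
    already suffices before committing to anything larger."""
    viable = [o for o in options
              if o["site_supportable"] and o["workload_fit"]]
    if not viable:
        return None  # a valid outcome: prepare or defer instead of buying
    # Stage 5: prefer the low-budget path when one survives the gates
    cheap = [o for o in viable if o["low_budget_ok"]]
    pool = cheap if cheap else viable
    return min(pool, key=lambda o: o["annual_cost"])

options = [
    {"name": "cloud_pilot", "site_supportable": True, "workload_fit": True,
     "low_budget_ok": True, "annual_cost": 50_000},
    {"name": "local_flagship", "site_supportable": False, "workload_fit": True,
     "low_budget_ok": False, "annual_cost": 5_000_000},
]
choice = stage_gate(options)
```

Here the flagship installation is rejected at the gate (the site cannot support it), so the pipeline returns the cloud pilot; an empty result would signal the "prepare or defer" outcome the article treats as legitimate.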
Scientific papers demonstrate what a platform has achieved under research conditions. Commercial pages show current customer access. Public price pages and procurement disclosures indicate rough spending scale. Roadmaps show a company’s stated future intentions. These are useful forms of evidence, but support different claims.
Progress in error correction, as demonstrated in a paper, is not evidence of a supported product. A product page promising access is not evidence of long-term technical leadership. A roadmap is not evidence of current delivery. This distinction is graphically represented, separating solid blocks (current offerings) from dashed blocks (future targets). A simple weighted score is sufficient for initial comparison: S_j = w1·R_science,j + w2·R_operations,j + w3·R_access,j + w4·R_fit,j, where R_science measures technical fit to the workload, R_operations measures facilities and staffing burden, R_access measures user access reliability, and R_fit measures institutional fit across software, data handling, training, governance, and upgrade path.
Ordinal scores are often sufficient, as increased numerical precision rarely improves the decision due to contractual and operational uncertainties. The buyer, not the vendor, chooses the weights. A teaching campus and a sovereign HPC centre should not use the same weights because they have different objectives. Uncertainty should remain visible. If pricing is quote-only, the model should widen the cost range instead of assuming a known price.
If the service model is private, the framework should record operational uncertainty rather than assume weak support. If a feature appears only on a roadmap, the buyer should treat it as unavailable unless contractually guaranteed. Sensitivity analysis is also useful. If a recommendation changes with slight weight adjustments, the institution lacks a strong decision. A sound process may conclude that the best purchase is no purchase.
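The weighted score and the sensitivity check can be sketched together in a few lines of code. This is a minimal illustration, not an implementation from the study: the ratings, weights, and option names below are invented placeholders, and the perturbation check is one crude way to probe robustness.

```python
# Hedged sketch of the weighted capability score
#   S_j = w1*R_science + w2*R_operations + w3*R_access + w4*R_fit
# with ordinal ratings (1 = poor fit, 5 = strong fit) and buyer-chosen
# weights. All numbers are illustrative placeholders.

def capability_score(ratings, weights):
    """Weighted sum over the four rating dimensions."""
    return sum(weights[k] * ratings[k] for k in weights)

def is_robust(options, weights, delta=0.05):
    """Crude sensitivity analysis: does the top-ranked option survive
    small perturbations of each weight? If not, the institution does
    not yet have a strong decision."""
    def best(w):
        return max(options, key=lambda name: capability_score(options[name], w))
    baseline = best(weights)
    for k in weights:
        for sign in (+1, -1):
            perturbed = dict(weights)
            perturbed[k] = max(0.0, perturbed[k] + sign * delta)
            if best(perturbed) != baseline:
                return False
    return True

# Hypothetical ordinal ratings for two capability layers
options = {
    "cloud_access":  {"science": 3, "operations": 5, "access": 4, "fit": 4},
    "local_install": {"science": 4, "operations": 2, "access": 5, "fit": 2},
}
# A teaching campus might weight operations and access heavily;
# a sovereign HPC centre would choose very different weights.
weights = {"science": 0.2, "operations": 0.3, "access": 0.3, "fit": 0.2}

scores = {name: capability_score(r, weights) for name, r in options.items()}
robust = is_robust(options, weights)
```

With these placeholder numbers the cloud option scores higher and the ranking survives the perturbations, so the (hypothetical) buyer has a stable recommendation; a flip under small weight changes would be the signal the article describes, that the decision is not yet well founded.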
If institutions cannot define benchmark problems, success metrics, ownership responsibilities, and the rationale for local control over access, a cloud pilot or preparatory period may yield the greatest benefit. In a competitive market, considered postponement can be a sound strategy. Platform selection influences more than just raw performance; it also determines the control stack, software environment, staffing needs, site requirements, and probable refresh cycle.
From a buyer’s perspective, platform choice is therefore a decision about operations as much as about qubit physics. The study summarises this by platform family, separating systems sold now from roadmap-only claims, and then expands it into a vendor-by-vendor procurement reading. Together, these comparisons show why a platform label is insufficient: the buying experience within one platform family can vary sharply between companies. For procurement, the most useful scientific question is not only what vendors promise, but what the platform has already achieved in peer-reviewed work. That literature is the clearest public evidence of what a well-prepared institution may realistically achieve with the hardware family during the purchase lifetime. Public roadmaps are signals of refresh pressure rather than proof of present-day performance.
A staged approach to quantum computing capability for research institutions
The framework reframes access to quantum computing, shifting attention from hardware alone to a five-layer capability model. It allows institutions to begin with the capability layer that delivers repeatable near-term value. The five layers range from initial preparation and cloud access, through reserved capacity, to modest local instruments and, ultimately, strategic local ownership, each addressing distinct institutional needs and operational burdens.
By separating scientific results from commercial offerings and roadmaps, the framework provides a clearer assessment of current versus future capabilities, allowing for more informed investment choices. Institutions seeking teaching resources will have very different requirements from national laboratories needing secure data handling and integration with high-performance computing systems. The framework also distinguishes between peer-reviewed scientific results, commercial offerings, pricing, and future roadmaps, offering a clearer picture of current versus projected capabilities. A campus prioritising broad student access should not immediately invest in a large, on-premises installation.
Prioritising application definition and staged access for effective quantum technology adoption
Institutions are beginning to map a sensible path through the hype surrounding quantum computing, and the study advocates a pragmatic approach to procurement rather than a headlong rush towards ever-larger machines. The framework rightly points out that institutions must first define what they want to do with quantum technology, be it student teaching or secure research, before considering which hardware might fit. This focus on practical capability is valuable, given the considerable debate over whether genuinely useful quantum computers are imminent.
Institutions needn’t commit to building a full-scale quantum facility; instead, they can strategically acquire access via cloud services or smaller, training-focused machines such as nuclear magnetic resonance (NMR) systems. This staged approach builds expertise and allows organisations to explore potential applications, from materials science to financial modelling, without massive upfront investment or lengthy delays. Institutions should prioritise acquiring the smallest demonstrable layer of quantum capability, supporting internal expertise and strategic agility rather than immediately investing in large installations. This redefines procurement as a question of operational goals (teaching, research, or specific applications) rather than simply selecting a hardware platform. By distinguishing between peer-reviewed science, commercial availability, pricing, and future roadmaps, organisations can better assess current versus projected capabilities.
The research demonstrated that institutions should prioritise acquiring a defined quantum capability, rather than focusing solely on hardware selection. This approach means beginning with the smallest access layer that delivers repeatable value and builds internal expertise, such as through cloud access or modest local instruments. The study highlights the importance of aligning procurement with specific institutional needs, like teaching or research, and separating scientific results from commercial offerings. By considering factors such as pricing and future roadmaps, organisations can make informed decisions and maintain strategic flexibility.
👉 More information
🗞 What quantum computer to buy?
🧠 ArXiv: https://arxiv.org/abs/2604.04761
