More than a century separates Max Planck’s reluctant introduction of the quantum from Google’s announcement that an error-corrected logical qubit had finally crossed the threshold where larger codes mean fewer mistakes. The space between those two events contains the full story of quantum computing, and it is a stranger story than most accounts suggest. It begins with a desperate mathematical fix to a problem about radiating ovens. It detours through paradoxes, thought experiments, and bitter philosophical disputes. And it arrives, somewhere around 1980, at the deceptively simple observation that if the universe is quantum mechanical, then perhaps the computers we use to model the universe should be quantum mechanical too.
From Quantum Theory to Quantum Information
The story usually begins with Max Planck. In 1900 he proposed, almost apologetically, that energy could only be radiated in discrete chunks called quanta. The trick worked, and over the next quarter-century quantum mechanics took shape through the work of Einstein, Bohr, Heisenberg, Schrödinger and a generation of others. By 1930 the formalism was essentially complete, even if its philosophical interpretation remained a battleground.
The most consequential of those battles for our purposes came in 1935. Einstein, with Boris Podolsky and Nathan Rosen, published the EPR paper, arguing that quantum mechanics must be incomplete because it predicted correlations between distant particles that seemed to imply faster-than-light influence. Einstein would later deride the effect, in a letter to Max Born, as “spooky action at a distance”, and he assumed there had to be hidden variables explaining the correlations.
For nearly thirty years the EPR argument sat unresolved as a matter of philosophy rather than physics. Then in 1964 John Stewart Bell, a Northern Irish physicist working at CERN, derived an inequality that any local hidden-variable theory must obey. Quantum mechanics predicted violations of his inequality, turning a metaphysical dispute into something experiments could decide. Tests by John Clauser, Alain Aspect and Anton Zeilinger, beginning in the 1970s and refined over the following decades, confirmed those violations, and the three would share the 2022 Nobel Prize in Physics for that work. Reality was, in some operational sense, non-local. Entanglement was real, and it would later turn out to be the resource on which quantum computing depends.
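For readers who want the statement itself, the form most experiments test is the CHSH version of Bell's inequality (Clauser, Horne, Shimony and Holt, 1969). For correlation measurements $E$ taken at detector settings $a, a'$ and $b, b'$, any local hidden-variable theory obeys

$$ S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2, $$

while quantum mechanics on a maximally entangled pair of qubits reaches $|S| = 2\sqrt{2} \approx 2.83$ at suitably chosen measurement angles. The experiments cited above measured values comfortably above 2.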
The first practical use of these correlations arrived in parallel with the rise of the field itself. In 1984, Charles Bennett of IBM and Gilles Brassard of the Université de Montréal proposed BB84, a quantum key distribution protocol whose security rested on the no-cloning theorem and the disturbance caused by quantum measurement. Quantum cryptography arrived a year before quantum computing was formally described, and the two fields have been intertwined ever since.
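To make the mechanics concrete, here is a minimal classical sketch of BB84's sifting step: qubits are idealised, there is no eavesdropper, and the error-reconciliation and privacy-amplification stages of the real protocol are omitted. The function name and parameters are illustrative, not from any particular library.

```python
import secrets

def bb84_sift(n: int = 32):
    """Toy sketch of BB84's sifting step: idealised qubits, no eavesdropper,
    no error reconciliation or privacy amplification."""
    alice_bits  = [secrets.randbelow(2) for _ in range(n)]
    alice_bases = [secrets.randbelow(2) for _ in range(n)]   # 0 = rectilinear, 1 = diagonal
    bob_bases   = [secrets.randbelow(2) for _ in range(n)]

    bob_results = [
        bit if a == b else secrets.randbelow(2)   # wrong basis -> outcome is random
        for bit, a, b in zip(alice_bits, alice_bases, bob_bases)
    ]

    # Sifting: both sides publicly compare bases and keep only the matching positions.
    keep = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
    return [alice_bits[i] for i in keep], [bob_results[i] for i in keep]

alice_key, bob_key = bb84_sift()
print(alice_key == bob_key, len(alice_key))   # True, and roughly n/2 bits survive sifting
```

An eavesdropper who measures in a randomly chosen basis disturbs the states she passes on, which is what the no-cloning theorem and measurement disturbance guarantee; Alice and Bob detect her by sacrificing a sample of the sifted key and checking its error rate.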
A Computer Made of Atoms
By the late 1970s a small group of physicists had begun asking an unfashionable question. If nature is fundamentally quantum mechanical, why are the computers we use to model nature classical? The first serious answer came from Paul Benioff, an American physicist at Argonne National Laboratory, who in 1980 published a paper showing that a Turing machine could in principle be implemented as a quantum mechanical system. Benioff’s construction did not yet promise any computational advantage. It simply demonstrated that quantum mechanics did not, by itself, prevent computation.
The prospect of a computational advantage came a year later, at a conference held jointly by MIT and IBM at the Endicott House outside Boston. Richard Feynman, already a Nobel laureate and arguably the most famous physicist alive, delivered a now-canonical lecture called “Simulating Physics with Computers.” His central observation was disarmingly simple. Classical computers cannot efficiently simulate quantum systems, because the state space of even a modest quantum system grows exponentially with the number of particles. A computer that was itself quantum mechanical, Feynman suggested, might not suffer from this limitation. “Nature isn’t classical, dammit,” he told the audience, “and if you want to make a simulation of nature, you’d better make it quantum mechanical.”
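Feynman's point can be made with back-of-the-envelope arithmetic. A general state of n qubits needs 2^n complex amplitudes, so a brute-force classical simulation needs roughly 16 × 2^n bytes of memory (assuming one complex128 value per amplitude):

```python
# Back-of-the-envelope cost of brute-force classical simulation:
# an n-qubit state needs 2**n complex amplitudes, ~16 bytes each.
for n in (10, 20, 30, 40, 50):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2 ** 30
    print(f"{n:2d} qubits -> {amplitudes:>16,} amplitudes ~ {gib:,.1f} GiB")
```

At fifty qubits the state no longer fits in any machine on Earth, and by three hundred qubits the number of amplitudes exceeds the number of atoms in the observable universe.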

In the same period the Russian mathematician Yuri Manin made a similar argument in his 1980 book Computable and Uncomputable. Manin’s contribution was largely overlooked in the West for many years, but he deserves to be remembered alongside Benioff and Feynman as one of the founders of the field.
Deutsch and the Universal Quantum Computer
Feynman’s lecture identified an opportunity. It did not formalise it. That task fell to David Deutsch, an Oxford physicist with an unusually wide-ranging philosophical bent, who in 1985 published a paper in the Proceedings of the Royal Society titled “Quantum Theory, the Church-Turing Principle and the Universal Quantum Computer.”
Deutsch’s paper did several things at once. It described a model of computation, the quantum Turing machine, capable in principle of simulating any physical system. It argued that this model represented a more accurate version of the Church-Turing thesis than the classical original, because it took the actual laws of physics into account. And in passing it provided the first concrete example of a computational task in which a quantum machine could outperform a classical one. The example was almost trivially simple, but it sufficed to demonstrate that the speedup was real rather than an artefact of careless analysis.
Deutsch had an additional motivation that he was not shy about discussing. He was, and remains, a committed advocate of Hugh Everett’s many-worlds interpretation of quantum mechanics. For Deutsch, the obvious explanation for quantum parallelism was that quantum computers were performing their calculations across the branches of the multiverse. When Hartmut Neven, the founder of Google Quantum AI, made a similar remark in announcing the Willow chip in late 2024, he was channelling a tradition that runs straight back to Deutsch’s 1985 paper.
The Algorithms That Made the Field
Deutsch’s example of a quantum advantage was theoretically interesting but practically negligible. The field needed an algorithm that solved a problem people actually cared about. It got two of them in the mid-1990s, in remarkably quick succession.
In 1992 Deutsch and Richard Jozsa generalised Deutsch’s earlier construction into the Deutsch-Jozsa algorithm, a procedure that decides whether a black-box function is constant or balanced with a single quantum query, where any deterministic classical algorithm needs exponentially many queries in the worst case. The problem solved was still artificial. But the structure of the proof made clear that quantum advantage was not a curiosity confined to a single example.
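The procedure is compact enough to sketch in a few lines of Qiskit (the framework discussed later in this article). The two oracles below are illustrative choices, and a statevector simulation stands in for hardware; this is a sketch of the idea, not a production implementation.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

def deutsch_jozsa(oracle: QuantumCircuit, n: int) -> str:
    """Decide whether an n-bit oracle (qubit n is the ancilla) is constant or balanced."""
    qc = QuantumCircuit(n + 1)
    qc.x(n)                          # put the ancilla in |1>
    qc.h(range(n + 1))               # uniform superposition; ancilla becomes |->
    qc.compose(oracle, inplace=True) # a single call to the oracle
    qc.h(range(n))                   # interfere the input register
    probs = Statevector(qc).probabilities(list(range(n)))
    return "constant" if probs[0] > 0.99 else "balanced"

n = 3
constant_oracle = QuantumCircuit(n + 1)      # f(x) = 0: the oracle does nothing
balanced_oracle = QuantumCircuit(n + 1)
for q in range(n):
    balanced_oracle.cx(q, n)                 # f(x) = x0 xor x1 xor x2: balanced

print(deutsch_jozsa(constant_oracle, n))     # constant
print(deutsch_jozsa(balanced_oracle, n))     # balanced
```

If the input register comes back all zeros the function was constant; anything else means balanced. One query suffices because the oracle's answers for every input interfere in a single pass.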
Then, in 1994, Peter Shor, a researcher at Bell Labs, found something extraordinary. Shor showed that a sufficiently large quantum computer could factor large integers in polynomial time. Since the difficulty of factoring is the basis for the RSA cryptosystem, and Shor’s technique also breaks the discrete-logarithm schemes that cover most of the rest of public-key cryptography, the result had immediate and dramatic implications for national security. Funding for quantum computing research, which had been polite but limited, increased sharply within months. Three decades later the prospect of a Shor-capable machine continues to drive global efforts to migrate to post-quantum cryptography.
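The structure of the algorithm is worth a sketch. The quantum computer's only job is the hard middle step, finding the period (order) r of a^x mod N; everything around it is classical number theory. The toy code below shows that classical scaffolding for N = 15, with a brute-force loop standing in for the quantum period-finding subroutine.

```python
from math import gcd

def factors_from_order(N: int, a: int, r: int):
    """Shor's classical post-processing: given the order r of a modulo N,
    try to extract a nontrivial factor pair of N."""
    if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
        return None                     # unlucky choice of base a: pick another
    x = pow(a, r // 2, N)
    return gcd(x - 1, N), gcd(x + 1, N)

N, a = 15, 7
# Brute-force order finding, standing in for the quantum period-finding step.
r = next(k for k in range(1, N) if pow(a, k, N) == 1)
print(r, factors_from_order(N, a, r))   # 4 (3, 5)
```

The brute-force loop takes exponential time as N grows; Shor's quantum subroutine finds the period in polynomial time, which is the entire source of the speedup.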
Two years later Lov Grover, also at Bell Labs, presented an algorithm for unstructured search that delivered a quadratic speedup over any classical method. The Grover speedup is more modest than Shor’s exponential advantage, but it applies to a vastly larger class of problems. Together, Shor’s algorithm and Grover’s algorithm gave the field its two flagship applications and the bulk of the popular interest that has followed it ever since.
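At the level of amplitudes, Grover's algorithm is a two-line loop: flip the sign of the marked entry, then reflect every amplitude about the mean. The sketch below works directly on a statevector rather than a circuit, assumes a single marked item, and is meant only to show the square-root scaling.

```python
import numpy as np

def grover_search(n_items: int, marked: int) -> int:
    """Statevector sketch of Grover's algorithm with one marked item."""
    state = np.full(n_items, 1 / np.sqrt(n_items))           # uniform superposition
    steps = int(np.floor(np.pi / 4 * np.sqrt(n_items)))      # ~ (pi/4) * sqrt(N) iterations
    for _ in range(steps):
        state[marked] *= -1                                   # oracle: phase-flip the target
        state = 2 * state.mean() - state                      # diffusion: invert about the mean
    return int(np.argmax(state ** 2))

print(grover_search(1024, marked=321))   # 321, after ~25 iterations rather than ~512 checks
```

For a million entries the same loop needs roughly 785 iterations where a classical scan would average half a million probes, which is the quadratic speedup in practice.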
Quantum Error Correction Arrives
In 1995 a different sort of paper appeared, again from Peter Shor. The new paper addressed a problem that had loomed over the entire enterprise from the beginning. Quantum states are notoriously fragile. The slightest interaction with the environment causes decoherence, the loss of the delicate superpositions that make quantum computation possible. Many serious researchers had quietly assumed this fragility would prove fatal. Shor’s 1995 paper showed how to encode a single logical qubit in nine physical qubits in such a way that errors could be detected and corrected without measuring the underlying state and destroying the computation.
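Shor's nine-qubit code is built from two nested layers of a simpler idea, the three-qubit repetition code, and that inner layer already shows the trick: parity checks reveal where an error happened without ever reading out the encoded amplitudes. The Qiskit sketch below is that inner layer only, with a deliberately injected error, an otherwise noiseless circuit, and the correction applied from the known syndrome rather than a measured one.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(5)            # qubits 0-2: data, qubits 3-4: syndrome ancillas
qc.ry(0.8, 0)                     # an arbitrary state a|0> + b|1> to protect
qc.cx(0, 1)
qc.cx(0, 2)                       # encode: a|000> + b|111>

qc.x(1)                           # inject a bit-flip error on the middle qubit

qc.cx(0, 3)
qc.cx(1, 3)                       # ancilla 3 records the parity of qubits 0 and 1
qc.cx(1, 4)
qc.cx(2, 4)                       # ancilla 4 records the parity of qubits 1 and 2
qc.x(1)                           # syndrome (1,1) points at qubit 1, so flip it back

sv = Statevector(qc)
print(sv.probabilities([3, 4]))    # [0, 0, 0, 1]: both parity checks fired
print(sv.probabilities([0, 1, 2])) # weight only on |000> and |111>: the data survived
```

This layer protects only against bit flips. Shor's contribution was to wrap a second repetition layer in the Hadamard basis around it, so that phase flips are caught the same way, giving nine physical qubits per logical qubit.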
Andrew Steane at Oxford produced an alternative seven-qubit code shortly afterwards, and in 1996 Robert Calderbank, Shor and Steane developed the broader CSS framework that unified these constructions with classical coding theory. Daniel Gottesman extended the work into the stabiliser formalism, providing the mathematical scaffolding for almost all subsequent quantum error correction. The threshold theorem, due to Dorit Aharonov, Michael Ben-Or, Alexei Kitaev and others in the late 1990s, showed that arbitrarily long quantum computations are possible if the underlying physical error rate falls below a certain critical value.
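The practical content of the threshold theorem is usually summarised by a heuristic scaling law; the exact form depends on the code and the noise model, but the commonly quoted approximation for the surface code is

$$ p_L \approx A \left(\frac{p}{p_{\mathrm{th}}}\right)^{\left\lfloor (d+1)/2 \right\rfloor}, $$

where $p$ is the physical error rate per operation, $p_{\mathrm{th}}$ the threshold (of order one per cent for the surface code), $d$ the code distance and $A$ a constant of order one. Below threshold, each increase in distance multiplies the suppression; above it, adding more qubits only adds more noise.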
These results fundamentally changed the conversation. Quantum computing was no longer a question of whether decoherence could be defeated. It was a question of whether engineers could push their devices below the threshold.
The First Hardware
The earliest quantum computing hardware did not look much like a computer. In 1995 Juan Ignacio Cirac and Peter Zoller proposed using individual ions, trapped in a vacuum and manipulated with lasers, as qubits. The first two-qubit gate of this kind was demonstrated almost immediately by David Wineland’s group at NIST Boulder. Wineland would later share the 2012 Nobel Prize in Physics with the French physicist Serge Haroche for ground-breaking methods of measuring and manipulating individual quantum systems.
A different and rather surprising platform emerged from nuclear magnetic resonance chemistry. In 1997 and 1998 a Stanford-MIT-IBM team led by Isaac Chuang and Neil Gershenfeld showed that the nuclear spin states of molecules in solution could be manipulated as qubits using pulse techniques developed for NMR spectroscopy. In 2001 the same group used a seven-qubit NMR computer to factor the number 15, the first experimental implementation of Shor’s algorithm. NMR machines did not scale, and the field eventually moved on, but the demonstrations gave quantum computing its first real taste of headlines.
The platform that would come to dominate the next two decades was superconducting circuits. In 1984 and 1985, the same period in which Deutsch published his universal quantum computer paper, John Clarke, Michel Devoret and John Martinis published a series of experiments demonstrating that an electrical circuit large enough to be held in the hand could exhibit quantum mechanical tunnelling and discrete energy levels. The Nobel Prize in Physics 2025 was awarded jointly to John Clarke, Michel H. Devoret and John M. Martinis “for the discovery of macroscopic quantum mechanical tunnelling and energy quantisation in an electric circuit”, recognising that those experiments laid the groundwork for the entire field of superconducting qubits. Beginning in the late 1990s, groups at NEC in Tokyo, CEA Saclay in France and Yale in the United States demonstrated the first working superconducting qubits, and the transmon design introduced at Yale in 2007 gave the technology the coherence and fidelity that made larger devices feasible.
Cloud Quantum Computing and the Race for Supremacy
The Canadian company D-Wave Systems began selling commercial quantum annealers in 2011, although the question of whether their devices were “really” quantum, and whether they offered any speedup over classical methods, would generate years of controversy. A more straightforwardly transformative event came in May 2016, when IBM put a five-qubit superconducting processor on the public internet. The IBM Quantum Experience, as the service was called, allowed anyone with a web browser to write and run quantum circuits on real hardware. Within months the user base had grown into the tens of thousands, and the cloud-based delivery model has dominated the industry ever since.
The vocabulary for what came next already existed. John Preskill of Caltech had coined the term “quantum supremacy” in 2012 to describe the moment when a quantum device would perform a calculation no classical computer could replicate in any reasonable amount of time, and in 2018 he added the more lasting acronym NISQ, for noisy intermediate-scale quantum, to characterise the kind of devices that would dominate the next decade.
Quantum supremacy duly arrived, contentiously, in October 2019. Google’s Sycamore processor, built and operated by a team led by John Martinis, was used to sample from the output distribution of a random quantum circuit on 53 qubits. The Google researchers argued that simulating the same circuit classically would take the world’s fastest supercomputer around ten thousand years. IBM disputed the figure, suggesting better classical algorithms could compress the comparison considerably. A more decisive and harder-to-contest demonstration came a year later from Pan Jianwei’s group at the University of Science and Technology of China, who used a photonic device called Jiuzhang to perform a Gaussian boson sampling task that no classical method could feasibly approach.

A Software Stack and Microsoft’s Topological Bet
Hardware is only half the story. The decade after the IBM Quantum Experience launched in 2016 produced the first serious quantum software ecosystem. IBM released Qiskit, an open-source Python framework, in 2017. Google followed with Cirq in 2018. Microsoft, which had been investing in quantum computing through its Station Q research group at Santa Barbara since 2005, released the Q# language and Quantum Development Kit in late 2017. Rigetti’s Forest, Xanadu’s PennyLane and the Amazon Braket service rounded out the major platforms. By 2025 most of these tools were converging on a similar stack of features, with the OpenQASM 3 intermediate representation acting as a common assembly language.
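The convergence is easiest to see at the interchange layer. The snippet below is a minimal illustration using Qiskit's built-in OpenQASM 3 exporter as of recent versions; the other frameworks have their own import and export paths, so treat this as a sketch of the pattern rather than a survey.

```python
from qiskit import QuantumCircuit, qasm3

bell = QuantumCircuit(2, 2)
bell.h(0)                    # Hadamard on qubit 0
bell.cx(0, 1)                # entangle the pair
bell.measure([0, 1], [0, 1])

print(qasm3.dumps(bell))     # the same circuit as vendor-neutral OpenQASM 3 text
```

The printed text can, in principle, be handed to any toolchain that consumes OpenQASM 3, which is what makes the format useful as a common assembly language.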
Microsoft’s hardware bet during this period was deliberately unorthodox. Rather than chase superconducting transmons or trapped ions, the company committed to topological qubits based on Majorana zero modes, a class of quasiparticle whose existence had been predicted but never confirmed. The strategy ran into serious trouble when a high-profile 2018 Nature paper claiming evidence of Majoranas was retracted in 2021 after independent groups questioned the data. Microsoft persisted, and in February 2025 announced Majorana 1, an eight-qubit chip built on a new class of materials it called topoconductors. The accompanying Nature paper has drawn significant scrutiny, with parts of the physics community arguing that decisive evidence of topological qubits has not yet been published. Microsoft has continued to present results at conferences and has also partnered with Atom Computing on a parallel neutral-atom programme. The episode is a useful reminder that progress in this field is not always monotonic.
China’s effort, meanwhile, ran on a different timetable and a different set of priorities. The Hefei National Laboratory under Pan Jianwei produced not only the Jiuzhang photonic devices but also the Zuchongzhi superconducting series and a pioneering programme in satellite-based quantum key distribution. The Micius satellite, launched in 2016, demonstrated entanglement distribution across more than a thousand kilometres in 2017, an achievement that sits at the intersection of quantum computing and the parallel programme to build a quantum internet.
The Logical Qubit Era
By the early 2020s the focus of the field was shifting decisively away from raw qubit counts and towards quality. IBM unveiled its 433-qubit Osprey processor in 2022 and the 1,121-qubit Condor in 2023, but the company’s own roadmap explicitly framed scale as a means rather than an end. The real prize was the logical qubit, a stable computational unit built from many noisy physical qubits whose collective behaviour was protected from error by the surrounding code.
That prize, or at least the first credible glimpse of it, arrived in December 2024. Google announced the Willow chip, a 105-qubit superconducting processor designed from the start with surface-code error correction in mind. Willow demonstrated that increasing the size of the encoded logical qubit drove the logical error rate down by a factor of roughly two with each step, culminating in a distance-7 surface code with a logical error rate of 0.143 per cent per cycle. For the first time, an experimental device had crossed below the surface code threshold in hardware, with the result published in Nature. Twenty-nine years after Shor’s 1995 paper proposed quantum error correction, the threshold had finally been crossed.
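In the error-correction community's shorthand, the quantity being reported is the suppression factor

$$ \Lambda = \frac{\varepsilon_d}{\varepsilon_{d+2}}, $$

the ratio of logical error rates per cycle as the code distance grows by two. Above threshold $\Lambda < 1$ and bigger codes make things worse; Willow's reported value of roughly two means each step up in distance halves the logical error rate, which is exactly what the phrase “below threshold” quantifies.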

In the same period the neutral atom platform began to assert itself with growing confidence. Mikhail Lukin’s group at Harvard, working with the startup QuEra, had demonstrated 48 logical qubits in late 2023. Microsoft, partnering with Atom Computing, pushed those numbers further the following year, encoding 28 logical qubits onto 112 atoms and entangling 24 of them, the highest count of entangled logical qubits on record at the time. Trapped-ion systems from Quantinuum and IonQ continued to set the pace on gate fidelity, with Oxford Ionics, acquired by IonQ in 2025, demonstrating physical two-qubit gate fidelities of 99.99 per cent.
Other Roads: Photons, Silicon and the Quieter Platforms
The narrative so far has concentrated on the platforms that have generated the most headlines: superconducting circuits, trapped ions, neutral atoms and topological qubits. The history is incomplete without the parallel roads.
Photonic quantum computing dates back to the Knill-Laflamme-Milburn proposal of 2001, which showed that linear optics, single-photon sources and photon-counting detectors are sufficient in principle for scalable quantum computation. The KLM scheme was widely viewed as theoretically elegant but practically forbidding. Two decades of patient engineering have changed that assessment. PsiQuantum, founded in California in 2016 by a team led by the Australian physicist Jeremy O’Brien, has raised more than two billion dollars to pursue a million-qubit photonic chip fabricated on standard 300mm silicon wafers, with major data centre projects now underway in Brisbane and Chicago. Xanadu in Toronto demonstrated photonic quantum advantage with its 216-mode Borealis processor in 2022, launched its modular Aurora system in 2025, and went public in 2026 as the first pure-play photonic quantum company. ORCA Computing in the UK and Quandela in France round out a small but well-capitalised photonic ecosystem.
A second parallel thread runs through silicon spin qubits. The original proposal came from Daniel Loss and David DiVincenzo in 1998, with parallel donor-based work by Bruce Kane the same year. The attraction is straightforward: if quantum information can be encoded in the spin of a single electron held in a quantum dot, then the entire machinery of CMOS chip fabrication is in principle available to scale the technology. The bet is paying off, slowly. Intel has demonstrated 99.9 per cent gate fidelity on silicon spin qubits manufactured on standard 300mm wafers, while Diraq, spun out of UNSW Sydney by Andrew Dzurak, has achieved comparable fidelities at temperatures above one kelvin, opening the door to co-locating control electronics on the same chip.
Other platforms continue to find their constituencies. Nitrogen-vacancy centres in diamond have proved their worth in quantum sensing rather than computation, but small-scale processors built on them, including Quantum Brilliance’s room-temperature diamond systems, have begun to appear. The honest summary as of 2026 is that no platform has won. The architectural diversity may itself be a feature.
2025: The Year of Fault Tolerance, and Into 2026
If 2024 was the year quantum error correction crossed the threshold, 2025 was the year the industry began betting on it without hedging. Error correction became the organising priority of every serious roadmap to utility-scale machines, treated less as a research problem than as the competitive differentiator. Public investment kept pace with private enthusiasm. Japan now leads public quantum investment with nearly $8 billion committed, much of it allocated in 2025. The United States follows with $7.7 billion, driven in part by DARPA’s Quantum Benchmarking Initiative.
The Royal Swedish Academy of Sciences chose the same year to recognise the foundations of the dominant qubit platform. The 2025 Nobel Prize in Physics went to John Clarke, Michel Devoret and John Martinis for their 1984-85 experiments on macroscopic quantum tunnelling. The award was something of a closing of the loop. The descendants of those original Berkeley experiments were now producing the chips that ran below the surface code threshold, and one of the prize winners had personally led the supremacy demonstration on Sycamore six years earlier.
Other 2025 milestones told the same story of an industry maturing rather than transforming. Fujitsu and RIKEN delivered a 256-qubit superconducting machine in April. Microsoft announced Majorana 1 in February. In November, Lukin’s group at Harvard, with QuEra and MIT, demonstrated a fault-tolerant architecture running on 448 atomic qubits, executing algorithms with up to 96 logical qubits. The same month, IBM unveiled its Nighthawk processor, designed to support up to 5,000 two-qubit gates and aimed squarely at producing the first verifiable quantum advantage by the end of 2026. IBM also published its detailed roadmap to fault tolerance, projecting a Starling processor with hundreds of logical qubits by 2029 and laying out the qLDPC error-correction codes required to make the architecture work. In a development that should focus minds in the security community, one widely discussed paper estimated that breaking RSA encryption may require only about one million qubits, down from earlier estimates of 20 million.
That last figure deserves a moment of reflection. In 1994 Shor’s algorithm was a theoretical curiosity. By 2025 the engineering problem of building a Shor-capable machine had been refined to the point where companies and governments could argue meaningfully about timelines and costs.
Early 2026 has continued the trend of consolidation and capital. In January, IonQ acquired the chip foundry SkyWater Technology for $1.8 billion, the first time a pure-play quantum company had bought a chipmaker outright. Pasqal announced a $2 billion SPAC merger to go public on Nasdaq. And in March, Xanadu completed its merger with Crane Harbor Acquisition Corp and began trading as XNDU on Nasdaq and the Toronto Stock Exchange, the first publicly traded pure-play photonic quantum company.
At a Glance: The Modern Timeline
| Year | Milestone |
|---|---|
| 1980 | Paul Benioff publishes the first quantum mechanical model of a Turing machine. Yuri Manin makes a parallel argument in Computable and Uncomputable. |
| 1981 | Richard Feynman delivers “Simulating Physics with Computers” at the MIT-IBM conference at Endicott House. |
| 1984 | Charles Bennett and Gilles Brassard introduce the BB84 quantum key distribution protocol. |
| 1985 | David Deutsch describes the universal quantum computer. Clarke, Devoret and Martinis demonstrate macroscopic quantum tunnelling in superconducting circuits. |
| 1992 | The Deutsch-Jozsa algorithm gives the first clear-cut example of exponential quantum advantage. |
| 1994 | Peter Shor’s polynomial-time factoring algorithm puts quantum computing on the national-security agenda. |
| 1995 | Shor publishes the first quantum error correction code. Cirac and Zoller propose trapped-ion qubits. |
| 1996 | Lov Grover’s quantum search algorithm appears. The CSS error-correction framework is formalised. |
| 1998 | Chuang, Gershenfeld and Kubinec demonstrate the first two-qubit nuclear magnetic resonance quantum computer. |
| 2001 | A seven-qubit NMR machine factors the number 15 using Shor’s algorithm. |
| 2011 | D-Wave Systems begins selling commercial quantum annealers. |
| 2012 | John Preskill coins “quantum supremacy”. Wineland and Haroche share the Nobel Prize in Physics. |
| 2016 | IBM Quantum Experience puts a five-qubit superconducting processor on the public internet. |
| 2017 | Qiskit and Q# launch, establishing the first serious quantum software ecosystem; Google’s Cirq follows in 2018. |
| 2019 | Google’s 53-qubit Sycamore claims the first demonstration of quantum supremacy. |
| 2020 | The Jiuzhang photonic device at USTC demonstrates Gaussian boson sampling beyond classical reach. |
| 2022 | IBM unveils the 433-qubit Osprey processor. Aspect, Clauser and Zeilinger share the Nobel Prize in Physics. |
| 2023 | IBM crosses 1,000 qubits with the Condor processor. QuEra demonstrates 48 logical qubits on a neutral-atom array. |
| 2024 | Google’s Willow chip is the first device to demonstrate below-threshold surface code error correction. Microsoft and Atom Computing entangle 24 logical qubits on neutral-atom hardware. |
| 2025 | Microsoft announces Majorana 1, billed as the first topological-qubit chip. Clarke, Devoret and Martinis share the Nobel Prize in Physics. IBM unveils the Nighthawk processor and publishes a roadmap to fault tolerance by 2029. Harvard and QuEra demonstrate a 448-atom fault-tolerant architecture. |
| 2026 | Xanadu becomes the first publicly traded pure-play photonic quantum company. IonQ acquires SkyWater Technology for $1.8 billion, the first outright chipmaker acquisition by a pure-play quantum company. IBM targets verifiable quantum advantage on Nighthawk by year-end. Pasqal announces a SPAC merger to go public. |
Projected Futures: What 2030 Could Look Like
Predicting the future of any computing technology is hazardous, and quantum computing has been the subject of more revised forecasts than most. With that caveat in place, the major industry roadmaps as of early 2026 paint a strikingly consistent picture of the next five years.
Most of the leading hardware companies are now publicly committed to delivering fault-tolerant systems by the end of the decade. IBM’s roadmap, released in mid-2025, targets a Starling processor with around 200 logical qubits and the ability to execute 100 million gates by 2029. Quantinuum has announced its Apollo system, intended to be a fully fault-tolerant universal machine by 2030. IonQ, having absorbed Oxford Ionics in 2025, is projecting logical error rates below one part in a trillion by the same year. IQM in Espoo and Atom Computing in California are aiming at the same horizon by different routes, with the Microsoft-Atom partnership planning a 50-logical-qubit machine called Magne for early 2027. None of these roadmaps will be hit on schedule in their entirety. The remarkable thing is that they exist at all, and that the milestones they describe are now physically credible rather than aspirational.
The intellectual lineage of the field remains visible in its current direction. Peter Shor’s 1994 algorithm continues to define the cryptographic timeline, and Lov Grover’s quadratic search remains the template against which broad-scope quantum advantage is measured. Seth Lloyd’s 1996 generalisation of Feynman’s simulation idea sits at the heart of the chemistry and materials applications that every roadmap targets, and the HHL algorithm he co-authored with Aram Harrow and Avinatan Hassidim in 2009 continues to underpin most of the quantum machine learning literature. David Deutsch continues to publish on the foundations of quantum computation, and the many-worlds interpretation he has long advocated still echoes through statements from working hardware teams.
The applications targeted are broadly the same across all the major roadmaps. Quantum simulation of chemistry and materials sits at the top of every list. Drug discovery, catalyst design, battery chemistry, fertiliser production and superconductor research are the recurring exemplars. Optimisation problems in logistics, finance and energy networks come next. Quantum machine learning sits in third place, more speculative but heavily funded. Cryptography, the application that originally put the field on the agenda, hovers in the background with a different set of stakeholders attached.
The cryptographic timeline has moved sharply over the past two years. Resource estimates for a Shor-capable machine have dropped from around twenty million physical qubits in 2019 to roughly one million in mid-2025, thanks to better algorithms, better codes and better architectures. That figure is still a stretch by any current measure. A reasonable consensus places the first cryptographically relevant quantum computer somewhere in the early 2030s, with significant uncertainty in either direction. The implication is that the global migration to post-quantum cryptography, already underway following the NIST standardisation in 2024, is no longer optional. The data being encrypted with classical methods today will, in many cases, still need to be confidential when the harvest-now-decrypt-later attacks become decryption attacks.
Two structural questions hang over the field. The first is the architecture question. The consensus is moving towards modular designs in which many smaller quantum processors are linked together with quantum networking, rather than a single monolithic chip with millions of qubits. This shifts attention towards photonic interconnects, quantum repeaters and the broader programme to build a quantum internet. The second is the workforce question. The Riverlane and Resonance Quantum Error Correction Report 2025 estimated that there are roughly 1,800 to 2,200 specialists in quantum error correction worldwide, against a projected need of five to sixteen thousand by 2030. Specialised QEC training takes the better part of a decade, and there is no obvious way to close the gap quickly.
Beyond 2030 the predictions become genuinely speculative. McKinsey’s most recent figures suggest that 72 per cent of executives and investors expect fault-tolerant quantum computing to be commercially viable by 2035. The market itself has been variously projected at twenty to a hundred billion dollars by the same date. These numbers are worth treating with epistemic humility. The history of computing forecasts is mostly a history of getting both directions wrong; the 1970s underestimated personal computing and overestimated artificial intelligence by several decades each. Quantum computing is statistically likely to disappoint someone, just not necessarily on the timescale anyone currently believes. What is no longer in question is the direction of travel. The next ten years will involve scaling rather than discovery, engineering rather than physics, and integration rather than isolation. The science is essentially settled. Whether the industry can deliver on the timelines it has now publicly committed to is, at last, a question that can be answered.
What Comes Next
The field arrives at 2026 with most of the foundational questions answered. Quantum computers are real. Some of them are useful for specialised tasks. Error correction works. Multiple hardware platforms remain competitive, and the consensus view is that they are likely to coexist rather than converge on a single winning approach. The unresolved questions are now those of scale, cost, networking and software rather than fundamental physics.
Looking back over the century and a quarter since Planck’s reluctant quantum, what stands out is how much of the current programme was anticipated by a small number of theorists working before any of the relevant hardware existed. Feynman saw the opportunity. Deutsch built the framework. Shor showed there was a problem worth solving and another problem that needed fixing. When the engineers finally caught up, they found the maps already drawn.
