The Scientific Prelude to Quantum Computing

From Planck’s 1900 quantum hypothesis to Bell’s theorem, the EPR paradox, and the no-cloning theorem — the 80-year scientific prelude that made quantum computing possible.

Quantum computing did not arrive in 1980. It arrived in 1900, when Max Planck reluctantly proposed that energy might come in discrete packets; again in 1905, when Einstein took the idea seriously enough to apply it to light; and again across the next eight decades, as the implications worked their way through theory, experiment and information science. By the time Paul Benioff, Yuri Manin, Richard Feynman and David Deutsch began asking whether quantum mechanics could be turned into a computational substrate, almost everything they needed was already in place. This article walks through the eighty-year scientific prelude to quantum computing, from the trouble with black-body radiation through Bell’s theorem, the Aspect experiments, the no-cloning theorem and BB84.

The Trouble With Black Bodies

The opening problem was prosaic. By the late nineteenth century, classical physics could describe the radiation given off by a hot body across most of the spectrum, but at short wavelengths, the predictions diverged from experiment in a way that nobody could fix. The mathematics insisted that an ideal radiator should emit infinite energy at ultraviolet frequencies, which was both physically absurd and contradicted by every measurement ever made. The discrepancy became known as the ultraviolet catastrophe, and through the 1890s it was the most embarrassing unresolved puzzle in theoretical physics.

On 14 December 1900, Max Planck presented a paper to the German Physical Society in Berlin proposing a solution that he himself disliked. If electromagnetic radiation could only be emitted in discrete packets of energy, with each packet proportional to the frequency through a small constant later named after him, then the catastrophe disappeared, and the observed black-body spectrum fell out of the mathematics correctly. Planck described the move as a desperate act, an algebraic trick to make the equations work. He spent the next two decades trying and failing to derive the same result without the discontinuity, and to the end of his life he remained uncomfortable with the idea that nature might actually be granular at the smallest scales.

The constant was extraordinarily small. Planck’s constant, in modern units, is approximately 6.63 × 10⁻³⁴ joule-seconds, which is why the granularity of the world is invisible at human scales. But the principle was a fundamental break with the previous two centuries of physics. Energy, it now appeared, did not flow continuously. It came in indivisible chunks. The theoretical framework that would absorb this idea, work out its consequences and eventually lead to quantum computing did not yet exist, but the constant of proportionality at its heart had now been introduced.
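To get a feel for that scale, here is a back-of-envelope sketch; the frequency of green light and the one-watt lamp are illustrative round numbers, not figures from Planck’s paper:

```python
# Energy of a single quantum at a given frequency, E = h * f.
H = 6.62607015e-34  # Planck's constant, J*s (exact by SI definition)

def quantum_energy(frequency_hz: float) -> float:
    """Energy in joules of one quantum of radiation at the given frequency."""
    return H * frequency_hz

# Green light, ~5.5e14 Hz: each quantum carries a few times 10^-19 J,
# so a 1 W source emits on the order of 10^18 quanta per second --
# far too many for the granularity to be noticeable at human scales.
e_green = quantum_energy(5.5e14)
quanta_per_second = 1.0 / e_green  # for a 1 W source
```

The numbers make Planck’s point directly: the chunks are real but so small that classical, continuous physics works perfectly well for everyday light.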

Einstein, Light Quanta and the Old Quantum Theory

Einstein’s 1905 paper on the photoelectric effect did to light what Planck had done to thermal radiation. The phenomenon at issue was the observation that light shining on certain metals knocks electrons free, but with a peculiar dependence on frequency rather than intensity that classical electromagnetic theory could not explain. Einstein proposed that light itself, not just its emission and absorption, came in discrete quanta, and that each quantum carried an energy equal to Planck’s constant multiplied by the frequency. The argument was published in the Annalen der Physik in March 1905, in the same year as the special theory of relativity and the equivalence of mass and energy, and it would later win him the 1921 Nobel Prize in Physics.

The proposal was more radical than Planck’s. Where Planck had treated the granularity as a property of the interaction between matter and radiation, Einstein treated it as a property of light itself. Most physicists, including Planck, found this implausible until Robert Millikan’s careful experiments, completed in 1915 and published the following year, confirmed the Einstein relationship to high precision. Light, the argument now ran, was both a wave and a particle depending on what one was measuring, and the same duality might apply to other things as well.

In 1913 Niels Bohr applied the new quantum ideas to atomic structure. His model of the hydrogen atom posited that electrons could only occupy specific allowed orbits around the nucleus, with transitions between orbits accompanied by the emission or absorption of single quanta of light. The Bohr model explained the discrete spectral lines of hydrogen, the most precisely measured atomic spectrum then known, with a precision that no classical model could approach. It also broke completely with the smooth, continuous trajectories that Newtonian physics had described for two and a half centuries. Eleven years later, in his 1924 doctoral thesis at the Sorbonne, Louis de Broglie completed the symmetry by proposing that matter, like light, also had a wave nature. The wavelength associated with a particle, by his hypothesis, was Planck’s constant divided by its momentum. Three years later, Clinton Davisson and Lester Germer at Bell Labs confirmed the prediction by observing electron diffraction from a nickel crystal. The wave-particle duality was real and went both ways. The stage was set for a complete reformulation of physics.
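De Broglie’s relation is simple enough to check numerically. The sketch below, with an accelerating voltage chosen to match the Davisson-Germer regime, is an illustration rather than a reconstruction of their experiment:

```python
# de Broglie wavelength, lambda = h / p, for an electron accelerated
# through a potential V (non-relativistic): p = sqrt(2 * m_e * e * V).
import math

H = 6.62607015e-34          # Planck's constant, J*s
M_E = 9.1093837015e-31      # electron mass, kg
E_CHARGE = 1.602176634e-19  # elementary charge, C

def de_broglie_wavelength_m(accelerating_volts: float) -> float:
    momentum = math.sqrt(2 * M_E * E_CHARGE * accelerating_volts)
    return H / momentum

# At 54 V the wavelength is about 0.167 nm, comparable to the atomic
# spacing in a nickel crystal -- which is why diffraction was visible.
wavelength = de_broglie_wavelength_m(54.0)
```

An electron wave a fraction of a nanometre long is exactly what a crystal lattice can diffract, which is why Davisson and Germer saw interference fringes where classical particles would have given none.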

The Birth of Quantum Mechanics

The actual reformulation arrived in two waves. In June and July 1925, the twenty-three-year-old Werner Heisenberg, recovering from hay fever on the North Sea island of Helgoland, worked out a new mathematical framework for atomic physics built around directly observable quantities like the frequencies and intensities of spectral lines, with non-commuting matrices standing in for the unobservable orbits of older models. The breakthrough is described in detail in the QZ retrospective on the quantum discoveries of 1925 and 1926. Heisenberg’s collaboration with Max Born and Pascual Jordan in Göttingen produced the first complete and logically consistent formulation of quantum mechanics, published in late 1925 as matrix mechanics.

A few months later, Erwin Schrödinger, then a professor at the University of Zurich, produced an entirely different but mathematically equivalent formulation. His 1926 wave equation described the time evolution of a quantum system via a continuous wave function rather than matrices, and it had the considerable advantage of resembling the kinds of differential equations that physicists had been working with for centuries. The Schrödinger equation became, and remains, the central calculational tool of non-relativistic quantum mechanics. It was the foundation on which much of the next century of atomic, molecular and condensed-matter physics would be built.

The two formulations turned out to be different mathematical languages for the same theory. Paul Dirac, in his 1927 transformation theory, demonstrated their equivalence and provided the more general framework, the Dirac formalism using bras and kets, that physicists still use today. Max Born, also in 1926, supplied the interpretive piece that nobody had quite wanted to admit. The wave function, he proposed, was not a physical wave at all. The square of its amplitude gave the probability of finding the particle at a given location. Quantum mechanics, on this reading, was an inherently statistical theory. It could not predict where any particular electron would go. It could only predict the probability distribution of where many electrons would go.
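Born’s rule can be made concrete with a toy two-outcome state; the amplitudes below are hypothetical values chosen purely for illustration:

```python
# Born's rule: the squared amplitude of the wave function gives the
# probability of each measurement outcome.
import math
import random

amp_up = 3 / 5    # hypothetical amplitude for outcome "up"
amp_down = 4 / 5  # hypothetical amplitude for outcome "down"
probs = [amp_up**2, amp_down**2]  # 0.36 and 0.64; must sum to 1
assert math.isclose(sum(probs), 1.0)

# One electron: only a random outcome. Many electrons: the distribution.
random.seed(0)
counts = {"up": 0, "down": 0}
for _ in range(10_000):
    outcome = "up" if random.random() < probs[0] else "down"
    counts[outcome] += 1
# counts approaches the 36/64 split as the sample grows
```

This is Born’s statistical reading in miniature: no single trial is predictable, but the ensemble converges on the squared-amplitude distribution.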

In 1927, Heisenberg published his uncertainty principle, demonstrating that the product of the uncertainties in position and momentum of any quantum particle could not be smaller than a fundamental limit set by Planck’s constant. The principle was not a statement about measurement error but about the structure of reality itself. Some pairs of properties, by the new theory, simply did not have simultaneous well-defined values. A year later, in 1928, Dirac produced the relativistic version of the wave equation. The Dirac equation predicted the existence of antimatter four years before it was experimentally observed, unified quantum mechanics with special relativity, and is generally regarded as one of the most beautiful equations in physics.
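One quick consequence of the uncertainty principle can be sketched numerically; the angstrom-scale confinement below is a standard back-of-envelope choice, not a calculation from Heisenberg’s paper:

```python
# Heisenberg bound: delta_x * delta_p >= hbar / 2. Confining an electron
# to an atom-sized region forces a minimum momentum spread, and hence a
# minimum kinetic-energy scale.
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg

delta_x = 1e-10                     # ~1 angstrom, atomic scale
delta_p_min = HBAR / (2 * delta_x)  # minimum momentum uncertainty
kinetic_scale_j = delta_p_min**2 / (2 * M_E)
kinetic_scale_ev = kinetic_scale_j / 1.602176634e-19
# comes out around 1 eV -- the right order for atomic binding energies
```

That the bound lands on electron-volt energies at atomic sizes is part of why the principle reads as a statement about the structure of atoms, not about clumsy measuring devices.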

Quantum mechanics is the branch of physics that describes the behaviour of matter and energy at the smallest scales, where particles exhibit both wave-like and particle-like properties and outcomes are governed by probabilities rather than certainties.

The Solvay Years and the Bohr-Einstein Debate

The famous 1927 Solvay Conference in Brussels brought together nearly everyone who had contributed to the new theory. Planck, Einstein, Bohr, Heisenberg, Schrödinger, Dirac, Born, de Broglie, Pauli, Curie and the rest of the twenty-nine invited luminaries gathered to discuss what the theory actually meant. The photograph from that conference, with the attendees in formal dress arranged on the steps of the Institut International de Physique Solvay, has become one of the most reproduced images in the history of science. Seventeen of the twenty-nine attendees had won or would eventually win Nobel Prizes.

The disagreement at Solvay was less about the equations than about their interpretation. Bohr and Heisenberg held to what would become known as the Copenhagen interpretation, in which the act of measurement plays a fundamental role in determining what physical properties a quantum system actually has. Before measurement, the theory says, a property like the polarisation of a photon or the spin of an electron does not have a definite value at all. The measurement creates the value rather than revealing it.

Einstein found this deeply unsatisfying. His objection, which he repeated in different forms across the rest of his life, was that physics should describe an objective external reality independent of what observers happen to be doing. He suspected the new quantum mechanics was incomplete, that there must be deeper hidden variables underlying the statistical predictions. The most famous summary of his discomfort, “God does not play dice”, captured his view that the apparent randomness of quantum mechanics was a sign of the theory’s limits rather than a fundamental property of nature.

Through the late 1920s and into the 1930s, Einstein challenged Bohr in a sequence of thought experiments designed to expose inconsistencies in the new theory. Bohr answered each one. The challenges and the responses produced a remarkable refinement of the theory’s foundations, but did not change Einstein’s basic dissatisfaction. The Bohr-Einstein debate, examined in the QZ historical retrospective on quantum theory, became one of the most consequential disagreements in twentieth-century physics. It would eventually be settled not by argument but by experiment, though the apparatus needed to do so would not exist for another fifty years.

EPR, Spooky Action and Schrödinger’s Cat

In May 1935, Einstein, Boris Podolsky and Nathan Rosen published a four-page paper in Physical Review with a polite title that disguised its purpose. The paper, now generally known as the EPR paper, constructed a thought experiment in which two particles are prepared in a correlated state, allowed to separate to arbitrary distance, and then independently measured. According to quantum mechanics, the measurements on the two particles would remain perfectly correlated regardless of the separation. Either, the authors argued, the theory was incomplete and there must be hidden variables determining the outcomes in advance, or the measurement on one particle was instantaneously affecting the other across space, which they considered absurd.

Einstein referred to the second possibility, sceptically, as spooky action at a distance. The phrase was meant as a reductio ad absurdum. The implication, as Einstein saw it, was that quantum mechanics could not be the complete final theory, since no respectable physical theory could describe such a thing. The paper was, in effect, a more sophisticated and rigorous version of the same objection Einstein had been making at the Solvay conferences for the previous decade.

Bohr’s response, also published in Physical Review later in 1935, accepted the strange correlations but denied that they constituted action at a distance in any operationally meaningful sense, since neither party to the experiment can use the correlations to send a signal faster than light. That observation, later formalised as the no-signalling principle, was correct as a matter of practical physics but did not address Einstein’s deeper concern about the underlying nature of reality.

A few months after the EPR paper, Schrödinger published his own response in Die Naturwissenschaften, in which he gave the entanglement phenomenon its modern name, “Verschränkung”, and proposed his now-famous thought experiment to dramatise the implications. A cat sealed in a box with a quantum-triggered radioactive decay mechanism, by the strict reading of quantum mechanics, would be in a superposition of alive and dead states until the box was opened and the wave function collapsed. The thought experiment was meant to illustrate how implausible the Copenhagen interpretation became when applied to macroscopic objects. The cat was not meant as a description of how nature actually works. It was meant as an indictment of the orthodoxy that, taken seriously, seemed to claim it did.

The EPR paper, Schrödinger’s response and the cat scenario together set up the conceptual framework that would dominate the foundations of quantum mechanics for the rest of the twentieth century. Three things were now clear. Quantum mechanics made strange predictions about correlated systems. The interpretation of those predictions was philosophically contested. And no experiment yet existed that could distinguish between Bohr’s view and Einstein’s.

Is the cat dead or alive? Schrödinger’s cat is a thought experiment in which a cat sealed in a box with a quantum-triggered poison is, until observed, considered to be simultaneously alive and dead, illustrating the paradox of applying quantum superposition to everyday objects.

Information Theory Comes In

A parallel scientific thread, initially unconnected to quantum mechanics, was developing in a quite different domain through the 1940s and 1950s. Claude Shannon’s 1948 paper “A Mathematical Theory of Communication” at Bell Labs introduced the formal concept of information measured in bits, demonstrated the theoretical limits of error-free communication over noisy channels, and effectively founded the modern field of information theory. Shannon’s work was a complete and rigorous mathematical theory of classical communication and storage, and it would prove indispensable when, decades later, the question of how to do the same for quantum systems became urgent.
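Shannon’s central quantity is easy to state concretely; the biased-coin probabilities below are illustrative:

```python
# Shannon entropy: H = -sum(p_i * log2(p_i)), measured in bits per symbol.
import math

def entropy_bits(probabilities):
    """Average information per symbol for a given outcome distribution."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

fair = entropy_bits([0.5, 0.5])    # a fair coin carries exactly 1 bit
biased = entropy_bits([0.9, 0.1])  # a biased coin carries ~0.469 bits
```

The same formula sets the compression and channel-capacity limits Shannon proved, and its quantum generalisation (the von Neumann entropy) would later play the analogous role for quantum information.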

Through the 1960s a small group of physicists at IBM and elsewhere began to ask whether physical limits on computation existed. Rolf Landauer, working at IBM in 1961, proved that any irreversible logical operation must dissipate at least a fixed minimum amount of energy as heat, an amount now known as the Landauer limit. The result connected information processing to thermodynamics in an unexpected way. Computation, on Landauer’s reading, was not just an abstract mathematical activity but a physical process subject to physical constraints.
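The Landauer limit itself is a one-line formula, and the room-temperature figure below is a standard back-of-envelope illustration:

```python
# Landauer's principle: erasing one bit dissipates at least k_B * T * ln(2).
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact by SI definition)

def landauer_limit_j(temperature_k: float) -> float:
    """Minimum heat, in joules, dissipated by one irreversible bit erasure."""
    return K_B * temperature_k * math.log(2)

# At room temperature (~300 K) the floor is about 2.9e-21 J per bit --
# far below what 1961-era logic dissipated, but a hard physical limit
# for any irreversible logical operation.
per_bit = landauer_limit_j(300.0)
```

The number is tiny, which is why the principle mattered conceptually long before it mattered in engineering: it established that logic has a thermodynamic cost at all.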

In 1973, Charles Bennett at IBM demonstrated that classical computation could, in principle, be made fully reversible, sidestepping the Landauer limit. The result required a particular logical structure in which no information was ever discarded, but it showed that the connection between computation and dissipation was not absolute. Reversible classical computing remained a curiosity for some years, but it would become foundational when quantum computing emerged, since the unitary evolution of a quantum system is necessarily reversible. The Landauer-Bennett line of work prepared the conceptual ground for treating quantum systems as computational devices long before anyone proposed actually doing so. It also seeded a mathematical and physical sensibility, common to both Bennett and his eventual collaborator Gilles Brassard, that would matter when the time came to invent quantum cryptography.

Bell’s Theorem

In 1964, during a year’s leave from his post at CERN in Geneva, the Northern Irish physicist John Stewart Bell produced the result that would eventually settle the Bohr-Einstein dispute. Bell’s paper, “On the Einstein Podolsky Rosen Paradox”, published in the short-lived journal Physics, demonstrated that any local hidden variable theory of the kind Einstein had hoped for must obey a specific mathematical inequality. Quantum mechanics, by direct calculation, predicted that this inequality would be violated for certain measurement configurations. The two pictures of reality, in other words, made different experimental predictions. The question that had seemed purely philosophical for thirty years was, in principle, decidable in the laboratory.

The Bell inequalities were a remarkable mathematical achievement. They translated the metaphysics of the EPR debate into a precise empirical test. Either Einstein’s intuition was correct and certain correlations between separated measurements would be bounded above, or quantum mechanics was correct and those correlations could exceed the bound by a calculable amount. There was no longer any room for ambiguity. The experiment had only to be performed.
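One standard way to make the bound concrete is the CHSH combination of correlations, the form most experiments actually test. The sketch below uses the textbook singlet-state correlation and the angles that maximise the quantum value:

```python
# CHSH form of Bell's inequality. Local hidden variable theories obey
#   S = |E(a,b) - E(a,b') + E(a',b) + E(a',b')| <= 2.
# Quantum mechanics, for a singlet pair, gives E(a,b) = -cos(a - b),
# which reaches S = 2*sqrt(2) at the angles below.
import math

def E(a: float, b: float) -> float:
    """Quantum correlation for measurements at angles a and b (radians)."""
    return -math.cos(a - b)

a, a_p = 0.0, math.pi / 2
b, b_p = math.pi / 4, 3 * math.pi / 4

S = abs(E(a, b) - E(a, b_p) + E(a_p, b) + E(a_p, b_p))
# S = 2.828... > 2: the classical bound is exceeded by a calculable amount
```

The gap between 2 and 2√2 is exactly the empirical daylight Bell opened up: any laboratory that measures S above 2 has ruled out the local hidden variables Einstein hoped for.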

The experiments turned out to be technically demanding. The first attempts, by John Clauser and Stuart Freedman at Berkeley in 1972, used cascade emission from calcium atoms to produce correlated photon pairs and measured their polarisations at varying angles. The results agreed with quantum mechanics and violated Bell’s inequality, but the experimental setup left several loopholes through which a sufficiently determined hidden-variable theory could in principle have escaped.

Entanglement is a quantum phenomenon in which two or more particles become linked so that measurements on one are correlated with measurements on the other, no matter how far apart they are.

Aspect, Zeilinger and the Closing of the Loopholes

Closing the loopholes took four decades. Alain Aspect, working at the Institut d’Optique in Orsay outside Paris, performed a sequence of experiments between 1981 and 1982 that addressed the most important loophole, the so-called locality loophole, by switching the measurement settings rapidly enough that no signal travelling at light speed could communicate the choice of setting from one detector to the other before measurement. Aspect’s results were dramatic and unambiguous. The Bell inequality was violated. Quantum mechanics was correct. Local hidden variable theories of the kind Einstein had hoped for could not describe nature.

Anton Zeilinger’s group at the University of Innsbruck, and later at the University of Vienna, extended the experiments through the 1990s and 2000s with progressively more demanding setups. They closed the detection loophole, which concerned the possibility that the photons being measured were a biased subset of those produced. They closed the locality loophole more rigorously than Aspect had, with measurement choices made by genuinely random processes at sufficient distances. They demonstrated entanglement over progressively longer distances, eventually including ground-to-satellite tests with the Chinese Micius satellite in 2017.

The decisive experimental confirmation came in 2015, when Bas Hensen, Ronald Hanson and colleagues at Delft University of Technology performed a loophole-free Bell test using nitrogen-vacancy centres in diamond separated by 1.3 kilometres on the university’s campus. The Delft experiment closed the detection and locality loopholes simultaneously for the first time. Subsequent experiments by Zeilinger’s group in Vienna and by NIST in Boulder confirmed the result with photonic systems through 2015 and 2016. Quantum mechanics was vindicated. The non-local correlations Einstein had found unacceptable were a real feature of nature.

In October 2022, the Royal Swedish Academy of Sciences awarded the Nobel Prize in Physics jointly to John Clauser, Alain Aspect and Anton Zeilinger for their experimental work on entangled photons and the violation of Bell’s inequalities. The prize was, in effect, an official recognition that the Einstein-Bohr debate had been settled, more than half a century after Einstein’s death. Bohr had been right. The same Nobel Prize implicitly recognised the foundational science underlying quantum information processing as a whole, and the field that had emerged from the same body of work would, by the time the prize was awarded, be a major industrial enterprise.

No-Cloning, Teleportation and the Pre-Computing Quantum Information Era

While the Bell test programme was working through the 1970s and 1980s, a separate set of theoretical results was establishing that quantum mechanics could be treated as a substrate for information processing in ways that classical mechanics could not. Stephen Wiesner, then a graduate student at Columbia, wrote a paper around 1970 proposing the use of quantum states for unforgeable money and for what he called conjugate coding. The paper was not published until 1983, but Wiesner had circulated it informally and it had reached, among others, Charles Bennett at IBM.

In 1982, William Wootters and Wojciech Zurek published a short paper in Nature, and Dennis Dieks published an independent parallel result in Physics Letters A, demonstrating what became known as the no-cloning theorem. The theorem proved that an arbitrary unknown quantum state cannot be perfectly copied. The result was a direct consequence of the linearity of quantum mechanics, and it had two profound implications. The first was that quantum mechanics imposed limitations on information processing that had no classical analogue. The second, more constructive, was that quantum states had a property of unique identifiability that classical information lacks, and this property could in principle be used for cryptography.
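The linearity argument behind the theorem can be made concrete in a few lines of arithmetic; the sketch below is a standard textbook illustration, not a reconstruction of the 1982 papers:

```python
# Why linearity forbids cloning: a gate that copies the basis states
# |0> -> |00> and |1> -> |11> (this is just CNOT with a blank target)
# does NOT copy a superposition. Applied to |+> = (|0> + |1>)/sqrt(2),
# it produces an entangled Bell state, not the product |+>|+>.
import math

s = 1 / math.sqrt(2)

def kron2(psi, phi):
    """Tensor product of two single-qubit amplitude vectors."""
    return [psi[0]*phi[0], psi[0]*phi[1], psi[1]*phi[0], psi[1]*phi[1]]

plus = [s, s]                      # |+>
input_state = kron2(plus, [1, 0])  # |+> tensor |0>, i.e. [s, 0, s, 0]

# CNOT swaps the |10> and |11> amplitudes (basis order |00>,|01>,|10>,|11>)
cloned = [input_state[0], input_state[1], input_state[3], input_state[2]]

target = kron2(plus, plus)  # what a true cloner would have to output

# cloned = [1/sqrt(2), 0, 0, 1/sqrt(2)] -- a Bell state -- while
# target = [0.5, 0.5, 0.5, 0.5]; no linear gate can produce the latter
# for every unknown input, which is the content of the theorem.
```

The same calculation, run in reverse, is what makes quantum cryptography work: an eavesdropper cannot copy a photon in an unknown state and forward a perfect original.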

In 1984, Charles Bennett at IBM and Gilles Brassard at the University of Montreal presented a paper at the IEEE International Conference on Computers, Systems and Signal Processing in Bangalore that turned the no-cloning theorem into a working cryptographic protocol. The protocol, BB84 after their initials and the year, used the polarisation states of single photons to distribute a shared cryptographic key between two parties in such a way that any eavesdropper attempting to intercept the photons would necessarily disturb them and reveal their presence. The security of BB84 did not depend on computational assumptions. It depended on the laws of physics. If quantum mechanics was correct, BB84 was unconditionally secure.
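The flavour of the protocol can be captured in a toy simulation. The sketch below is a deliberate simplification: random bits stand in for single photons, the two bases are labelled "+" and "x", and the eavesdropper mounts the simplest intercept-resend attack:

```python
# Toy BB84 sifting round (simulation only; real BB84 needs single photons
# and an authenticated classical channel for basis comparison).
import random

random.seed(1)
N = 2000
alice_bits = [random.randint(0, 1) for _ in range(N)]
alice_bases = [random.choice("+x") for _ in range(N)]
bob_bases = [random.choice("+x") for _ in range(N)]

def measure(bit, prep_basis, meas_basis):
    # Matching bases reproduce the sent bit; mismatched bases give a coin flip.
    return bit if prep_basis == meas_basis else random.randint(0, 1)

def run(eavesdrop):
    bob_bits = []
    for bit, pb, mb in zip(alice_bits, alice_bases, bob_bases):
        if eavesdrop:
            # Eve measures in a random basis and resends what she saw.
            eve_basis = random.choice("+x")
            bit = measure(bit, pb, eve_basis)
            pb = eve_basis
        bob_bits.append(measure(bit, pb, mb))
    # Sifting: keep only positions where Alice's and Bob's bases matched.
    sifted = [(a, b) for a, b, ab, bb in
              zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
    return sum(a != b for a, b in sifted) / len(sifted)

qber_quiet = run(eavesdrop=False)  # no errors in the sifted key without Eve
qber_spied = run(eavesdrop=True)   # ~25% errors: the attack leaves a trace
```

The quarter of the sifted key that Eve corrupts is the whole point: by publicly comparing a random sample of their key bits, Alice and Bob detect any interception, which is the disturbance-reveals-eavesdropping guarantee the prose above describes.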

BB84 was the first practical application of quantum information processing. It was demonstrated experimentally for the first time by Bennett and Brassard themselves in 1989, with photons travelling through 32 centimetres of free space on an optical bench at IBM’s Yorktown Heights laboratory. By the early 2000s, commercial quantum key distribution systems were operating over kilometre-scale distances, and the field of quantum cryptography had become a sub-industry in its own right. Quantum cryptography was, in this sense, the first commercially deployed quantum information technology, predating quantum computing as a practical enterprise by more than two decades.

The Handoff

By the early 1980s the scientific prelude was substantially complete. Quantum mechanics existed as a fully developed theory, with consistent mathematical foundations from Schrödinger, Heisenberg, Dirac and Born. The Bell experiments had established that its non-local predictions were real. Information theory had a developed mathematical framework from Shannon. Reversible classical computing existed as a known concept thanks to Bennett’s 1973 work. The no-cloning theorem had identified a key non-classical feature of quantum information. BB84 had demonstrated that quantum systems could perform information-processing tasks that had no classical analogue.

The next step was almost obvious in retrospect. If quantum systems could be used for cryptographic key distribution, what other information-processing tasks could they perform? In 1980, Paul Benioff at Argonne National Laboratory published the first paper proposing a quantum mechanical model of a Turing machine, demonstrating that a quantum system could in principle simulate a classical computer. The same year, Yuri Manin in Moscow proposed, in a short note, that quantum systems might be used to compute things classical systems could not. In May 1981, at the first conference on the physics of computation at MIT’s Endicott House, Richard Feynman delivered the lecture, later published in 1982 as “Simulating Physics with Computers”, in which he argued that simulating quantum systems on classical computers was inherently inefficient, and that a computer built from quantum elements would be needed instead. Four years later, in 1985, David Deutsch at the University of Oxford published “Quantum Theory, the Church-Turing Principle and the Universal Quantum Computer” in the Proceedings of the Royal Society A, providing the first formal model of a universal quantum computer.

Where this article ends, the main quantum computing history begins. The science needed to make quantum computing possible had been built, paper by paper and experiment by experiment, across eighty-five years of theoretical and experimental physics. The questions that remained were no longer about whether quantum mechanics was correct or what it meant. They were about what one could do with it.

At a Glance: The Scientific Prelude Timeline

Year | Milestone | Key figures
1900 | Black-body radiation hypothesis proposes that energy comes in discrete quanta | Max Planck
1905 | Photoelectric effect explained by treating light as composed of quanta | Albert Einstein
1913 | Atomic model with discrete electron orbits explains hydrogen spectral lines | Niels Bohr
1924 | Matter waves proposed as a universal property of all particles | Louis de Broglie
1925 | Matrix mechanics introduces the first complete formulation of quantum mechanics | Werner Heisenberg, Max Born, Pascual Jordan
1926 | Wave equation provides an equivalent and more familiar mathematical framework | Erwin Schrödinger
1926 | Probabilistic interpretation of the wave function | Max Born
1927 | Uncertainty principle is published | Werner Heisenberg
1927 | Solvay Conference in Brussels formalises the Bohr-Einstein debate | Bohr, Einstein and others
1928 | Relativistic wave equation predicts antimatter | Paul Dirac
1935 | EPR paper challenges the completeness of quantum mechanics | Einstein, Podolsky, Rosen
1935 | Schrödinger’s cat thought experiment; term “entanglement” introduced | Erwin Schrödinger
1948 | Mathematical theory of communication founds modern information theory | Claude Shannon
1961 | Landauer’s principle connects information processing to thermodynamics | Rolf Landauer
1964 | Bell’s theorem turns the foundational dispute into an experimental question | John Stewart Bell
1972 | First experimental Bell test using cascade emission from calcium atoms | John Clauser, Stuart Freedman
1973 | Reversible classical computing demonstrated as theoretically possible | Charles Bennett
1981-82 | Aspect experiments close the locality loophole in Bell tests | Alain Aspect
1982 | No-cloning theorem published in Nature | Wootters, Zurek (and independently Dieks)
1984 | BB84 quantum cryptography protocol presented at IEEE Bangalore conference | Charles Bennett, Gilles Brassard
2015 | First loophole-free Bell test using nitrogen-vacancy centres in diamond | Hensen et al., Delft
2022 | Nobel Prize in Physics awarded for the experimental Bell test programme | Clauser, Aspect, Zeilinger
