Quantum Processing Units (QPUs) promise to solve certain classes of problems dramatically faster than classical computers. They could break widely used encryption schemes, tackle hard optimization problems, and accelerate some machine learning workloads, and they are also being explored for quantum simulation, metrology, and cybersecurity applications.
However, QPUs face serious challenges, chief among them the need for robust error correction and the difficulty of scaling up while maintaining precise control over qubits. Despite these challenges, QPUs hold immense promise for solving complex problems in fields like chemistry and materials science, and their development is driving innovation in adjacent areas such as quantum algorithms and software.
QPU (Quantum Processing Unit)
The quest for computing power has driven innovation in the technology sector for decades, with each new breakthrough promising to revolutionize the way we live and work. In recent years, a new type of computing device has emerged, one that holds the potential to solve complex problems that have long plagued traditional computers. This device is known as a Quantum Processing Unit, or QPU.
At its core, a QPU is a processor designed to take advantage of the strange and counterintuitive properties of quantum mechanics. Unlike classical computers, which process information using bits that can exist in one of two states (0 or 1), a QPU uses quantum bits, or qubits, which can exist in a superposition of both states at once. Superposition means a register of n qubits can represent 2^n basis states simultaneously, and it is this property, combined with quantum interference, that allows a QPU to outperform classical machines on certain problems, in some cases exponentially. Furthermore, qubits are capable of becoming "entangled," meaning that the state of one qubit is directly correlated with the state of another, regardless of the distance between them.
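To make superposition concrete, the following sketch simulates a single qubit with plain NumPy (an illustrative statevector simulation, not code that runs on a QPU): a qubit is a normalized complex vector, and the Hadamard gate turns a definite |0⟩ into an equal superposition.

```python
import numpy as np

# A qubit state is a normalized complex vector a|0> + b|1>, with |a|^2 + |b|^2 = 1.
ket0 = np.array([1, 0], dtype=complex)

# The Hadamard gate maps |0> to an equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

psi = H @ ket0
print(psi)               # [0.707..., 0.707...]
print(np.abs(psi) ** 2)  # measurement probabilities: [0.5, 0.5]
```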
The implications of this technology are far-reaching and profound. For instance, a QPU could potentially crack the encryption schemes currently used to secure online transactions, while quantum techniques could also yield new cryptographic protocols that keep sensitive information safe from prying eyes. Additionally, a QPU could be used to simulate complex systems, such as molecular interactions, enabling breakthroughs in fields like medicine and materials science. As researchers continue to push the boundaries of what is possible with QPUs, it is clear that this technology has the potential to transform the way we approach problem-solving.
Defining Quantum Processing Units
A quantum processing unit (QPU) is a type of processor that uses the principles of quantum mechanics to perform calculations and operations on data. Unlike classical computers, which use bits to store and process information, QPUs use quantum bits or qubits. Qubits are unique in that they can exist in multiple states simultaneously, allowing for exponentially faster processing of certain types of data.
One key feature of QPUs is quantum parallelism: a single operation can act on a superposition of many input values at once, rather than on one input at a time. This allows QPUs to solve certain problems much faster than classical computers. For example, Shor's algorithm for factoring large numbers runs exponentially faster on a QPU than the best known classical factoring algorithms.
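The speedup in Shor's algorithm comes from using the QPU to find the period of f(x) = a^x mod N; the rest is classical number theory. The sketch below (plain Python, for illustration only) brute-forces the period classically, the one step a QPU would perform efficiently, to show how a period yields factors.

```python
from math import gcd

def find_period(a, N):
    """Brute-force the order r of a modulo N (smallest r with a**r % N == 1).
    This is the step Shor's algorithm delegates to the QPU."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_factor(N, a):
    """Classical pre- and post-processing of Shor's algorithm for a coprime base a."""
    assert gcd(a, N) == 1
    r = find_period(a, N)
    if r % 2 == 1:
        return None                       # odd period: retry with another base
    y = pow(a, r // 2, N)
    p, q = gcd(y - 1, N), gcd(y + 1, N)
    return (p, q) if 1 < p < N else None  # trivial factors: retry

print(shor_factor(15, 7))  # (3, 5): the period of 7 mod 15 is 4
```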
QPUs are also highly sensitive to their environment and require extremely low temperatures and precise control over their quantum states. This makes them notoriously difficult to build and maintain. Currently, most QPUs are small-scale and are used primarily for research purposes.
Several companies, including IBM, Google, and Rigetti Computing, are actively developing QPUs and working toward larger-scale devices suitable for practical applications. For example, IBM's Q System One, unveiled in 2019 as the first integrated quantum computing system designed for use outside the research lab, initially housed a 20-qubit QPU, and IBM has since announced processors with 53 and more qubits.
QPUs have many potential applications, including cryptography, optimization problems, and machine learning. They could also be used to simulate complex quantum systems, allowing for breakthroughs in fields such as chemistry and materials science.
The development of QPUs is still an active area of research, with many challenges remaining to be overcome before they can be widely adopted.
Evolution Of Classical Computing
The concept of classical computing dates back to the 19th century, when Charles Babbage proposed the idea of a mechanical computer, the Analytical Engine. This machine was designed to perform calculations and store data using punched cards and a central processing unit. Although the Analytical Engine was never built during Babbage’s lifetime, his ideas laid the foundation for modern classical computers.
In the 20th century, the development of electronic computers accelerated with the invention of the vacuum tube in the early 1900s. The first general-purpose electronic computer, ENIAC, was developed in the 1940s by John Mauchly and J. Presper Eckert. ENIAC used over 17,000 vacuum tubes to perform calculations and was programmed using patch cords and switches.
The invention of the transistor in 1947 eventually revolutionized classical computing, leading to smaller, faster, and more reliable machines. The first commercial computer in the United States, UNIVAC I, was released in 1951; it still relied on vacuum tubes, with fully transistorized computers arriving in the late 1950s. Together, these developments marked the beginning of the modern classical computing era.
The development of integrated circuits in the 1960s further accelerated the evolution of classical computing. Integrated circuits allowed for the integration of multiple transistors on a single chip, leading to even smaller and more powerful computers. The first microprocessor, Intel 4004, was released in 1971 and contained all the components of a computer’s central processing unit on a single chip.
The advent of personal computers in the 1980s democratized access to classical computing, making it possible for individuals to own and operate computers. This led to an explosion in software development, with the creation of operating systems like MS-DOS and Windows, and applications like word processors and spreadsheets.
Today, classical computers continue to evolve with advances in materials science, nanotechnology, and artificial intelligence. The development of quantum computing has also pushed the boundaries of classical computing, driving innovation in areas like parallel processing and machine learning.
Principles Of Quantum Mechanics Applied
As defined earlier, a QPU performs operations on qubits rather than classical bits. What sets it apart is how directly it applies the principles of quantum mechanics: because qubits can exist in multiple states simultaneously, a QPU can act on vast amounts of data in parallel.
The principles of superposition and entanglement are fundamental to the operation of a QPU. Superposition allows a qubit to exist in multiple states at once, whereas entanglement links two or more qubits so strongly that their joint state cannot be described qubit by qubit; measuring one instantly constrains the outcomes for the others. Quantum algorithms exploit this joint structure to process many pieces of data in a correlated way.
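A short NumPy sketch of entanglement (again a classical statevector simulation, not QPU code): applying a Hadamard and then a CNOT to two qubits produces a Bell state, whose measurement outcomes are perfectly correlated.

```python
import numpy as np

# Two-qubit states are length-4 vectors over the basis |00>, |01>, |10>, |11>.
ket00 = np.array([1, 0, 0, 0], dtype=complex)

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I2 = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],   # control = first qubit, target = second
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Hadamard on the first qubit, then CNOT: the Bell state (|00> + |11>)/sqrt(2).
bell = CNOT @ np.kron(H, I2) @ ket00

# Sampling in the computational basis yields only '00' or '11': perfect correlation.
probs = np.abs(bell) ** 2
print(np.random.choice(['00', '01', '10', '11'], size=10, p=probs))
```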
Quantum parallelism is another key principle applied in a QPU. By leveraging the principles of superposition and entanglement, a QPU can perform many calculations simultaneously, making it potentially much faster than classical computers for certain types of computations. This property makes QPUs particularly well-suited for tasks such as simulating complex systems, factoring large numbers, and searching vast databases.
The no-cloning theorem is an essential principle in the operation of a QPU. This theorem states that it is impossible to create a perfect copy of an arbitrary quantum state. As a result, qubits cannot be copied or replicated, which has significant implications for the design and implementation of QPUs.
Quantum error correction is another critical aspect of QPU design. Due to the fragile nature of quantum states, errors can easily occur during computation. Quantum error correction codes are used to detect and correct these errors, ensuring the integrity of the computation.
The principles of quantum measurement and wave function collapse are also essential in a QPU. When a qubit is measured, its state collapses from a superposition of multiple states to one specific state. This property has significant implications for the design of algorithms and the implementation of QPUs.
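Measurement and collapse can be simulated the same way. The sketch below samples an outcome according to the Born rule (probabilities are squared amplitude magnitudes) and replaces the state with the observed basis state, a simplified model of wave function collapse.

```python
import numpy as np

def measure(state):
    """Sample a computational-basis outcome via the Born rule and collapse the state."""
    probs = np.abs(state) ** 2
    outcome = np.random.choice(len(state), p=probs / probs.sum())
    collapsed = np.zeros_like(state)
    collapsed[outcome] = 1.0  # the post-measurement state is the observed basis state
    return outcome, collapsed

# For (|0> + |1>)/sqrt(2), repeated runs yield 0 or 1, each with probability 1/2.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
outcome, post = measure(plus)
print(outcome, post)
```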
Bits Vs Qubits: Fundamental Differences
A classical bit, the fundamental unit of information in classical computing, can exist in only one of two mutually exclusive states, 0 or 1, at any given moment.
In contrast, a qubit, the quantum equivalent of a bit, can exist in a superposition of both states simultaneously. This property allows a quantum computer to explore multiple possibilities at once, making qubits potentially far more powerful than classical bits for certain types of computations.
The no-cloning theorem, a fundamental principle in quantum mechanics, states that an arbitrary quantum state cannot be copied or cloned exactly. This means that qubits cannot be duplicated or replicated like classical bits, which can be easily copied and pasted.
Qubits are also highly sensitive to their environment, making them prone to decoherence, a process where the quantum state is lost due to interactions with the external environment. This sensitivity requires qubits to be carefully isolated and controlled to maintain their fragile quantum states. In contrast, classical bits are relatively robust and can operate in a wide range of environments without significant degradation.
The principles of superposition and entanglement enable qubits to perform certain types of computations that are not possible with classical bits. For example, Shor’s algorithm, a quantum algorithm for factorizing large numbers, relies on the principles of superposition and entanglement to achieve exponential speedup over classical algorithms. Similarly, Grover’s algorithm, a quantum algorithm for searching an unsorted database, uses the principles of superposition and entanglement to achieve quadratic speedup over classical algorithms.
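As an illustration of Grover's quadratic speedup, here is a minimal statevector simulation in NumPy (dense matrices, so only practical for a handful of qubits): an oracle flips the phase of the marked item, and a diffusion operator amplifies its amplitude over roughly sqrt(N) iterations.

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Minimal dense-matrix simulation of Grover's algorithm."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))  # uniform superposition (Hadamard on every qubit)

    oracle = np.eye(N)
    oracle[marked, marked] = -1          # phase-flip the marked basis state

    diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)  # inversion about the mean

    for _ in range(int(np.pi / 4 * np.sqrt(N))):        # ~sqrt(N) iterations suffice
        state = diffusion @ (oracle @ state)

    return int(np.argmax(np.abs(state) ** 2))

print(grover_search(4, marked=11))  # finds index 11 among 16 items in ~3 iterations
```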
The fragile nature of qubits requires sophisticated error correction techniques to maintain their quantum states. Quantum error correction codes, such as the surface code or the Gottesman-Kitaev-Preskill (GKP) code, are designed to detect and correct errors that occur due to decoherence or other environmental interactions. Classical bits also use error correction (ECC memory, for example), but because they are far more robust, the overhead is modest; for qubits, error correction dominates the design of any large-scale architecture.
The differences between bits and qubits have significant implications for the design and implementation of quantum algorithms and quantum computing architectures. Understanding these fundamental differences is crucial for harnessing the power of quantum computing to solve complex problems that are intractable with classical computers.
Quantum Speedup
Quantum parallelism, a fundamental concept in quantum computing, enables the simultaneous execution of multiple calculations on a vast number of inputs, leveraging the principles of superposition and entanglement. This property allows Quantum Processing Units (QPUs) to solve specific problems exponentially faster than their classical counterparts.
In a QPU, quantum bits or qubits replace traditional bits, storing information as a complex linear combination of 0 and 1, rather than just 0 or 1. This unique property enables qubits to process multiple inputs simultaneously, facilitating parallelism. For instance, Shor’s algorithm, a quantum algorithm for factorizing large numbers, exploits this parallelism to achieve an exponential speedup over classical algorithms.
The concept of quantum parallelism is often misunderstood as simply performing many calculations in parallel, similar to classical parallel processing. However, true quantum parallelism arises from the manipulation of qubits’ complex amplitudes and phases, allowing for the exploration of an exponentially large solution space simultaneously. This property has been demonstrated experimentally using various quantum systems, including superconducting circuits and trapped ions.
Quantum speedup, a direct consequence of quantum parallelism, refers to the ability of QPUs to solve specific problems significantly faster than classical computers. This speedup is typically measured in terms of the number of operations required to achieve a desired accuracy or the time taken to perform a calculation. The magnitude varies by problem: Grover's search offers a quadratic speedup for unstructured search, while simulating quantum many-body systems is believed to admit an exponential advantage over the best known classical methods.
Theoretical frameworks, such as query complexity and quantum circuit models, have been developed to understand and quantify quantum parallelism and speedup. These frameworks provide a basis for analyzing the performance of QPUs on various tasks and for identifying where exponential speedups are, and are not, possible.
Experimental demonstrations of quantum parallelism and speedup have been reported using small-scale QPUs, showcasing the potential of these systems to revolutionize fields like cryptography, optimization, and machine learning.
Analog Vs Digital Quantum Computation
Analog quantum computers, also known as analog quantum processors or continuous-variable quantum computers, encode and manipulate quantum information in continuous degrees of freedom. In contrast, digital quantum computers rely on discrete qubits and gate operations to process information.
One key difference between analog and digital quantum computation lies in their representation of quantum states. Analog quantum computers use continuous variables, such as the quadratures of electromagnetic fields or mechanical oscillations, to encode quantum information. The resulting state space is infinite-dimensional, which makes certain computations natural to express that would be awkward or costly on discrete qubits.
Digital quantum computers, on the other hand, rely on discrete qubits, which can exist in one of two states, 0 or 1. These qubits are manipulated using gate operations, which are the quantum equivalent of logic gates in classical computing. While digital quantum computers are better suited for simulating complex quantum systems and performing certain types of calculations, they are limited by the number of qubits available.
Analog quantum computers have been shown to be particularly useful for certain tasks, such as simulating quantum many-body systems or solving optimization problems. For example, analog quantum simulators have been used to study the behavior of quantum magnets, including at finite temperature, tasks that would demand enormous resources on a digital quantum computer.
In terms of noise resilience, analog quantum computers can be comparatively forgiving for their native tasks, since small perturbations in a continuous space often translate into small errors in the answer rather than outright failures. This advantage comes at a cost, however: analog devices offer less precision and control over the computation, and unlike digital devices they lack a clear path to systematic error correction.
The development of analog approaches has also produced specialized processors such as the quantum annealer, an analog device tailored to optimization problems, alongside hybrid analog-digital architectures. These hybrid approaches have the potential to leverage the strengths of each paradigm, enabling more powerful and flexible quantum computing architectures.
Noisy Intermediate-scale Quantum Devices
Noisy intermediate-scale quantum devices, also known as NISQ devices, are a class of quantum computing devices that operate in the regime where the number of qubits is large enough to be interesting for certain applications, but still small enough to be noisy and prone to errors. These devices typically consist of tens to hundreds of qubits, which are not yet fully error-corrected, but can still perform specific tasks with a certain level of fidelity.
One of the key challenges in building NISQ devices is mitigating the noise and errors that arise from the fragile nature of quantum states. Full quantum error correction can suppress these effects, but it requires significant overhead in additional qubits and complex control logic, more than NISQ devices can afford. As a result, NISQ devices often rely on alternative strategies, such as noise-resilient quantum algorithms or error-mitigation techniques, to achieve reliable operation.
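One widely used error-mitigation technique is zero-noise extrapolation: run the same circuit at several deliberately amplified noise levels, then extrapolate the measured expectation value back to the zero-noise limit. The sketch below uses synthetic data in place of real QPU runs, with an assumed exponential decay standing in for hardware noise.

```python
import numpy as np

# Zero-noise extrapolation: measure an observable at amplified noise levels
# (e.g., by stretching or folding gates), fit a model, extrapolate to zero noise.
noise_scales = np.array([1.0, 2.0, 3.0])

# Synthetic stand-in for QPU data: ideal value 1.0, decaying with noise.
rng = np.random.default_rng(0)
measured = np.exp(-0.15 * noise_scales) + rng.normal(0, 0.005, size=3)

# Linear (Richardson-style) fit; the intercept estimates the zero-noise value.
slope, intercept = np.polyfit(noise_scales, measured, deg=1)
print(f"raw value at scale 1: {measured[0]:.3f}")
print(f"mitigated estimate:   {intercept:.3f}")  # noticeably closer to 1.0
```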
Despite the challenges posed by noise and errors, NISQ devices have already demonstrated impressive capabilities in applications including quantum simulation, machine learning, and optimization. Most prominently, Google's 53-qubit Sycamore processor performed a random circuit sampling task in 2019 that was argued to be infeasible for even the largest classical supercomputers.
The development of NISQ devices is also driving advances in the field of quantum control and calibration. As the number of qubits increases, the complexity of controlling and calibrating these devices grows exponentially. New techniques, such as machine learning-based calibration methods, are being developed to address this challenge.
NISQ devices are also being explored for their potential applications in fields beyond traditional computing, such as quantum metrology and sensing. For example, NISQ devices could be used to enhance the precision of magnetic field sensors or to develop new types of interferometers.
The development of NISQ devices is an active area of research, with ongoing efforts to improve their performance, scalability, and reliability. As these devices continue to evolve, they are likely to play an increasingly important role in the development of practical quantum technologies.
Quantum Error Correction Strategies
Quantum error correction strategies are essential for large-scale quantum computing, as they mitigate the effects of decoherence and errors that occur during quantum computations.
One popular strategy is the surface code, which encodes qubits on a 2D grid and uses stabilizer generators to detect errors. This approach has been shown to be highly effective in correcting errors, with error thresholds as high as 1% demonstrated in simulations. The surface code has also been experimentally implemented in various quantum systems, including superconducting qubits and trapped ions.
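The surface code is too involved to sketch briefly, but its core idea, measuring parity checks (stabilizers) to locate errors without disturbing the encoded data, is already visible in the much simpler three-qubit repetition code. Below is a classical simulation of that simpler code protecting against a single bit flip.

```python
import random

def correct_bit_flip(codeword):
    """Decode the 3-bit repetition code. The parity checks (stabilizers Z1Z2 and
    Z2Z3) yield a syndrome that pinpoints a single flipped bit."""
    s1 = codeword[0] ^ codeword[1]
    s2 = codeword[1] ^ codeword[2]
    flip = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[(s1, s2)]
    if flip is not None:
        codeword[flip] ^= 1
    return codeword

logical = [1, 1, 1]                # logical 1 encoded across three physical bits
logical[random.randrange(3)] ^= 1  # inject one random bit-flip error
print(correct_bit_flip(logical))   # recovers [1, 1, 1]
```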
Another strategy is the Gottesman-Kitaev-Preskill (GKP) code, which encodes a qubit in a continuous-variable system using grid-like squeezed states. This approach has been shown to be robust against certain types of errors, such as small displacements, photon loss, and dephasing. GKP states have been demonstrated experimentally in trapped-ion and superconducting microwave cavity systems.
A third strategy is topological quantum computation, which encodes qubits in non-local, topological degrees of freedom of a two-dimensional system and manipulates them by braiding non-Abelian anyons. Because the encoded information is stored non-locally, it is intrinsically protected against local noise, promising fault tolerance at the hardware level. Realizing and controlling non-Abelian anyons in the laboratory, however, remains an open experimental challenge.
Quantum error correction strategies can also be classified into two categories: active and passive correction. Active correction involves actively detecting errors and applying corrections, whereas passive correction involves designing the quantum system to be inherently robust against errors. Both approaches have their advantages and disadvantages, and the choice of strategy depends on the specific application and experimental constraints.
Finally, quantum error correction strategies can also be combined with other techniques, such as dynamical decoupling and noise spectroscopy, to further enhance their effectiveness. This has been demonstrated in various experiments, where the combination of multiple techniques has led to significant improvements in coherence times and error rates.
Current State Of QPU Hardware Development
Current advancements in Quantum Processing Unit (QPU) hardware development are focused on scaling up the number of qubits while maintaining low error rates. One approach is to use superconducting circuits, which have shown promising results with high coherence times and low error rates. Superconducting platforms from Google (such as the Sycamore processor) and Rigetti Computing (the Aspen family) have reported two-qubit gate fidelities in the range of roughly 99% to 99.8%, with readout errors of around one to a few percent.
Another approach is to use trapped ions, which offer long coherence times and high-fidelity gates. Trapped-ion systems from IonQ and from academic groups such as the University of Innsbruck have demonstrated two-qubit gate fidelities exceeding 99.9%, among the highest reported on any platform, together with low readout error rates.
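These fidelity figures matter because gate errors compound across a circuit. Under the simplifying assumption of independent, uncorrelated errors, the success probability falls off roughly geometrically with the gate count, which is why even 99.9% fidelity limits circuits to modest depths without error correction.

```python
# Rough estimate of circuit success probability, assuming independent gate errors:
# overall fidelity decays roughly geometrically with the number of gates.
for fidelity in (0.995, 0.999):
    for n_gates in (10, 100, 1000):
        print(f"fidelity {fidelity}, {n_gates:5d} gates: "
              f"success ~ {fidelity ** n_gates:.3f}")
# 0.995 over 1000 gates gives ~0.007; 0.999 over 1000 gates gives ~0.368.
```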
Photonic quantum computing is another area of active research, in which qubits are encoded in photons rather than matter-based systems. This approach has the advantages of low noise and potential scalability. Researchers at the University of Bristol and elsewhere have demonstrated small programmable photonic processors capable of two-qubit gate operations.
Topological quantum computing is another promising approach, which would use exotic quasiparticles called anyons to encode qubits, gaining inherent fault tolerance from their non-Abelian statistics. Microsoft Quantum has invested heavily in this direction and reported progress toward Majorana-based topological qubits, though unambiguous demonstrations of braiding and fusion remain a subject of active research and debate.
Quantum error correction is a critical component of large-scale QPU development. Researchers are actively exploring various quantum error correction codes, such as the surface code, Shor's code, and concatenated codes, and groups at IBM, Google, and elsewhere have demonstrated small logical qubits with repeated rounds of stabilizer measurement on real hardware.
Currently, there is an ongoing effort to develop more robust and reliable QPU hardware, with multiple startups and research institutions actively pursuing this goal.
Software Frameworks For QPU Programming
As discussed above, a QPU executes quantum algorithms by operating on qubits, which can exist in multiple states simultaneously, rather than on classical bits. Programming such a device therefore requires tools that can express quantum gates and measurements alongside the classical control logic that surrounds them.
One of the key challenges in programming QPUs is managing the fragile nature of quantum states, which can be easily disrupted by environmental noise. To address this challenge, software frameworks for QPU programming have been developed to provide a layer of abstraction between the programmer and the underlying hardware.
The Q# language, developed by Microsoft, is one such framework that provides a high-level syntax for writing quantum algorithms. Q# is designed to be used in conjunction with a classical host program, which orchestrates the execution of the quantum algorithm on the QPU.
Another software framework for QPU programming is Qiskit, developed by IBM. Qiskit provides a set of tools for developing, testing, and executing quantum algorithms on various types of QPUs, including those based on superconducting circuits and ion traps.
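As a brief illustration, here is a minimal Bell-state program in Qiskit (a sketch assuming a recent Qiskit release with the qiskit-aer simulator installed; exact APIs vary between versions):

```python
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

# Build a Bell-state circuit: Hadamard, then CNOT, then measure both qubits.
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
qc.measure_all()

# Run on the local Aer simulator (a real backend could be substituted here).
sim = AerSimulator()
counts = sim.run(transpile(qc, sim), shots=1024).result().get_counts()
print(counts)  # roughly half '00' and half '11', the entangled-state signature
```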
The Cirq framework, developed by Google, is another example of a software framework for QPU programming. Cirq provides a Python-based API for defining and manipulating qubits, as well as a simulator for testing and debugging quantum algorithms.
Software frameworks like Q#, Qiskit, and Cirq are essential for unlocking the potential of QPUs, as they provide a means to write, test, and execute complex quantum algorithms on these devices.
Applications Of Qpus In Real-world Scenarios
QPUs, or Quantum Processing Units, are being explored for their potential applications in various real-world scenarios. One such application is in the field of cryptography, where QPUs can be used to break certain classical encryption algorithms, such as RSA and elliptic curve cryptography, much faster than classical computers. This is because Shor’s algorithm, which runs on a QPU, can factor large numbers exponentially faster than any known classical algorithm.
Another application of QPUs is optimization, where they may solve certain complex problems faster than classical computers. For example, D-Wave Systems' quantum annealers have been applied to simplified protein folding problems, which are crucial to understanding protein behavior and developing new drugs. The appeal is that quantum hardware can explore an exponentially large solution space in ways unavailable to classical machines, making it a natural fit for hard optimization problems; a toy version of the underlying objective appears below.
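Quantum annealers typically minimize a quadratic unconstrained binary optimization (QUBO) objective. The sketch below brute-forces a tiny hypothetical QUBO instance to show what the hardware searches over; brute force scales as 2^n, which is precisely why quantum approaches are attractive.

```python
import itertools
import numpy as np

# A tiny QUBO instance: minimize x^T Q x over binary vectors x.
# Quantum annealers search this 2^n space physically; brute force only works for small n.
Q = np.array([[-1,  2,  0],
              [ 0, -1,  2],
              [ 0,  0, -1]])

best = min(itertools.product([0, 1], repeat=3),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print(best)  # (1, 0, 1): the penalty terms keep adjacent variables from both being 1
```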
QPUs are also being explored for machine learning, where proposed quantum algorithms could accelerate methods such as k-means clustering and support vector machines. The underlying idea is that QPUs can, under certain conditions, perform linear algebra operations such as matrix multiplication and inversion faster than classical computers.
In addition, QPUs are being explored for quantum simulation, where they can model complex quantum systems such as chemical reactions and material properties. Because a QPU represents quantum states natively, it avoids the exponential overhead classical computers incur when tracking these systems, allowing for a deeper understanding of their behavior.
QPUs are also being explored for their potential applications in the field of metrology, where they can be used to make precise measurements, such as in spectroscopy and interferometry. This is because QPUs can be used to enhance the precision of certain measurements by exploiting quantum entanglement and interference.
Finally, QPUs are being explored for applications in cybersecurity, where the goal is to develop new cryptographic protocols that resist attacks by quantum computers. Here QPUs serve chiefly as adversarial test beds: candidate protocols can be stress-tested against the very quantum attacks they are designed to withstand.
Future Prospects And Challenges For Qpus
Quantum Processing Units (QPUs) are emerging technologies that leverage the principles of quantum mechanics to perform computations beyond the capabilities of classical computers. As QPUs continue to advance, they face several challenges and prospects that will shape their future development.
One of the primary challenges facing QPUs is the need for robust error correction mechanisms. Quantum bits (qubits) are prone to decoherence, which can lead to errors in computations. Researchers have proposed various error correction codes to mitigate these errors. However, implementing these codes efficiently remains an open problem.
Another significant challenge is scaling up QPUs while maintaining control over the qubits. As the number of qubits increases, the complexity of controlling them grows exponentially. This necessitates the development of more sophisticated control systems and calibration techniques. Furthermore, the need for cryogenic cooling and electromagnetic shielding to maintain quantum coherence adds to the complexity of large-scale QPUs.
Despite these challenges, QPUs hold immense promise for solving complex problems in fields like chemistry, materials science, and optimization. For instance, QPUs can efficiently simulate the behavior of molecules, enabling breakthroughs in drug discovery and material synthesis. Additionally, QPUs can be used to solve complex optimization problems, such as those encountered in logistics and finance.
The development of QPUs is also driving innovation in adjacent fields like quantum algorithms and software. Researchers are exploring novel quantum algorithms that can leverage the capabilities of QPUs. Furthermore, the need for efficient programming and compilation tools for QPUs is fostering a new ecosystem of quantum software development.
The future prospects of QPUs also depend on the development of hybrid classical-quantum architectures that can leverage the strengths of both paradigms. Such architectures could enable more practical applications of QPUs in the near term, while also driving further innovation in quantum computing.
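Many near-term hybrid architectures take the variational form popularized by the variational quantum eigensolver (VQE): a quantum device estimates the energy of a parameterized circuit, and a classical optimizer adjusts the parameters. Below is a minimal sketch for a toy one-qubit Hamiltonian H = Z, with the "quantum" expectation simulated classically in NumPy.

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)  # toy Hamiltonian H = Z

def energy(theta):
    """The 'quantum' step: prepare Ry(theta)|0> and return <H>.
    On real hardware this expectation comes from repeated QPU measurements."""
    state = np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)
    return float(np.real(state.conj() @ Z @ state))

# The classical step: simple gradient descent over the circuit parameter.
theta = 0.1
for _ in range(100):
    grad = (energy(theta + 1e-4) - energy(theta - 1e-4)) / 2e-4
    theta -= 0.1 * grad
print(round(energy(theta), 4))  # approaches -1, the ground-state energy of Z
```

In a real deployment, only energy() runs on the QPU, while the optimization loop runs on a conventional processor, which is exactly the division of labor that hybrid classical-quantum architectures formalize.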
