The quest for more powerful quantum computation receives a significant boost from new research demonstrating a novel approach to encoding and processing information using light. Uchenna Chukwu, Mohammad-Ali Miri, and Nicholas Chancellor, all from Quantum Computing Inc, present a method that encodes data in the relative displacement, or photon number, of different light modes. This technique offers enhanced protection against imperfections commonly found in quantum systems, and allows for the creation of high-quality superpositions of squeezed states with greater fidelity than existing methods. Crucially, the team’s work moves beyond the incidental use of non-Gaussian effects in current quantum systems, enabling explicit parallel processing and paving the way for more robust and efficient quantum annealers.
Encoding information in displacement in this way offers relative protection from imperfections. The research demonstrates that photon subtraction protocols create high-quality quantum superpositions of squeezed states, achieving significantly higher fidelity than protocols restricted to producing only cat states. The squeezing and anti-squeezing introduced remain moderate and are unlikely to dominate the photon number. This enables explicit use of non-Gaussian interference for parallel processing, in contrast with the incidental role non-Gaussianity plays in all-optical coherent Ising machines. A key observation is that displacements of optical states provide a convenient degree of freedom in which to encode information.
Coherent Ising Machines for Optimization Problems
Coherent Ising Machines (CIMs) are optical systems designed to solve complex optimization problems by leveraging the principles of quantum optics. This research explores the potential of CIMs as a form of quantum computation, while acknowledging the challenges in achieving a demonstrable advantage over classical computers. A central theme is determining whether CIMs can perform calculations beyond the capabilities of conventional computers, and the difficulty of simulating these systems classically. The study delves into the concepts of expressibility and barren plateaus, which can hinder the training of quantum machine learning models.
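To make the target concrete, here is a minimal sketch of the kind of Ising optimization problem a CIM addresses: finding spins s_i = ±1 that minimize E(s) = −Σ_{i<j} J_ij s_i s_j. The coupling matrix and problem size below are illustrative choices, not taken from the paper; a CIM explores this energy landscape through optical dynamics rather than by enumeration.

```python
import numpy as np

# Illustrative random +/-1 couplings (upper triangle only, i < j).
rng = np.random.default_rng(0)
n = 8
J = np.triu(rng.choice([-1.0, 1.0], size=(n, n)), 1)

def ising_energy(s, J):
    """Energy E(s) = -sum_{i<j} J_ij s_i s_j of a spin configuration s."""
    return -s @ J @ s

# Brute-force over all 2^n configurations -- feasible only for tiny n,
# which is exactly why physical annealers like CIMs are interesting.
best = min(ising_energy(np.array(bits) * 2 - 1, J)
           for bits in np.ndindex(*(2,) * n))
print(best)
```

Note the global spin-flip symmetry E(s) = E(−s): the quadratic energy cannot distinguish a configuration from its mirror, which is why CIMs encode spins in a relative optical property rather than an absolute one.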
The research highlights that basic CIMs are often efficiently simulatable by classical computers, meaning they do not offer a quantum advantage. However, the team emphasizes the importance of non-Gaussian states, which are harder to simulate classically, for achieving true quantum computation with CIMs. Generating and manipulating these non-Gaussian states is crucial for creating computational complexity, and the study references techniques such as photon subtraction and addition. The researchers also explore the use of the quantum Zeno effect for computation, which could provide a speedup over classical search algorithms.
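Photon subtraction, one of the non-Gaussian operations mentioned above, can be sketched numerically in a truncated Fock basis. The truncation and squeezing parameter below are illustrative choices, not values from the paper. A squeezed vacuum contains only even photon numbers; subtracting one photon (applying the annihilation operator and renormalizing) leaves only odd components, flipping the photon-number parity and producing a manifestly non-Gaussian state.

```python
import numpy as np
from math import factorial, tanh

N, r = 40, 0.5   # Fock-space truncation and squeezing strength (illustrative)

# Squeezed vacuum in the Fock basis: only even photon numbers appear.
psi = np.zeros(N)
for n in range(N // 2):
    psi[2 * n] = ((-tanh(r)) ** n * np.sqrt(float(factorial(2 * n)))
                  / (2 ** n * factorial(n)))
psi /= np.linalg.norm(psi)

# Photon subtraction: apply a|n> = sqrt(n)|n-1>, then renormalise
# (the operation is probabilistic, not unitary).
sub = np.sqrt(np.arange(1, N)) * psi[1:]
sub = np.append(sub, 0.0) / np.linalg.norm(sub)

# Photon-number parity <(-1)^n>: +1 for the even squeezed vacuum,
# -1 for the photon-subtracted (odd) state.
parity = lambda v: np.sum((-1) ** np.arange(N) * np.abs(v) ** 2)
print(parity(psi), parity(sub))
```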
Current CIM implementations face challenges in terms of scalability, coherence, and control. Recent advancements in the monolithic integration of optical components offer a path towards building larger and more scalable CIMs. The study contrasts CIMs with gate-based quantum computers and Gaussian Boson Sampling. Generating non-Gaussian states, overcoming scalability limitations, and exploring novel computational paradigms like the Zeno effect are crucial for unlocking the full potential of CIMs.
Differential Photon Number Encoding Achieves Robustness
Scientists have developed a novel encoding scheme, termed Differential Photon Number Encoding (DiPNE), which encodes information in the relative photon number of different modes of light. This offers a robust alternative to phase-based encoding used in conventional coherent Ising machines. The research demonstrates that information can be reliably stored and processed by measuring the differences in photon number, rather than relying on precise phase control, a significant advancement for quantum computation. The team shows that DiPNE is inherently insensitive to squeezing and many non-Gaussian fluctuations, protecting encoded information from common sources of error.
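The differential idea can be illustrated with a toy readout: the spin value is decoded from which of two modes carries the larger displacement magnitude (photon number |α|²), not from an absolute phase. The function name and amplitudes below are hypothetical, chosen only to show why such an encoding is insensitive to a common phase error.

```python
import numpy as np

def dipne_spin(alpha_plus, alpha_minus):
    """Toy decoder: +1 or -1 from the relative photon number of two modes."""
    n_plus, n_minus = abs(alpha_plus) ** 2, abs(alpha_minus) ** 2
    return 1 if n_plus > n_minus else -1

# A common phase rotation -- an error a phase-based encoding would feel --
# leaves the photon-number difference, and hence the decoded spin, unchanged.
phase_error = np.exp(1j * 0.7)
print(dipne_spin(2.0, 0.3), dipne_spin(2.0 * phase_error, 0.3 * phase_error))
```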
The researchers show that DiPNE utilizes the relative displacement magnitudes, or photon numbers, in different modes, allowing encoded information to be read out directly with a homodyne measurement procedure. The team mathematically modeled the interaction of coherent states with a 50:50 beamsplitter, demonstrating how the relative phase between input pulses determines the distribution of light in the output channels: in the convention used, a relative phase of +π/2 produces constructive interference, and hence a larger displacement, in one output, while a relative phase of −π/2 routes the light to the opposite output. Furthermore, the study demonstrates the ability to generate high-quality superpositions of squeezed states using photon subtraction protocols, achieving significantly higher fidelity than protocols limited to creating only cat states. This approach allows explicit use of non-Gaussian interference, moving beyond the incidental role it plays in all-optical coherent Ising machines, and paving the way for more direct quantum parallelism. The research highlights the potential to combine the advantages of Gaussian Boson Sampling with those of coherent Ising machines, explicitly utilizing non-Gaussianity while maintaining a direct encoding of an optimization problem through interference effects.
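The beamsplitter routing described above can be verified with coherent-state amplitudes. A symmetric 50:50 convention with an i phase on reflection is assumed here; the paper's exact convention may differ, but the qualitative behaviour — opposite relative phases steering the light to opposite outputs — is the same.

```python
import numpy as np

def beamsplitter(alpha, beta):
    """Symmetric 50:50 beamsplitter acting on two coherent amplitudes."""
    return (alpha + 1j * beta) / np.sqrt(2), (1j * alpha + beta) / np.sqrt(2)

alpha = 1.0
for phase in (+np.pi / 2, -np.pi / 2):
    beta = alpha * np.exp(1j * phase)
    c, d = beamsplitter(alpha, beta)
    # One relative phase puts all the light (|amplitude|^2 = 2) in one
    # output; the opposite phase sends it to the other output.
    print(phase, abs(c) ** 2, abs(d) ** 2)
```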
Displacement Encoding Boosts Photonic Quantum Annealing
This research presents a novel method for encoding information in photonic quantum annealers, focusing on the manipulation of light’s displacement and photon number. Scientists have demonstrated that encoding information through these properties offers relative protection from imperfections. Importantly, the team showed that photon subtraction protocols can generate high-quality superpositions of squeezed states, exceeding the fidelity achieved when limited to creating only cat states. The work highlights the advantages of using displacement as a means of encoding and processing information, as it minimizes interference from other degrees of freedom.
Researchers acknowledge that this study focuses specifically on the encoding aspect of a broader entropy computing paradigm, with further work needed to fully develop the system. Future research will explore the complete framework for building an analog optical quantum optimiser, addressing challenges and opportunities beyond the scope of this initial investigation. The team also notes the need for erasure techniques to correct for loss channels and the ability to selectively correct degrees of freedom without disrupting processed information, representing key areas for continued development.
👉 More information
🗞 Explicitly Quantum-parallel Computation by Displacements
🧠 ArXiv: https://arxiv.org/abs/2510.19730
