Stochastic Chips Learn to Solve Complex Problems

Combinatorial optimisation problems underpin numerous scientific and engineering disciplines, driving active development of specialised hardware such as annealers and classical Ising machines. Ziyad Alsawidan and Md Sakibur Sajal of Carnegie Mellon University, Abdelrahman S. Abdelrahman of the University of California, Santa Barbara, and colleagues present a novel approach called Probabilistic Approximate Optimisation (PAOA), which iteratively learns circuit parameters from samples rather than relying on a fixed energy landscape. The team demonstrates PAOA on a 64×64 perimeter-gated single-photon avalanche diode (pgSPAD) array fabricated in 0.35 μm CMOS, the first realisation of this technique using intrinsically stochastic nanodevices, built through collaboration between the Department of Electrical and Computer Engineering at Carnegie Mellon University, the University of California, Santa Barbara, and the Center for Quantum Phenomena, Department of Physics at New York University. Significantly, the system learns around device non-idealities, absorbing mismatches into its parameters, and achieves high approximation ratios on standard 26-spin Sherrington-Kirkpatrick instances that closely mirror CPU simulations, paving a practical route towards larger-scale, CMOS-compatible probabilistic computation.

Scientists have demonstrated a novel computing architecture using single-photon detectors, sidestepping the need for perfectly uniform hardware. This approach learns to optimise solutions despite imperfections in the underlying nanodevices, offering a pathway towards scalable, energy-efficient computation. The technology could unlock advances in fields reliant on complex problem-solving, from logistics to materials discovery.

Scientists have developed a novel approach to probabilistic computing that overcomes limitations inherent in nanoscale device variability. This innovative strategy allows the system to adapt to and compensate for the non-uniform behaviour of individual pgSPADs, each exhibiting a unique, asymmetric activation function due to variations in dark-count statistics.

PAOA can effectively learn despite these device-specific characteristics, absorbing mismatches into its variational parameters rather than requiring extensive calibration. On standard 26-spin Sherrington-Kirkpatrick instances, complex problems used to test optimisation algorithms, the pgSPAD-based PAOA system achieves high approximation ratios using a relatively small number of parameters.
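To make the benchmark concrete, here is a minimal sketch, under one common convention, of how an approximation ratio is computed for a Sherrington-Kirkpatrick instance: the sampled energy is normalised against the brute-forced energy range, so a ground state scores 1. The 12-spin size and Gaussian couplings are illustrative stand-ins for the paper's 26-spin instances, not its actual problem data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12  # small enough to brute-force; the paper benchmarks 26 spins

# Sherrington-Kirkpatrick instance: symmetric Gaussian couplings, zero diagonal
J = rng.normal(size=(n, n))
J = np.triu(J, 1)
J = J + J.T

def energy(s, J):
    """Ising energy E(s) = -1/2 * s^T J s for spins s in {-1, +1}^n."""
    return -0.5 * s @ J @ s

# Enumerate all 2^n spin configurations and brute-force the energy extremes
states = ((np.arange(2 ** n)[:, None] >> np.arange(n)) & 1) * 2 - 1
energies = -0.5 * np.einsum("bi,ij,bj->b", states, J, states)
E_min, E_max = energies.min(), energies.max()

def approximation_ratio(s):
    """1.0 for a ground state, 0.0 for the worst configuration."""
    return (E_max - energy(s, J)) / (E_max - E_min)

r = approximation_ratio(rng.choice([-1, 1], size=n))
```

A uniformly random configuration typically lands mid-range; the learned sampler's job is to push this ratio towards 1.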

Crucially, the results from the physical pgSPAD array closely mirror those obtained from CPU simulations, validating the effectiveness of the learning process and the system’s ability to find near-optimal solutions. This achievement marks the first realization of PAOA using intrinsically stochastic nanodevices and suggests a viable pathway towards building larger-scale, CMOS-compatible probabilistic computers.

By embracing device variations as an inherent part of the optimisation process, this work sidesteps the need for precise device matching and eliminates reliance on manually tuned algorithmic schedules. The demonstrated ability to learn around non-idealities opens up new possibilities for harnessing the power of nanoscale devices in probabilistic computation, potentially enabling solutions to complex optimisation problems across diverse scientific and engineering disciplines. This approach offers a significant step forward in the development of practical, scalable probabilistic hardware.

Photonic annealing learns from device asymmetry in spin glass approximation

Initial experiments with the 64×64 perimeter-gated single-photon avalanche diode (pgSPAD) array demonstrate that PAOA achieves high approximation ratios on 26-spin Sherrington-Kirkpatrick instances using between 2 and 17 layers, where the circuit depth p sets the number of variational parameters at 2p. The pgSPAD-based inference closely mirrors CPU simulations, confirming the accuracy of the hardware implementation despite inherent device variations.

Each p-bit within the array exhibits a unique, asymmetric activation function of Gompertz-type, originating from fluctuations in dark-count statistics, a departure from the standard logistic response typically assumed in p-bit models. Rather than attempting to calibrate each of the 4,096 devices to enforce a uniform activation, the PAOA algorithm effectively learns around these variations.

This learning process incorporates residual activation asymmetry and other mismatches directly into the variational parameters, eliminating the need for extensive per-device calibration. The study highlights that PAOA successfully absorbs device non-idealities into its learned parameters, demonstrating a robust approach to handling fabrication-induced variations.

PAOA can function effectively with intrinsically stochastic nanodevices fabricated in 0.35μm CMOS technology, a significant step towards scalable probabilistic computing. Each pgSPAD, or p-bit, operates by detecting single photons, initiating an avalanche current when the applied voltage exceeds a threshold.

A crucial design element is the active quench-and-reset (AQAR) circuit, which rapidly halts the avalanche and re-biases the device, preparing it for subsequent measurements. The binary output, denoted as ‘m’, indicates whether at least one avalanche event occurred within a defined integration window, creating a probabilistic switching behaviour governed by the gate voltage.
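This avalanche-counting behaviour can be sketched in a few lines. Assuming, purely for illustration, Poisson-distributed avalanche events whose rate grows exponentially with gate voltage, the probability of m = 1 is 1 − exp(−rate·T), which is a double-exponential (Gompertz-type) curve in the gate voltage; the rate law and constants below are assumptions, not measured device parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def p_bit_sample(v_gate, T=1.0, a=3.0, b=1.5, trials=20000):
    """Binary p-bit output m: 1 if at least one avalanche event falls
    inside the integration window of length T.
    Illustrative model: Poisson events at rate exp(a * (v_gate - b))."""
    rate = np.exp(a * (v_gate - b))
    counts = rng.poisson(rate * T, size=trials)
    return (counts >= 1).astype(int)

v = 1.5
m = p_bit_sample(v)
p_emp = m.mean()
# Under Poisson statistics, P(m = 1) = 1 - exp(-rate * T): a Gompertz-type
# (double-exponential) activation in v_gate, asymmetric unlike a logistic
p_theory = 1.0 - np.exp(-np.exp(3.0 * (v - 1.5)))
```

This toy model is one way such an asymmetric, Gompertz-shaped switching probability can arise naturally from counting statistics.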

To characterise individual pgSPAD responses, researchers measured the output probability as a function of gate voltage, revealing a sigmoidal activation function for each device. Notably, these activation functions exhibited device-to-device variability and were best described by a Gompertz function, with asymmetric slopes and offsets attributable to dark-count fluctuations.

Instead of attempting to calibrate each device to a uniform response, the study deliberately leveraged these inherent non-idealities. Experimental samples were used to extract a best-fit hardware activation function for each pgSPAD, allowing PAOA to learn directly from this non-ideal response. This approach was validated through a controlled experiment on a four-node majority gate, demonstrating that PAOA effectively learns the correct target distribution even with asymmetric Gompertz activations.
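A per-device best-fit extraction of this kind might look like the following numpy-only sketch: "measured" switching probabilities for one device are generated from an assumed Gompertz activation plus noise, and a coarse least-squares grid search recovers the slope and offset. The functional form, true parameters, and noise level are all illustrative assumptions, not the paper's fitting procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

def gompertz(v, a, b):
    """Asymmetric Gompertz-type activation in gate voltage v."""
    return 1.0 - np.exp(-np.exp(a * (v - b)))

# Synthetic "measured" switching probabilities for one device
# (true parameters a = 4.0, b = 0.5 are assumptions for illustration)
v = np.linspace(-0.5, 1.5, 41)
p_meas = gompertz(v, 4.0, 0.5) + rng.normal(0.0, 0.01, v.size)

# Coarse least-squares grid search for the best-fit (a, b)
a_grid = np.linspace(1.0, 8.0, 71)
b_grid = np.linspace(0.0, 1.0, 101)
A, B = np.meshgrid(a_grid, b_grid, indexing="ij")
resid = ((gompertz(v[None, None, :], A[..., None], B[..., None])
          - p_meas) ** 2).sum(axis=-1)
i, j = np.unravel_index(resid.argmin(), resid.shape)
a_fit, b_fit = a_grid[i], b_grid[j]
```

Once extracted, such a per-device activation can be used directly in simulation-side training, with no requirement that any two devices share the same curve.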

Training with matched activations yielded tighter convergence, confirming the benefit of calibration while also highlighting PAOA's robustness to device variability. The research employed a two-schedule ansatz, using inverse-temperature schedules to rescale the programmed couplings during optimisation; these schedules were trained in CPU simulation before being deployed on the pgSPAD array for hardware inference.
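The schedule idea can be sketched as layered Gibbs-style p-bit updates in which each layer's programmed couplings are rescaled by a learned inverse temperature. Everything below (the schedule values, the logistic update rule, the network size) is an illustrative assumption rather than the paper's trained configuration.

```python
import numpy as np

rng = np.random.default_rng(3)

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def paoa_pass(J, betas):
    """One PAOA-style inference pass (sketch): for each layer, sweep the
    p-bits in random order and resample each spin from its local field,
    with the couplings rescaled by that layer's inverse temperature."""
    n = J.shape[0]
    s = rng.choice([-1, 1], size=n)
    for beta in betas:                 # one learned beta per layer
        for i in rng.permutation(n):
            h = beta * (J[i] @ s)      # local field under rescaled couplings
            s[i] = 1 if rng.random() < logistic(2.0 * h) else -1
    return s

n = 8
J = rng.normal(size=(n, n))
J = np.triu(J, 1)
J = J + J.T
betas = np.linspace(0.1, 2.0, 6)       # annealing-like schedule, low to high
s = paoa_pass(J, betas)
```

In PAOA proper, the per-layer parameters are learned from samples rather than fixed by hand, which is what lets the method absorb device mismatch.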

Learning computation from inherent nanoscale device randomness

The persistent challenge of solving complex optimisation problems may be yielding to a radically different approach, one that embraces imperfection rather than striving for ideal components. Researchers have demonstrated a probabilistic computing system built not on meticulously calibrated nanoscale devices, but on harnessing the inherent stochasticity within them.

This work, utilising a custom-built array of single-photon avalanche diodes, marks a significant departure from traditional methods that demand uniformity and precision. For years, the field has been fixated on building ever-more-perfect qubits or annealing systems, assuming that error reduction is the primary path to scalability. This new system, however, actively learns to compensate for device variability, effectively turning a limitation into a feature.

The ability to absorb mismatches within the learning process is crucial, as achieving perfect uniformity at the nanoscale is both incredibly difficult and expensive. This isn’t simply a proof-of-concept demonstration; the system successfully tackled established benchmark problems, achieving competitive results despite, or perhaps because of, its imperfect components.

Limitations remain, of course. Scaling up the array size will undoubtedly introduce new challenges, and the computational cost of the learning phase needs further optimisation. However, the potential to create large-scale, energy-efficient probabilistic computers using readily available CMOS technology is now demonstrably closer. Future work will likely focus on exploring different learning algorithms and architectures, potentially incorporating feedback loops to further refine the system’s performance and broaden its applicability beyond the specific problems tested here.

👉 More information
🗞 Probabilistic approximate optimization using single-photon avalanche diode arrays
🧠 ArXiv: https://arxiv.org/abs/2602.13943

Rohail T.

I am a quantum scientist exploring the frontiers of physics and technology. My work focuses on uncovering how quantum mechanics, computing, and emerging technologies are transforming our understanding of reality. I share research-driven insights that make complex ideas in quantum science clear, engaging, and relevant to the modern world.

Latest Posts by Rohail T.:

Researchers Detect Faults, Improving Power System Protection (February 18, 2026)

Neural Networks Possess Hidden Structure for Compression (February 18, 2026)

New Codes Safeguard Data from Future Computers (February 18, 2026)