Philip Andreas Turk gave an interesting presentation at IEEE Quantum Week, which was held September 18-23, 2022, in Colorado, USA. Although the paper he presented does not yet seem to be available, its title is “Learning Zero Noise Extrapolation for Deterministic Quantum Circuits.” The basic premise is to reduce noise by first adding noise; allow me to explain.
Add Circuit Depth
Let’s take a simple operation: the X gate. If we apply two X gates, they cancel each other out. If we apply three X gates, however, we still get the transformation we’re looking for, but we’ve tripled our circuit depth.
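The cancellation is easy to verify with plain NumPy matrices. This sketch is my own illustration, not code from the presentation:

```python
import numpy as np

# Pauli-X gate as a 2x2 matrix
X = np.array([[0, 1],
              [1, 0]])
I = np.eye(2)

# Two X gates cancel: their product is the identity
assert np.allclose(X @ X, I)

# Three X gates still implement a single X,
# but at three times the circuit depth
assert np.allclose(X @ X @ X, X)
```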
More Noise, Please

The result of tripling our circuit depth is that we now have roughly three times the noise. And while you may be wondering why on Earth we would want to do such a thing, we’re just getting started. Using the same example as before, we can apply five X gates. The first two cancel out and the next two cancel out, leaving us with the transformation we want and roughly five times the noise. But that’s still not all, because we’re going to create circuits for which everything is repeated seven times and again for which everything is repeated nine times. In total, we have our original circuit, for which every operation is implemented only once, plus circuits for which every operation is repeated three, five, seven, and nine times, for five total circuits.
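Here is a sketch of that construction with a hypothetical helper I’m calling `folded_circuit` (the name is my own, not from the talk): it builds each of the five repetition levels and confirms that every one implements the same unitary while the physical gate count, and therefore the accumulated noise, grows linearly.

```python
import numpy as np
from functools import reduce

X = np.array([[0, 1],
              [1, 0]])

def folded_circuit(gate, fold_factor):
    """Repeat a self-inverse gate an odd number of times.

    The overall unitary is unchanged, but the physical gate
    count (and so the accumulated noise) grows with fold_factor.
    """
    assert fold_factor % 2 == 1, "need an odd repetition count"
    return [gate] * fold_factor

# The five circuits from the talk: fold factors 1, 3, 5, 7, 9
for k in (1, 3, 5, 7, 9):
    gates = folded_circuit(X, k)
    unitary = reduce(np.matmul, gates)
    assert np.allclose(unitary, X)  # same logical operation
    print(f"{k} gates -> same unitary, ~{k}x the gate noise")
```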
An astute attendee noted that some operations do not cancel themselves out. For example, two T gates result in an S gate rather than the identity. The approach in the paper is to implement three pi/12 rotations, which compose back into a single T gate, and someone inquired about implementing T, then T dagger, then T. The short answer is that additional research would have to be conducted for these special cases.
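We can check the gate algebra numerically: two T gates compose into an S gate, three pi/12 rotations compose back into T (up to a global phase), and the attendee’s T, T dagger, T sequence does as well. The open question from the talk concerns how these sequences behave under noise, not the algebra, which works out trivially. This is my own NumPy sketch:

```python
import numpy as np

T = np.diag([1, np.exp(1j * np.pi / 4)])   # T gate
S = np.diag([1, np.exp(1j * np.pi / 2)])   # S gate
Tdg = T.conj().T                           # T dagger

def rz(theta):
    """Rotation about the Z axis by angle theta."""
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

# Two T gates do NOT cancel; they compose into an S gate
assert np.allclose(T @ T, S)

# Three pi/12 rotations compose into one pi/4 rotation,
# which equals T up to a global phase of exp(i*pi/8)
three_rotations = rz(np.pi / 12) @ rz(np.pi / 12) @ rz(np.pi / 12)
assert np.allclose(np.exp(1j * np.pi / 8) * three_rotations, T)

# The audience suggestion: T, T dagger, T also composes back into T
assert np.allclose(T @ Tdg @ T, T)
```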
The point of amplifying circuit noise is to be able to use machine learning to make a prediction in the opposite direction. In other words, we have a starting level of noise, and we gradually worsen it, worsen it, worsen it, and worsen it some more. So, if we start from the noisiest level, we have data showing how the results improve as the noise steps back down: improve it, improve it, improve it, and improve it some more. The model can then predict the results one step beyond our original circuit, the one for which each operation is implemented only once, which is the zero-noise limit. We have added noise so that we could ultimately reduce noise.
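The paper trains a machine-learning model to make that prediction; as a simple stand-in, here is the classic polynomial flavor of zero-noise extrapolation in NumPy. The measured values below are synthetic (I generate them from an assumed decay curve, since we have no hardware data here), but the mechanics are the same: fit the trend across the five noise levels and evaluate it at noise scale zero.

```python
import numpy as np

# Noise-scale factors from our five circuits
scales = np.array([1, 3, 5, 7, 9])

# Hypothetical measured expectation values at each scale.
# In reality these would come from running the five circuits;
# here we synthesize them from an assumed noiseless value of 1.0
# with an exponential-looking decay, just to exercise the fit.
true_value = 1.0
measured = true_value * np.exp(-0.05 * scales)

# Fit a quadratic in the noise scale and evaluate it at scale 0
coeffs = np.polyfit(scales, measured, deg=2)
zero_noise_estimate = np.polyval(coeffs, 0.0)

print(f"noisiest reading:     {measured[-1]:.4f}")
print(f"least noisy reading:  {measured[0]:.4f}")
print(f"extrapolated to zero: {zero_noise_estimate:.4f}")
```

Note that the extrapolated value lands beyond even the least noisy (unfolded) circuit, which is the whole point of the exercise.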
It may or may not have been mentioned during the presentation, but transpiler optimization obviously needs to be turned off for this to work. Not only do two X gates cancel out mathematically; the transpiler will, by default, perform that cancellation for us when we submit a job to run. So, even though our circuit shows three X gates, two of them would be removed before the circuit is actually executed, and all five of our circuits would produce the same results. We definitely need to turn that optimization off and tell the quantum computer, basically, that we want our deep circuits.
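As a toy illustration of what an optimizing pass would do to our folded circuits, here is a hypothetical `cancel_adjacent_pairs` function (my own sketch, not a real transpiler pass) that removes adjacent pairs of identical self-inverse gates; this is exactly the behavior we need to disable.

```python
def cancel_adjacent_pairs(gates):
    """Toy 'transpiler pass': cancel adjacent pairs of identical
    self-inverse gates, the way a real optimizer cancels X-X."""
    out = []
    for g in gates:
        if out and out[-1] == g:
            out.pop()         # g cancels the previous copy
        else:
            out.append(g)
    return out

# Our noise-amplified circuit: five X gates in a row
folded = ["x"] * 5

# With optimization on, the depth amplification is silently undone
print(cancel_adjacent_pairs(folded))  # -> ['x']
```

In Qiskit, for example, the practical safeguard is passing `optimization_level=0` to `transpile()` so the repeated gates survive all the way to the hardware.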