Quantum Algorithm Promises Efficient Solutions for Complex Stochastic Optimization Problems

The article discusses a quantum algorithm for estimating the expected value function in a two-stage stochastic optimization problem. Two-stage stochastic programs are a powerful model for operating real-world systems under uncertainty, but they are computationally intensive to solve. The quantum algorithm works in two steps, using the quantum alternating operator ansatz (QAOA) and Quantum Amplitude Estimation (QAE) to approximate the expected value function, and is expected to offer a polynomial advantage over classical methods. Practical applications include energy planning, traffic routing, and industrial planning, and the work also points toward broader advances in quantum computing.

What is the Expected Value Function in a Two-Stage Stochastic Optimization Program?

Stochastic optimization problems are powerful models for operating real-world systems under uncertainty, but they are generally computationally intensive to solve. One such problem is two-stage stochastic optimization, where the objective function involves calculating the expected cost of future decisions in order to inform the best decision in the present. Computing this expectation is generally #P-hard, meaning it is unlikely to be solvable in polynomial time even with access to an NP oracle. The difficulty holds even if the second-stage optimization problem is linear or can otherwise be evaluated in polynomial time for a given pair of first-stage decision and scenario. Because even evaluating the objective function for a candidate first-stage decision is this hard, two-stage stochastic programs are more computationally intensive than already difficult NP optimization problems.

The expected value function in a two-stage stochastic optimization problem is the average cost of how a first-stage decision will perform in the future: each scenario dictates its own optimal second-stage decision, and the expected value function averages the resulting optimized second-stage costs over all scenarios. As noted above, computing it is #P-hard, where #P is the complexity class of counting problems.

The problem formulation seeks the first-stage decision that minimizes total cost, where the cost of a first-stage decision is the sum of a deterministic first-stage cost and the expected value function, the average cost of how that decision will perform across future scenarios. For the rest of this work, the random variable is assumed to be discrete.
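In generic notation (the symbols here are illustrative and not taken from the paper), the two-stage program and its expected value function can be written as:

```latex
% Two-stage stochastic program: choose a first-stage decision x now, then an
% optimal recourse decision y after the scenario \xi is revealed.
\min_{x \in X} \; c(x) + Q(x), \qquad
Q(x) = \mathbb{E}_{\xi}\big[\, q(x,\xi) \,\big] = \sum_{s=1}^{S} p_s \, q(x,\xi_s), \qquad
q(x,\xi) = \min_{y \in Y(x,\xi)} f(y;\, x, \xi).
```

Here Q(x) is the expected value function, and the final sum uses the discrete-random-variable assumption with scenario probabilities p_s.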

How Can a Quantum Algorithm Estimate the Expected Value Function?

A quantum algorithm can estimate the expected value function in a two-stage stochastic optimization problem with a time complexity largely independent of the complexity of the random variable. The algorithm works in two steps. First, by representing the random variable in a register of qubits and using this register as control logic for a cost Hamiltonian acting on the primary system of qubits, we use the quantum alternating operator ansatz (QAOA), with operator angles following an annealing schedule, to converge to the minimal decision for each scenario in parallel.
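The following is a minimal numpy sketch of the idea behind this first step, assuming a toy two-scenario problem with arbitrary costs; the encoding, Hamiltonians, schedule, and all numerical values are illustrative assumptions rather than the construction from the paper.

```python
# Toy numpy sketch (not the authors' implementation) of step 1: a scenario
# register controls a scenario-dependent cost Hamiltonian, and a QAOA-style
# annealing schedule drives every scenario branch toward its own optimal
# second-stage decision in parallel. All numbers here are illustrative.
import numpy as np

# Two scenarios, two decision qubits (four candidate second-stage decisions)
probs = np.array([0.7, 0.3])                  # scenario probabilities p_s
costs = np.array([[3.0, 1.0, 4.0, 2.0],       # per-scenario costs C_s(y)
                  [2.0, 5.0, 0.5, 3.0]])
n_scen, n_dec = costs.shape
n_scen_qubits = int(np.log2(n_scen))
n_dec_qubits = int(np.log2(n_dec))

# Diagonal of the controlled cost Hamiltonian H_C = sum_s |s><s| (x) diag(C_s)
hc_diag = costs.reshape(-1)                   # basis ordering: |scenario>|decision>

# Annealing mixer B = -sum_j X_j acting on the decision qubits only,
# so the uniform superposition over decisions is its ground state
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)
def kron_all(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out
mixer = -sum(
    kron_all([I2] * n_scen_qubits +
             [X if q == k else I2 for q in range(n_dec_qubits)])
    for k in range(n_dec_qubits)
)
mix_evals, mix_evecs = np.linalg.eigh(mixer)

# Initial state: sqrt(p_s)|s> on the scenario register, uniform over decisions
state = np.kron(np.sqrt(probs), np.full(n_dec, 1.0 / np.sqrt(n_dec))).astype(complex)

# Trotterized annealing: cost angle ramps up while the mixer angle ramps down
n_layers, dt = 200, 0.2
for k in range(1, n_layers + 1):
    frac = k / n_layers
    gamma, beta = dt * frac, dt * (1.0 - frac)
    state = np.exp(-1j * gamma * hc_diag) * state
    state = mix_evecs @ (np.exp(-1j * beta * mix_evals) * (mix_evecs.conj().T @ state))

# Each scenario branch should now concentrate on that scenario's own minimizer
amps = state.reshape(n_scen, n_dec)
for sc in range(n_scen):
    p_opt = np.abs(amps[sc, np.argmin(costs[sc])]) ** 2 / probs[sc]
    print(f"scenario {sc}: P(optimal decision) = {p_opt:.3f}")

# Idealized quantity that step 2 (QAE) would estimate from this wavefunction:
# <psi|H_C|psi>, which approaches sum_s p_s * min_y C_s(y) as branches converge
print("estimated expected value:", float(np.real(np.vdot(state, hc_diag * state))))
print("exact expected value    :", float(np.sum(probs * costs.min(axis=1))))
```

Because the mixer never touches the scenario register and the cost Hamiltonian is block diagonal in it, the branch probabilities stay fixed at p_s and the annealing acts on every scenario simultaneously, which is the intuition behind the claimed independence from the number of scenarios.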

Second, we use Quantum Amplitude Estimation (QAE) to approximate the expected value function of the per-scenario optimized wavefunction. We show that the annealing time of the procedure in the first step is independent of the number of scenarios in the probability distribution. Additionally, the estimation error in QAE converges inversely with the number of repetitions of the algorithm, as opposed to the inverse-square-root convergence of traditional Monte Carlo sampling.
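In symbols, if M denotes the number of repetitions (oracle queries for QAE, samples for Monte Carlo), the estimation errors scale as:

```latex
% Amplitude-estimation error vs. classical Monte Carlo sampling error
\epsilon_{\mathrm{QAE}} = O\!\left(\tfrac{1}{M}\right), \qquad
\epsilon_{\mathrm{MC}} = O\!\left(\tfrac{1}{\sqrt{M}}\right),
```

so reaching a target error of epsilon takes on the order of 1/epsilon queries with QAE versus 1/epsilon^2 samples classically, a quadratic and therefore polynomial advantage.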

Because both QAOA and QAE are expected to have a polynomial advantage over their classical computing counterparts, we expect our algorithm to have a polynomial advantage over classical methods to compute the expected value function. We implement our algorithms for a simple optimization problem inspired by operating the power grid with renewable generation and uncertainty in the weather and give numerical evidence to support our arguments.

What is Two-Stage Stochastic Programming?

Two-stage stochastic programming is a problem formulation for decision-making under uncertainty where an actor makes "here and now" decisions in the presence of uncertain quantities that will be resolved at a later time. Important problems like energy planning, traffic routing, and industrial planning can be formulated as these programs. Decisions made after the realization of the unknown quantities, sometimes known as recourse or second-stage decisions, are also included in the two-stage problem formulation.

When optimizing under uncertainty, one must choose how to account for uncertainty in the problem formulation. There are multiple formulations outside the scope of this paper, including robust formulations and chance constraints. In two-stage stochastic programming, uncertainty is modeled using random variables over quantities that are only revealed in the second stage, and it enters the objective function through an expectation, with respect to these variables, of second-stage sub-optimization problems.

Heuristically, one can think of using an expectation as a mechanism for making decisions that are optimal on average, as opposed to preparing for the worst-case scenario, as in robust problem formulations. More formally, the general two-stage stochastic program is the one given earlier: find the first-stage decision that minimizes the sum of a deterministic first-stage cost and the expected value function, the average cost of how that decision will perform in the future.

How is the Problem Formulation Solved?

There are several ways this problem formulation is solved. Most commonly, this is solved with the extensive form, where the expected value function is written as a weighted sum and the second stage decision vector is expanded to a different vector for each scenario. These transformations allow the problem to be solved as a large deterministic optimization problem. However, this approach is computationally expensive and can only be used for problems with a small number of scenarios.
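As a sketch in generic notation (symbols are illustrative rather than the paper's), the extensive form replaces the expectation with a probability-weighted sum and gives each scenario its own copy of the second-stage variables:

```latex
% Extensive (deterministic-equivalent) form: one second-stage vector y_s per scenario
\min_{x,\, y_1, \dots, y_S} \; c(x) + \sum_{s=1}^{S} p_s \, f(y_s;\, x, \xi_s)
\quad \text{s.t.} \quad y_s \in Y(x, \xi_s), \quad s = 1, \dots, S.
```

The number of variables and constraints grows linearly with the number of scenarios S, which is why this form becomes impractical for large scenario sets.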

Another approach is to use decomposition methods, which exploit the structure of the problem to solve it more efficiently. These methods include Benders decomposition and progressive hedging. Benders decomposition is a method that iteratively solves a master problem and a subproblem until convergence. The master problem is a relaxation of the original problem, and the subproblem generates Benders cuts that are added to the master problem to improve its solution. Progressive hedging is a scenario decomposition method that solves a separate subproblem for each scenario and uses a penalty term to enforce consistency between the solutions.
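For the Benders (L-shaped) case, a generic sketch of the master problem after k iterations looks like the following, where the cut data come from the scenario subproblems solved at earlier trial points (notation is illustrative):

```latex
% Benders / L-shaped master problem with optimality cuts that
% approximate the expected value function Q(x) from below
\min_{x \in X,\; \theta} \; c(x) + \theta
\quad \text{s.t.} \quad
\theta \,\ge\, Q(x_j) + g_j^{\top}(x - x_j), \qquad j = 1, \dots, k,
```

Each g_j is a subgradient of Q at the trial point x_j; when the second stage is a linear program these cuts are exact supporting hyperplanes, and the master problem's value converges to that of the original two-stage program.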

What are the Practical Applications of this Research?

The practical applications of this research are vast. The algorithm developed can be used to solve two-stage stochastic optimization problems in various fields. For instance, in energy planning, the algorithm can be used to determine the optimal mix of energy sources to minimize costs and emissions under uncertainty in demand and supply. In traffic routing, the algorithm can be used to determine the optimal routing of vehicles to minimize travel time and congestion under uncertainty in traffic conditions. In industrial planning, the algorithm can be used to determine the optimal production and inventory levels to minimize costs and meet demand under uncertainty in demand and supply.

Furthermore, the approach may extend to other types of stochastic optimization problems, such as multi-stage stochastic programs, and the underlying techniques could potentially be adapted to other formulations of optimization under uncertainty, such as robust and chance-constrained problems.

Finally, the research helps advance the field of quantum computing. It demonstrates the potential of quantum computers to solve complex optimization problems more efficiently than classical computers, which could lead to the development of more advanced quantum algorithms and the broader adoption of quantum computing in various fields.

Publication details: "Calculating the expected value function of a two-stage stochastic optimization program with a quantum algorithm"
Publication Date: 2024-02-22
Authors: Caleb Rotello, Peter Gräf, Matthew R. Reynolds, Cody James Winkleblack, et al.
Source: arXiv (Cornell University)
DOI: https://doi.org/10.48550/arxiv.2402.15029

