Researchers are increasingly exploring distributed architectures to address the challenges of scalability in quantum information processing. In a new study, Marta Gili and Ane Blázquez-García from Ikerlan Technology Research Centre, Basque Research and Technology Alliance (BRTA), working with Eliana Fiorelli, Gian Luca Giorgi, and Roberta Zambrini from the Institute for Cross-Disciplinary Physics and Complex Systems (IFISC), UIB-CSIC, detail the design and performance of distributed Quantum Extreme Learning Machine (QELM) architectures for learning functions of quantum states. This collaborative work demonstrates how these architectures can efficiently recover both linear and nonlinear properties of input states using only readily implementable measurements. Significantly, the team's novel distributed design, incorporating entanglement within a spatially multiplexed framework, offers a scalable and resource-efficient pathway to reconstruct complex quantum properties, potentially reducing the hardware demands for advanced quantum computation.
The aim is to determine which schemes can effectively recover specific properties of input quantum states, including both linear and nonlinear features, while also quantifying the resource requirements in terms of measurements and reservoir dimensionality; for linear properties, this translates into a more efficient use of quantum resources. Architectural mapping elucidated the relationship between design choices and the complexity of the accessible quantum properties, encompassing both linear observables and nonlinear functions of the quantum state, an understanding that is crucial for tailoring architectures to specific learning tasks and maximising performance. Resource scaling was systematically quantified across the different distributed architectures, revealing the trade-offs between measurement counts and reservoir dimensions. For nonlinear properties, the study examines the multiple-injection architecture and introduces a novel distributed design that incorporates entanglement between subsystems within a spatially multiplexed framework, evaluating its performance through the reconstruction of complex nonlinear quantities such as polynomial targets, Rényi entropy, and entanglement measures. The study demonstrated that higher-order nonlinearities can be reconstructed with reduced resources by increasing the number of interacting subsystems, rather than simply scaling the size of an individual reservoir, offering a pathway to scalable quantum property learning by distributing the computational load.
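To see why nonlinear targets call for designs such as the multiple-injection architecture, note that a linear readout on measurement probabilities can only recover functions that are linear in whatever state is injected; injecting two copies of the input makes quantities quadratic in ρ, such as the purity Tr[ρ²], linearly accessible. The sketch below is a toy numpy illustration of this principle only, not the paper's specific architecture: the Haar-random unitary standing in for the reservoir dynamics, the chosen dimensions, and all helper names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_density_matrix(d):
    # Random mixed state from a Ginibre matrix, normalised to unit trace.
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho)

def random_unitary(d):
    # Haar-random unitary via QR decomposition with phase correction.
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(g)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

d_S, d_R = 2, 4                      # input and reservoir dimensions (illustrative)
U = random_unitary(d_S * d_S * d_R)  # fixed dynamics acting on both copies + reservoir
eta = random_density_matrix(d_R)     # fixed reservoir state

def features(rho):
    # Inject TWO copies of rho: outcome probabilities of a computational-basis
    # measurement are linear in rho ⊗ rho, hence quadratic in rho itself.
    out = U @ np.kron(np.kron(rho, rho), eta) @ U.conj().T
    return np.real(np.diagonal(out))

# Train a linear readout to reproduce the purity Tr[rho^2], a quadratic target.
n_train = 300
X = np.empty((n_train, d_S * d_S * d_R))
y = np.empty(n_train)
for i in range(n_train):
    rho = random_density_matrix(d_S)
    X[i] = features(rho)
    y[i] = np.real(np.trace(rho @ rho))

w, *_ = np.linalg.lstsq(X, y, rcond=None)  # the only "training" is a linear fit

rho_test = random_density_matrix(d_S)
pred = features(rho_test) @ w
purity_true = np.real(np.trace(rho_test @ rho_test))
```

With a single copy of ρ the same fit would fail, since no linear functional of ρ equals its purity; the extra copy is what buys the nonlinearity.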
The distributed design enables the reconstruction of higher-order nonlinearities by increasing the number of interacting subsystems rather than the size of an individual reservoir, providing a scalable and hardware-efficient route to quantum property learning. Distributed quantum computing has emerged as a promising approach to overcome the limitations of Noisy Intermediate-Scale Quantum (NISQ) devices by interconnecting multiple quantum processors through quantum networks, enabling larger and more complex computations than a single processor can handle. In parallel, distributed quantum machine learning has gained significant attention as an effective paradigm to leverage distributed quantum computing for quantum-enhanced learning tasks, with current approaches including quantum federated learning and model-parallel distributed quantum neural networks. Some strategies rely on local operations with classical communication, linking parameterised circuits through measurement and feed-forward, while others exploit shared entanglement to distribute computation across quantum processing units using nonlocal operations or state teleportation. Measurement-based photonic architectures for distributed quantum machine learning based on continuous-variable cluster states have also been proposed recently. QELMs overcome challenges of training parameterised quantum circuits, such as barren plateaus and high computational costs, by avoiding the need for iterative optimisation and backpropagation. They rely on fixed quantum reservoirs that transform input data into a high-dimensional feature space, followed by minimal classical post-processing that requires only a simple training step, such as linear regression, on the measurement outcomes, making them particularly attractive for near-term quantum hardware.
Since they operate on quantum substrates, QELMs are naturally suited for processing quantum states, as demonstrated in quantum state classification, entanglement detection, and quantum state reconstruction. The study pursues three goals: resource scaling, quantifying the required resources; performance benchmarking, assessing whether distributed strategies provide tangible advantages over centralised approaches; and architectural mapping, elucidating the relationship between architectural design choices and the complexity of the accessible quantum properties. More generally, a system with complex dynamics can map input signals x into its state space, f(x), with solving a given task requiring only a simple training step at the output, typically a linear regression. In a QELM, the classical substrate is replaced by a quantum system, enabling the direct processing of quantum-state properties and circumventing the need for prior quantum state embedding. The training dataset is given by $\{(\rho_i^{\mathrm{tr}}, y_i^{\mathrm{tr}})\}_{i=1}^{N_{\mathrm{tr}}}$, where $\rho_i^{\mathrm{tr}}$ represents the input quantum state and $y_i^{\mathrm{tr}} = \{(y_i^{\mathrm{tr}})_l\}_{l=1}^{N_{\mathrm{tg}}}$ is the corresponding target vector, with $(y_i^{\mathrm{tr}})_l$ representing a specific target and $N_{\mathrm{tg}}$ the total number of targets to be estimated. Input states $\rho \in \mathcal{H}_S$ are processed via interaction with reservoir ($R$) states $\eta \in \mathcal{H}_R$, with the evolution of the global system $\Psi = S \cup R$, with Hilbert space $\mathcal{H}_\Psi = \mathcal{H}_S \otimes \mathcal{H}_R$, governed by a Completely Positive and Trace-Preserving (CPTP) quantum map $\Gamma: \mathcal{H}_\Psi \to \mathcal{H}_\Psi$. The output layer can be obtained by performing a Positive Operator-Valued Measure (POVM) measurement on the global state, although simpler approaches restrict to Projection-Valued Measures (PVMs) in a single basis for ease of implementation. Consequently, the output is given by the linear map $y_l = \sum_{k=1}^{p} W_{lk}\,\mathrm{Tr}[E_k\,\Gamma(\rho \otimes \eta)]$, with $\{E_k\}_{k=1}^{p}$ a PVM on the global state and $p$ the number of possible outcomes.
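The input-reservoir-measurement pipeline described above can be sketched numerically. The following toy illustration assumes, purely for concreteness, a Haar-random unitary as the CPTP map Γ and a computational-basis PVM; the dimensions and helper names are illustrative choices, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_density_matrix(d, rng):
    # Random mixed state from a Ginibre matrix: G G† normalised to unit trace.
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho)

def random_unitary(d, rng):
    # Haar-distributed unitary via QR decomposition with phase correction.
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(g)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

d_S, d_R = 2, 4                        # system (input) and reservoir dimensions
U = random_unitary(d_S * d_R, rng)     # fixed reservoir dynamics, Γ(·) = U · U†
eta = random_density_matrix(d_R, rng)  # fixed reservoir state η

def qelm_features(rho):
    # Evolve rho ⊗ eta under Γ, then measure a PVM in the computational basis:
    # each feature is an outcome probability Tr[E_k Γ(rho ⊗ eta)], k = 1..p.
    out = U @ np.kron(rho, eta) @ U.conj().T
    return np.real(np.diagonal(out))

probs = qelm_features(random_density_matrix(d_S, rng))
```

Here the feature vector has p = d_S · d_R = 8 entries, the outcome probabilities of the single-basis PVM; the trained weights W then act on this vector classically.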
The training procedure aims to optimise the weights $W \in \mathbb{R}^{N_{\mathrm{tg}} \times p}$ in order to minimise the distance between each target $(y_i^{\mathrm{tr}})_l$ and the corresponding output layer. This work focuses on applying QELM architectures to quantum property reconstruction, observing that a PVM measurement in a single basis is easy to implement but does not provide enough information for full quantum state tomography. However, the three-layer structure enables access to properties of the input state $\rho$, since the input-output relation corresponds to reconstructing observables on $\rho$ via a collection of effective measurements. In particular, taking the input state itself as the target, PVMs on the larger Hilbert space are expected to provide a faithful realisation of the corresponding POVM by Naimark dilation. Equivalently, the overall process of measuring the global system state after the evolution $\Gamma$ can be reframed as an effective measurement performed directly on the input $\rho$: $y_l = \sum_k W_{lk}\,\mathrm{Tr}_S[\tilde{E}_k\,\rho]$, where $\tilde{E}_k = \mathrm{Tr}_R[\Gamma^\dagger[E_k](\mathbb{I}_S \otimes \eta)]$ represents an effective POVM on the state $\rho$ and $\Gamma^\dagger$ denotes the adjoint of $\Gamma$. This choice of PVMs sets the number of outcomes to $p = d$, which can be adjusted depending on the number of qubits in the reservoir. Quantum state properties can be either linear in the quantum state, such as observables or the state itself, or nonlinear, such as purity or entanglement measures that depend on higher-order functions of the state. Different architectures can be designed and compared by benchmarking their performance on various quantum state tasks while optimising resource utilisation. The researchers present four such architectures, deriving conditions on the number of measurement outcomes required for accurate property estimation and determining the corresponding reservoir dimension needed to achieve it.
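The training step amounts to a single least-squares fit of the weights on the measurement outcomes. Below is a minimal, self-contained sketch of that step, again assuming (as an illustration, not the paper's setup) a Haar-random unitary as Γ and a computational-basis PVM; the target is the linear observable ⟨σ_z⟩ = Tr[σ_z ρ] on a single input qubit.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_density_matrix(d):
    # Random mixed state from a Ginibre matrix, normalised to unit trace.
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho)

def random_unitary(d):
    # Haar-random unitary via QR decomposition with phase correction.
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(g)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

d_S, d_R = 2, 4
U = random_unitary(d_S * d_R)          # fixed reservoir dynamics, Γ(·) = U · U†
eta = random_density_matrix(d_R)       # fixed reservoir state η

def features(rho):
    # Outcome probabilities of a computational-basis PVM on Γ(rho ⊗ eta);
    # equivalently Tr_S[E~_k rho] for the effective POVM elements E~_k.
    return np.real(np.diagonal(U @ np.kron(rho, eta) @ U.conj().T))

# Build a training set of (features, target) pairs for the observable sigma_z.
sigma_z = np.diag([1.0, -1.0])
n_train = 200
X = np.empty((n_train, d_S * d_R))
y = np.empty(n_train)
for i in range(n_train):
    rho = random_density_matrix(d_S)
    X[i] = features(rho)
    y[i] = np.real(np.trace(sigma_z @ rho))

# The only trained parameters: one row of W, fitted by linear least squares.
W, *_ = np.linalg.lstsq(X, y, rcond=None)

rho_test = random_density_matrix(d_S)
pred = features(rho_test) @ W
true = np.real(np.trace(sigma_z @ rho_test))
```

Because the features are linear functionals of ρ and the effective measurement operators generically span the space of single-qubit observables, the fitted readout reproduces ⟨σ_z⟩ essentially exactly on unseen states; nonlinear targets such as purity are out of reach for this single-copy scheme, which is where the multiple-injection and distributed designs come in.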
They anticipate that the complexity of the accessible target tasks varies with the type of architecture, addressing both architectures designed for linear properties and those for nonlinear property reconstruction. Scientists are increasingly focused on distributing computational tasks across multiple smaller processors rather than relying on ever-larger, monolithic chips. This approach, explored in new research on quantum extreme learning machines (QELMs), isn't merely a matter of engineering convenience; it addresses a fundamental bottleneck in scaling quantum information processing. Building and maintaining coherence in large quantum systems is extraordinarily difficult, and distributed architectures offer a potential pathway around that limitation by breaking down complex problems into manageable, interconnected units. This work demonstrates a strategy for distributing the computational load while preserving the ability to accurately characterise quantum states, even those with complex, nonlinear properties. By intelligently linking these smaller units, effectively creating a quantum network, the researchers have shown it is possible to achieve the same results with fewer overall resources than a single, massive QELM would require. This is not simply about reducing hardware demands; it opens up possibilities for building practical quantum devices that can tackle real-world problems. However, the benefits are not universal: the reconstruction of certain quantum properties remains more resource-intensive than others, depending on the specific observable being measured. While the architecture excels at reconstructing symmetric subspaces of quantum states, extracting specific information within those spaces can still be challenging. Future work will likely focus on optimising the connections between these distributed units, perhaps leveraging more sophisticated entanglement schemes to further reduce resource requirements and broaden the range of reconstructible properties.
The broader implication is a shift towards modular quantum systems, where performance isn’t dictated by the size of a single component, but by the efficiency of the network connecting them.
👉 More information
🗞 Learning functions of quantum states with distributed architectures
🧠 arXiv: https://arxiv.org/abs/2602.11797
