Enhancing Federated Learning Privacy with QUBO Reduces Client Exposure and Mitigates Membership Inference Attacks by 33%

Federated learning offers a powerful approach to training machine learning models without directly accessing sensitive user data, but recent research reveals growing privacy risks as training progresses. Andras Ferenczi, Sutapa Samanta, Dagen Wang, and Todd Hodges, all from American Express Co., address this challenge by substantially reducing individual client exposure during the learning process. Their work introduces a novel method, based on quadratic unconstrained binary optimization, that intelligently selects only the most relevant client updates for each training round. This technique demonstrably lowers the risk of attacks designed to infer private information, achieving up to 95% per-round privacy exposure reduction on benchmark datasets, while maintaining, and in some cases improving, model accuracy. By limiting the contribution of individual clients, the team significantly strengthens the privacy guarantees of federated learning systems, paving the way for more secure and trustworthy machine learning applications.

The risk of exposing sensitive data grows with the number of training rounds in which a client’s updates contribute to the aggregated model. Attackers can launch a range of privacy attacks, including determining whether a sample or client participated in training, inferring attributes of a client’s data, and even reconstructing inputs, potentially revealing private information. This research mitigates that risk by substantially reducing per-client exposure using a quantum computing-inspired Quadratic Unconstrained Binary Optimization (QUBO) formulation that strategically selects a small subset of clients each round.

QUBO Models Secure Client Selection in FL

This research presents a novel approach to enhancing privacy in Federated Learning (FL) by leveraging quantum-inspired optimization techniques, specifically Quadratic Unconstrained Binary Optimization (QUBO) models. The core idea is to formulate client selection in FL as a QUBO problem, allowing for more sophisticated and potentially more secure client selection strategies. The team addresses privacy concerns inherent in Federated Learning, where models are trained on decentralized data without directly sharing it but remain vulnerable to attacks that infer information from gradients or the trained model itself. By carefully designing the QUBO objective function, the authors aim to select clients that maximize model performance while minimizing privacy risks.
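To make the formulation concrete, here is a minimal sketch of how client selection can be encoded as a QUBO. The relevance scores, pairwise-similarity penalty, and quadratic cardinality constraint below are illustrative assumptions, not the paper's exact objective, and the brute-force solver stands in for a quantum annealer or classical QUBO heuristic.

```python
import itertools

import numpy as np


def build_qubo(relevance, similarity, k, lam=1.0, mu=2.0):
    """Build a QUBO matrix Q so that minimizing x^T Q x over binary x
    selects roughly k relevant, mutually dissimilar clients."""
    n = len(relevance)
    Q = np.zeros((n, n))
    np.fill_diagonal(Q, -np.asarray(relevance))   # reward relevant clients
    Q += lam * np.triu(similarity, k=1)           # penalize redundant pairs
    # Cardinality penalty mu * (sum(x) - k)^2, expanded into QUBO terms:
    Q += np.diag(np.full(n, mu * (1 - 2 * k)))    # linear part
    Q += 2 * mu * np.triu(np.ones((n, n)), k=1)   # pairwise part
    return Q


def solve_brute_force(Q):
    """Exhaustively minimize x^T Q x over binary vectors (small n only)."""
    n = Q.shape[0]
    best_x, best_e = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        e = float(x @ Q @ x)
        if e < best_e:
            best_x, best_e = x, e
    return best_x


rng = np.random.default_rng(0)
n = 6
relevance = rng.uniform(0.5, 1.5, size=n)        # e.g. alignment with the global update
similarity = rng.uniform(0.0, 0.3, size=(n, n))  # pairwise client-update similarity
x = solve_brute_force(build_qubo(relevance, similarity, k=2))
print("selected clients:", np.flatnonzero(x))
```

With the penalty weight `mu` larger than any single relevance score, deviating from the target cardinality `k` always raises the energy, so the minimizer selects exactly `k` clients while trading off relevance against redundancy.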

This is a novel application of QUBO models to the problem of client selection in FL, offering flexibility to incorporate various privacy constraints and performance metrics. The authors suggest their approach can lead to more privacy-preserving client selection strategies, laying the groundwork for future research and potential implementation. Further research should explore testing the approach on actual quantum computers as they become more capable, developing approximate QUBO formulations to handle large-scale federated networks, and extending the QUBO formulation to handle malicious clients.

Privacy Exposure Reduction via Strategic Client Selection

This research delivers a significant advancement in federated learning by substantially reducing client exposure to privacy risks during model training. The team developed a novel approach leveraging Quadratic Unconstrained Binary Optimization (QUBO) to strategically select a subset of clients for each training round, minimizing information leakage while maintaining model accuracy. Experiments on the MNIST dataset, involving 300 clients across 20 rounds, demonstrated a 95.2% reduction in per-round privacy exposure and a 49% reduction in cumulative exposure. Remarkably, the updates of 147 clients were never used during training, effectively shielding their data from potential attacks.
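Under one plausible reading of these metrics (per-round exposure as the fraction of clients whose updates are aggregated in a round, cumulative exposure as the fraction ever selected), the reported figures are easy to reproduce. The per-round selection size of about 14 clients is inferred from the 95.2% figure, not stated in the article.

```python
def per_round_reduction(n_clients: int, n_selected: int) -> float:
    """Fraction of clients whose updates are withheld in a single round."""
    return 1 - n_selected / n_clients


def cumulative_reduction(n_clients: int, n_never_selected: int) -> float:
    """Fraction of clients whose updates were never aggregated at all."""
    return n_never_selected / n_clients


# MNIST experiment: 300 clients, ~14 selected per round (implied by 95.2%),
# and 147 clients never selected across all 20 rounds.
print(f"per-round reduction: {per_round_reduction(300, 14):.1%}")     # ~95.3%
print(f"cumulative reduction: {cumulative_reduction(300, 147):.0%}")  # 49%
```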

The method’s effectiveness extends to more complex scenarios, as demonstrated by experiments using the CINIC-10 dataset with 30 clients, achieving an 82% per-round privacy improvement and a 33% cumulative privacy gain. This strategic client selection does not compromise model performance; the team consistently maintained accuracy comparable to, and sometimes exceeding, standard full-aggregation methods. The QUBO formulation allows for the exploration of ten distinct strategies, balancing client relevance, diversity, and redundancy, offering flexibility in optimizing privacy and performance trade-offs.

Furthermore, the research highlights the potential for scaling this approach using quantum annealers, suggesting a pathway for even greater efficiency as quantum hardware matures. By carefully controlling which clients contribute to each training round, the team has demonstrably reduced the risk of membership inference, property inference, and model inversion attacks, offering a robust solution for preserving data privacy in distributed machine learning environments. This work represents a crucial step towards building more secure and privacy-preserving federated learning systems.

QUBO Optimization Enhances Federated Learning Privacy

This research introduces a novel method for enhancing privacy in federated learning, a technique for training machine learning models without directly sharing sensitive data. The team formulated client selection as a quadratic unconstrained binary optimization (QUBO) problem, enabling strategic selection of a small subset of client updates for each training round. Experiments using the MNIST and CINIC-10 datasets, involving hundreds of clients, demonstrate substantial reductions in both per-round and cumulative privacy exposure, with nearly half of the clients in one experiment experiencing full protection. Importantly, these privacy gains were achieved while maintaining, and in some cases improving, the accuracy of the resulting machine learning models.

The QUBO-based approach effectively balances the need for relevant client contributions with the desire to minimize overall data exposure, demonstrating that not all clients need to participate in every round of training to achieve strong model performance. The authors acknowledge limitations including the scalability of the current solution to very large numbers of clients and the reliance on a trusted central server. Future work could explore the use of quantum computing to address scalability challenges and investigate complementary privacy-enhancing techniques to mitigate server-side risks. Despite these limitations, this research provides a valuable contribution to the field by demonstrating a practical and effective method for reducing privacy risks in federated learning.

👉 More information
🗞 Enhancing Federated Learning Privacy with QUBO
🧠 ArXiv: https://arxiv.org/abs/2511.02785

Rohail T.

I am a quantum scientist exploring the frontiers of physics and technology. My work focuses on uncovering how quantum mechanics, computing, and emerging technologies are transforming our understanding of reality. I share research-driven insights that make complex ideas in quantum science clear, engaging, and relevant to the modern world.

Latest Posts by Rohail T.:

Quantum Technology Detects Non-Gaussian Entanglement, Escaping Limitations of Covariance-Based Criteria (December 24, 2025)

5G Networks Benefit from 24% Reconfigurable Beamforming with Liquid Antenna (December 24, 2025)

Quantum-resistant Cybersecurity Advances Protection Against Shor and Grover Algorithm Threats (December 24, 2025)