Researchers are increasingly focused on ensuring fairness in algorithms, a necessity underscored by legislation such as the EU AI Act. Changyang He, Parnian Jahangirirad (Max Planck Institute for Security and Privacy and Saarland University), Lin Kyi (Max Planck Institute for Security and Privacy), and Asia J. Biega (Max Planck Institute for Security and Privacy and University of Oxford) investigate a crucial, yet often overlooked, aspect of this monitoring: user acceptance of the privacy-preserving technologies required to access sensitive data. Their work is significant because, while the AI Act permits, under the GDPR, data processing for fairness monitoring using techniques like multi-party computation, the success of these protocols hinges on individuals’ willingness to contribute their information. This study, based on a survey of 833 European participants, reveals how different protocol designs influence acceptance, highlighting a complex interplay between perceived risks, benefits, and individual orientations towards fairness and privacy.
Algorithmic bias detection, mandated by new legislation, often requires sensitive personal data, creating a conflict with stringent privacy regulations like GDPR.
This work addresses how to reconcile these competing demands by examining user willingness to share data when robust privacy measures are in place. The study focuses on multi-party computation (MPC) protocols, a promising technology for secure data processing, and investigates how different designs impact user trust and participation.
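To make the mechanism concrete, the following is a minimal sketch of additive secret sharing, one common MPC building block; the paper does not specify which MPC construction is used, and the values and party counts below are purely illustrative. Each data subject splits a sensitive value into random shares, each server aggregates the shares it receives, and only the group-level total needed for a fairness audit is ever reconstructed:

```python
import secrets

PRIME = 2**61 - 1  # all arithmetic is done modulo a public prime

def make_shares(value: int, n_parties: int) -> list[int]:
    """Split `value` into additive shares; any subset of fewer than
    n_parties shares is uniformly random and reveals nothing."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Hypothetical example: three users report a sensitive binary attribute
# (e.g. belonging to a protected group); the audit needs only the count.
user_values = [1, 0, 1]
n_servers = 3

# Each user sends one share of their value to each non-colluding server.
inboxes = [[] for _ in range(n_servers)]
for value in user_values:
    for inbox, share in zip(inboxes, make_shares(value, n_servers)):
        inbox.append(share)

# Each server sums its shares locally, seeing only random-looking numbers.
partials = [sum(inbox) % PRIME for inbox in inboxes]

# Combining the partial sums reconstructs the aggregate count alone.
print(sum(partials) % PRIME)  # -> 2, with no server seeing an individual value
```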
A key finding is that users prioritise risk-related attributes, specifically the privacy protection mechanism a protocol offers, when asked to evaluate its suitability directly. However, when presented with simulated choices, acceptance is more strongly influenced by benefit-related attributes, such as the stated fairness objective of the monitoring process.
This suggests a nuanced relationship between perceived risk and benefit, shaped by individual fairness and privacy orientations. The research team conducted an online survey with 833 participants across Europe, a substantial sample for statistical analysis. The study moves beyond technical feasibility to explore the human factors crucial for the successful deployment of privacy-preserving technologies.
By categorising protocol attributes into benefit, risk, and combined elements, the researchers identified key drivers of user acceptance. They examined how fairness and privacy orientations, alongside personal contexts, influence willingness to participate in fairness monitoring schemes. The work provides valuable implications for designing and communicating these protocols in a way that fosters informed consent and aligns with user expectations. Ultimately, this research highlights that technical solutions alone are insufficient; user-centered approaches are essential for effective and ethical algorithmic accountability.
Measuring preferences for fairness monitoring protocol attributes is crucial for effective implementation
An online survey with 833 participants across Europe formed the core of this research into user acceptance of multi-party computation (MPC) protocols for fairness monitoring. Direct attribute ranking asked participants to prioritise seven attributes when considering participation in fairness monitoring: fairness objective, monetary incentive, privacy protection mechanism, data storage location, collected information type, monitoring actor, and data use.
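For intuition, aggregating such direct rankings into a priority ordering can be as simple as a mean-rank computation. A minimal sketch follows; the attribute names come from the study, but the rank values are invented:

```python
import pandas as pd

# Hypothetical rankings: each row is one participant's ordering of the
# seven attributes (1 = most important, 7 = least important).
ranks = pd.DataFrame([
    {"fairness objective": 3, "monetary incentive": 6,
     "privacy protection mechanism": 1, "data storage location": 4,
     "collected information type": 2, "monitoring actor": 5, "data use": 7},
    {"fairness objective": 4, "monetary incentive": 7,
     "privacy protection mechanism": 2, "data storage location": 3,
     "collected information type": 1, "monitoring actor": 6, "data use": 5},
])

# A lower mean rank indicates that participants prioritised the attribute.
print(ranks.mean().sort_values())
```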
Conjoint analysis presented participants with a series of hypothetical MPC protocol scenarios, each varying in the levels of these seven attributes, allowing researchers to determine the relative importance of each attribute in driving acceptance decisions. Each participant evaluated 12 distinct protocol designs presented in a randomised order.
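As a rough illustration of how such conjoint responses are typically analysed (this is not the paper's exact estimator; the column names, levels, and data below are hypothetical), one can regress the accept/reject decision on dummy-coded attribute levels, so each coefficient approximates the marginal effect of showing that level instead of the baseline:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format conjoint data: one row per (participant, profile),
# with the attribute levels shown and whether the profile was accepted.
df = pd.DataFrame({
    "accepted":           [1, 0, 1, 1, 0, 0, 1, 0],
    "fairness_objective": ["gender", "age", "gender", "ethnicity",
                           "age", "ethnicity", "gender", "age"],
    "privacy_mechanism":  ["mpc", "none", "mpc", "mpc",
                           "none", "mpc", "none", "none"],
    "incentive_eur":      [0, 5, 5, 0, 0, 5, 0, 5],
})

# A linear probability model with categorical attributes: each coefficient
# estimates the average change in acceptance when a level replaces the
# baseline, the standard quantity reported in conjoint studies (in practice,
# standard errors would be clustered by participant).
model = smf.ols(
    "accepted ~ C(fairness_objective) + C(privacy_mechanism) + incentive_eur",
    data=df,
).fit()
print(model.params)
```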
Regression analysis then correlated user acceptance with individual fairness and privacy orientations, alongside personal contexts and demographics. Fairness orientation was assessed through questions gauging prior openness to data sharing, while privacy orientation was measured by examining reported use of active privacy safeguards and valuation of data protection transparency. The study revealed that users prioritise attributes related to privacy risks, such as the type of collected information and the privacy protection mechanism employed, when directly ranking protocol features.
Scenario-based conjoint analysis, however, demonstrated a focus on benefit-related attributes like fairness objectives and monetary incentives. Regression analysis identified a correlation between user orientations and protocol preferences; individuals with higher openness to data sharing placed greater emphasis on fairness objectives.
Conversely, privacy-oriented users, meaning those who actively employ privacy safeguards and value data protection transparency, focused more on risk-related attributes and less on fairness objectives or monetary incentives. This suggests that communication strategies should be tailored to these differing orientations to maximise participation.
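As a rough sketch of how such moderation effects can be estimated (again, not the authors' exact specification; the variable names and values are hypothetical), one can interact a respondent-level orientation score with an attribute dummy, so the interaction coefficient captures how much that orientation shifts the weight placed on the attribute:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: per-profile decisions joined with a respondent-level
# privacy-orientation score (e.g. a standardised scale of safeguard use).
df = pd.DataFrame({
    "accepted":            [1, 0, 1, 0, 1, 1, 0, 0],
    "privacy_mechanism":   ["mpc", "none"] * 4,
    "privacy_orientation": [0.5, 0.5, -1.0, -1.0, 1.2, 1.2, -0.3, -0.3],
})

# The interaction term estimates how a respondent's privacy orientation
# changes the effect of the privacy-protection attribute on acceptance.
model = smf.ols(
    "accepted ~ C(privacy_mechanism) * privacy_orientation",
    data=df,
).fit()
print(model.params)
```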
Interestingly, the research found that more extensive data requests and broader data use increased user willingness to share information. The researchers hypothesise that this is linked to a perception that contributing more data furthers the system’s overall fairness goals. Acceptance also varied with individual fairness and privacy orientations, with participants holding stronger privacy values favouring protocols offering robust data storage and encryption methods. The authors acknowledge that the study focused specifically on algorithmic hiring and may not fully generalise to other data-sharing contexts.
Future research should explore how these privacy-benefit trade-offs can inform protocol designs across diverse domains such as healthcare, education, and migration. Ultimately, aligning privacy-preserving protocols with user expectations is crucial for fostering informed consent and enabling effective fairness monitoring, with data donation offering a promising path forward.
👉 More information
🗞 When Feasibility of Fairness Audits Relies on Willingness to Share Data: Examining User Acceptance of Multi-Party Computation Protocols for Fairness Monitoring
🧠 ArXiv: https://arxiv.org/abs/2602.01846
