The increasing presence of artificial intelligence raises important questions about how people accept and integrate these technologies into their work, and new research investigates the powerful influence of social expectations on these decisions. Jaroslaw Kornowicz, Maurice Pape, and Kirsten Thommes, from Paderborn University, explore how anticipated regret, shaped by fears of social disapproval, influences attitudes toward using AI versus relying on human expertise. Their work demonstrates that people experience greater regret when choosing AI if doing so goes against the prevailing social norm, highlighting a key psychological mechanism in the adoption of new technologies. This finding is particularly significant for organisations seeking to encourage effective AI integration: establishing a culture where AI use is accepted, or even encouraged, can reduce resistance and promote wider acceptance.
Justification Strategies with Normative Feedback Sources
This appendix provides supplementary materials for a study investigating how individuals justify their decisions after receiving negative feedback, particularly in a work context. The research explores how social expectations and their sources, such as colleagues or superiors, influence decision-making and justification strategies. The appendix includes details about the study’s purpose, task completion, screenshots of scenarios, statistical analyses, and illustrative examples of participant responses categorized by key themes. Participants were asked to imagine themselves as new employees making decisions, and researchers verified that they understood the instructions.
The scenarios manipulated both adherence to a social norm and the specific norm being tested, which related either to artificial intelligence or to a senior analyst’s opinion. Statistical tests compared responses across conditions to identify significant differences, while qualitative data from open-ended questions offered further insight into participants’ reasoning and justification strategies. The study focuses on how people justify decisions when faced with negative feedback, a process that shapes learning, future behavior, and interpersonal relationships. By manipulating norms and their sources, the researchers could examine how each influences choices and justifications. The inclusion of artificial intelligence as a source of norms is noteworthy given its growing prevalence in the workplace, and the study offers insight into how people perceive and respond to norms promoted by AI systems.
Social Norms and AI Adoption Decisions
Scientists investigated how social norms influence people’s willingness to use artificial intelligence, focusing on whether the source of the norm, peers or supervisors, matters in decision-making. The study employed an online experiment, supplementing quantitative data with qualitative insights into participants’ feelings and justifications. Participants encountered scenarios presenting choices between AI and human expertise, allowing researchers to assess the impact of differing social norms on their decisions and emotional responses. The experiment carefully manipulated whether AI use was presented as a prevailing norm, enabling scientists to measure resulting changes in regret levels when participants chose AI versus a human.
Researchers examined how regret differed when participants adhered to or deviated from these presented norms, revealing that counter-normative choices consistently elicited higher regret. The data showed that, absent a norm favoring it, choosing AI elicited greater regret than choosing a human, but this aversion diminished when AI use was framed as the accepted standard, indicating a significant interaction between AI preference and normative influence. To further understand the emotional drivers behind these choices, the study also investigated how blame attribution affects regret. Scientists found that participants attributed less blame to technology than to humans, which subsequently heightened regret when AI was selected over human expertise. While both peer and supervisor influence proved relevant to technology adoption, their effects on regret were surprisingly comparable, challenging initial expectations. This innovative approach, combining behavioral experiments with qualitative analysis, reveals that regret aversion, embedded within social norms, is a central mechanism driving imitation in AI-related decision-making.
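The interaction described above (choosing AI raises regret, but less so when AI use is the norm) can be illustrated with a difference-in-differences on simulated ratings. This is a minimal sketch on synthetic data, not the authors' analysis or dataset; the effect sizes, variable names, and 2×2 design coding here are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000

# Hypothetical 2x2 design: did the participant choose the AI (choice_ai = 1),
# and was AI use framed as the prevailing norm (norm_ai = 1)?
choice_ai = rng.integers(0, 2, n)
norm_ai = rng.integers(0, 2, n)

# Simulated regret ratings: choosing AI adds regret (+1.2), but that penalty
# shrinks when AI use is the norm (-0.9 interaction), plus random noise.
regret = (4.0 + 1.2 * choice_ai - 0.3 * norm_ai
          - 0.9 * choice_ai * norm_ai + rng.normal(0, 1, n))

def cell_mean(c, m):
    """Mean regret in one cell of the 2x2 design."""
    mask = (choice_ai == c) & (norm_ai == m)
    return regret[mask].mean()

# Difference-in-differences: the extra regret from choosing AI under an
# AI-favoring norm, minus the same contrast without the norm. A negative
# value means the norm softens the regret penalty for choosing AI.
did = (cell_mean(1, 1) - cell_mean(0, 1)) - (cell_mean(1, 0) - cell_mean(0, 0))
print(round(did, 2))  # negative here, recovering the simulated interaction
```

In a full analysis this contrast would typically be estimated as an interaction term in a regression model with inferential statistics, but the cell-mean arithmetic above is the core of what "a significant interaction between AI preference and normative influence" measures.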
Social Norms Drive AI Acceptance and Regret
Scientists have demonstrated that social norms significantly influence how readily people accept advice from artificial intelligence, revealing a key driver of technology adoption. The research shows that choosing to rely on AI elicits greater regret than choosing human expertise, but this aversion diminishes when AI use is presented as the prevailing norm, indicating a strong interaction between choice and social context. This suggests that individuals are more likely to embrace AI when they perceive it as a widely accepted practice. Experiments revealed that participants attributed less blame to technology than to humans; because blame could not be shifted onto the AI, responsibility stayed with the decision-maker, which amplified regret when AI was chosen over human expertise. This highlights the complex interplay between accountability and decision-making.
Both peer and supervisor influence were found to be relevant factors in technology adoption, though surprisingly, their impact on regret levels was comparable, suggesting that broader social acceptance is more influential than direct authority. The findings confirm that regret aversion, embedded within social norms, is a central mechanism driving imitation in AI-related decision-making, explaining why potential adopters may hesitate. The research team discovered that counter-normative behavior consistently elicits higher regret than normative behavior, providing insight into the reluctance surrounding AI adoption. Furthermore, the study confirms that individuals often display algorithm aversion, a tendency to distrust AI following errors, and penalize algorithmic mistakes more harshly than comparable human errors. However, interventions such as allowing users to modify algorithms or framing them as “learning systems” can increase acceptance and mitigate this aversion. Ultimately, the research suggests that familiarity is key to building trust in AI: trust in algorithms often increases with repeated use.
Regret and Imitation Drive AI Acceptance
This study demonstrates that algorithm aversion is not simply a matter of system design or individual preference but is deeply rooted in social context and the experience of regret. Researchers found that choosing to rely on artificial intelligence, even as it becomes increasingly common, still evokes stronger feelings of regret than choosing human expertise. Importantly, acting against a prevailing social norm (here, choosing human advice when AI use is favored) amplifies this regret, suggesting that regret aversion is a key driver of imitative behavior in AI-related decisions. While the source of an AI-favoring norm, whether peers or supervisors, did not significantly affect regret levels in this experiment, the findings underscore the importance of social referents, blame-shifting, and perceptions of responsibility in evaluating AI recommendations.
👉 More information
🗞 Would I regret being different? The influence of social norms on attitudes toward AI usage
🧠 ArXiv: https://arxiv.org/abs/2509.04241
