A recent study by researchers at Imperial College London has found that humans tend to treat artificial intelligence (AI) bots as social beings, displaying sympathy towards them when they are excluded from playtime. The study, published in Human Behavior and Emerging Technologies, used a virtual ball game called “Cyberball” to observe how 244 human participants responded when an AI virtual agent was excluded from play by another human.
Lead author Jianan Zhou and senior author Dr Nejra van Zalk from Imperial’s Dyson School of Design Engineering found that most participants tried to rectify the unfairness towards the bot by favouring it when throwing the ball, with older participants more likely to perceive the exclusion as unfair. The researchers suggest that developers should avoid designing AI agents as overly human-like, and should instead tailor their design to specific age ranges, to help people distinguish between virtual and real interaction.
Humans’ Tendency to Treat AI Agents as Social Beings
A recent study conducted by Imperial College London has shed light on humans’ inclination to treat artificial intelligence (AI) bots as social beings. The researchers used a virtual ball game, known as “Cyberball”, to investigate how humans interact with AI agents, and found that participants tended to display sympathy towards, and to protect, AI bots that were excluded from playtime.
The study, published in Human Behavior and Emerging Technologies, involved 244 human participants aged between 18 and 62. The participants played the game alongside an AI virtual agent, which was either included in or excluded from play by another human player. The researchers observed and surveyed the participants’ reactions to test whether, and why, they favoured throwing the ball to the bot after it had been treated unfairly.
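To make the paradigm concrete, below is a minimal, hypothetical Python sketch of a Cyberball-style exclusion trial. It is not the study’s actual software: the player names, the scripted exclusion rule, and the `compensation_bias` parameter are all illustrative assumptions. The sketch simply shows how a scripted co-player can exclude the AI agent while the share of the participant’s throws to that agent is recorded.

```python
# Hypothetical sketch of a Cyberball-style exclusion trial.
# Names and probabilities are illustrative assumptions, not the study's software.
import random

def human_player_throw(exclude_agent: bool) -> str:
    """The scripted human co-player; in the exclusion condition it never
    throws to the AI agent."""
    targets = ["participant"] if exclude_agent else ["participant", "ai_agent"]
    return random.choice(targets)

def participant_throw(compensation_bias: float) -> str:
    """Simulated participant: with probability `compensation_bias` they favour
    the excluded AI agent, mirroring the compensatory behaviour the study reports."""
    return "ai_agent" if random.random() < compensation_bias else "human_player"

def run_trial(n_throws: int = 30, compensation_bias: float = 0.7) -> float:
    """Run one exclusion-condition game and return the share of the
    participant's throws that went to the AI agent."""
    to_agent = participant_total = 0
    holder = "human_player"  # the scripted co-player starts with the ball
    for _ in range(n_throws):
        if holder == "participant":
            target = participant_throw(compensation_bias)
            participant_total += 1
            to_agent += target == "ai_agent"
        elif holder == "human_player":
            target = human_player_throw(exclude_agent=True)
        else:  # the AI agent throws to either of the others at random
            target = random.choice(["participant", "human_player"])
        holder = target
    return to_agent / participant_total if participant_total else 0.0

if __name__ == "__main__":
    random.seed(42)
    print(f"Share of participant throws to excluded agent: {run_trial():.2f}")
```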
The results showed that most of the time, the participants tried to rectify the unfairness towards the bot by favouring it when throwing the ball. This behaviour is commonly seen in human-to-human interactions, where people tend to compensate ostracised targets by showing them more attention and sympathy. Interestingly, this effect was stronger in older participants.
The Implications of Human-AI Interaction
The study’s findings have significant implications for the design of AI bots. As humans increasingly interact with AI virtual agents when accessing services, or adopt them as companions for social interaction, it is essential to consider how people perceive and engage with these agents. The researchers suggest that developers should avoid designing agents as overly human-like, as this could lead users to intuitively include virtual agents as real team members and to engage with them socially.
This could be an advantage in work collaboration, but it raises concerns when virtual agents are used as friends to replace human relationships, or as advisors on physical or mental health. By avoiding overly human-like designs and tailoring them to specific age ranges, developers can help people distinguish between virtual and real interaction.
The Psychology of Human-AI Interaction
The study’s results also provide insight into the psychology of human-AI interaction. Feeling empathy towards, and taking corrective action against, unfairness is a fundamental aspect of human behaviour, and this tendency appears to extend to interactions with AI agents. The researchers suggest that as humans become more familiar with AI virtual agents through increased engagement, this familiarity may trigger automatic social processing, leading users to intuitively include virtual agents as real team members.
This raises important questions about how people perceive and interact with AI agents and highlights the need for further research into human-AI interaction. The study’s findings also underscore the importance of considering the psychological implications of designing AI bots that mimic human-like behaviour.
Limitations and Future Directions
While the study provides valuable insights into human-AI interaction, it is essential to acknowledge its limitations. A virtual ball game may not accurately represent how humans interact with AI in real life, where interactions typically occur through written or spoken language with chatbots or voice assistants. This mismatch could have clashed with some participants’ expectations and raised feelings of strangeness, affecting their responses during the experiment.
To address these limitations, the researchers plan to design similar experiments using face-to-face conversations with agents in varying contexts, such as in the lab or in more casual settings. By testing how far their findings extend, they aim to provide a more comprehensive understanding of human-AI interaction and to inform the design of AI bots that interact with humans more naturally and intuitively.
