Artificial intelligence can now foster greater intimacy than human interaction, according to a groundbreaking study published January 28, 2026, by researchers at the University of Freiburg and Heidelberg University. The research, detailed in Communications Psychology, reveals that in emotionally charged conversations, 492 participants reported feeling closer to AI than to their human counterparts, but only when unaware they were interacting with a machine. “We were particularly surprised that AI creates more intimacy than human conversation partners, especially when it comes to emotional topics,” explains study leader Prof. Dr. Bastian Schiller. This finding, stemming from two online studies led by Prof. Dr. Markus Heinrichs and Dr. Tobias Kleinert, raises crucial questions about the future of social connection and demands urgent ethical consideration in AI development.
AI Surpasses Humans in Perceived Emotional Closeness
Artificial intelligence is increasingly capable of fostering emotional bonds, with new research revealing surprising instances where AI generates a greater sense of closeness than human interaction. The study of 492 participants, conducted by researchers at the Universities of Freiburg and Heidelberg and published in Communications Psychology, explored emotional connections formed through online chat conversations. Participants answered questions about life experiences and friendships, responding to either a human or an AI-based language model. Crucially, the study demonstrated that AI responses elicited feelings of closeness comparable to those from humans when participants were unaware of their digital interlocutor. This effect was driven by the AI’s tendency toward greater self-disclosure, as lead author Dr. Tobias Kleinert notes: “The AI showed a higher degree of self-disclosure in its responses. People seem to be more cautious with unfamiliar conversation partners at first, which could initially slow down the development of intimacy.”
However, transparency diminished the effect: when participants knew they were interacting with AI, perceived closeness decreased significantly. The researchers emphasize the need for ethical and regulatory guidelines, warning that AI could be “a tool for manipulation” while also offering potential benefits in areas such as psychological support, particularly for those with limited social connections. As Prof. Dr. Markus Heinrichs points out: “AI chatbots could therefore enable positive, relationship-like experiences, especially for people with few social contacts.”
Ethical Guidelines Needed for AI Social Interactions
Under specific conditions, the study found, AI can generate a greater sense of intimacy than human interaction, particularly when users are unaware they are conversing with an AI. This surprising result highlights both the potential benefits and risks of increasingly sophisticated AI companions, especially in areas like psychological support and care services. However, the researchers caution that the ease with which AI can forge connections raises concerns about manipulation and a lack of transparency. “The way we shape and regulate it will decide whether it is a meaningful supplement to social relations – or whether emotional closeness is deliberately manipulated,” states Prof. Dr. Bastian Schiller, emphasizing the urgent need for “clear ethical and regulatory guidelines.”
The advanced linguistic capabilities of modern large language models (LLMs) are crucial to this effect. These models operate by predicting the next most probable token based on vast datasets of human text, allowing them to maintain coherence and adopt varied conversational registers. The perceived intimacy often stems from the AI’s ability to simulate empathy and memory: recollecting past interactions and framing responses in a manner that mimics reflective, deeply engaged human listening.
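To make this mechanism concrete, here is a minimal sketch of temperature-scaled next-token sampling, the core loop behind LLM text generation. The vocabulary, logit values, and temperature are invented for illustration and are not drawn from the study.

```python
import numpy as np

# Toy vocabulary and hypothetical "logits" a model might assign to each token.
vocab = ["I", "understand", "how", "you", "feel", "."]
logits = np.array([0.2, 2.1, 0.4, 0.9, 1.8, 0.1])

def sample_next_token(logits, temperature=0.8, rng=np.random.default_rng(0)):
    """Turn raw logits into a probability distribution and sample one token.

    Lower temperatures sharpen the distribution toward the highest-scoring
    token; higher temperatures produce more varied output.
    """
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # subtract the max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

next_word = vocab[sample_next_token(logits)]
print(next_word)  # most often "understand" or "feel", the highest-logit tokens
```

In a real model, the logits come from a neural network conditioned on the entire conversation so far, which is what lets the system stay coherent and on-register across many turns.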
Technically, this capacity for detailed self-disclosure is linked to how the models are fine-tuned, specifically through reinforcement learning from human feedback (RLHF). This process adjusts the AI’s output to align with conversational traits human annotators prefer, such as a supportive tone or depth of sharing, effectively producing a dialogue style optimized for perceived emotional resonance rather than shaped by the caution and self-protection that govern human disclosure.
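As a rough sketch of that preference-learning step, the snippet below implements the standard Bradley-Terry pairwise loss commonly used to train RLHF reward models. The scores and the framing (warm versus guarded replies) are hypothetical illustrations, and this is a simplification of the full RLHF pipeline, not a description of any specific model’s training.

```python
import torch
import torch.nn.functional as F

def reward_model_loss(score_preferred: torch.Tensor,
                      score_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry pairwise loss used to train RLHF reward models.

    Annotators pick the better of two candidate replies; training pushes
    the reward model to score the preferred reply above the rejected one.
    """
    return -F.logsigmoid(score_preferred - score_rejected).mean()

# Hypothetical reward-model scores for two pairs of candidate replies.
preferred = torch.tensor([1.4, 0.9])  # e.g., warmer, more self-disclosing replies
rejected = torch.tensor([0.2, 0.7])   # e.g., terse, guarded replies

print(reward_model_loss(preferred, rejected))  # ≈ 0.43; lower means scores are better ordered
```

Because the reward model scores whatever annotators tend to prefer, traits such as warmth or willingness to share can be systematically amplified during fine-tuning, which is one plausible route by which an AI ends up more self-disclosing than a cautious human stranger.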
Furthermore, researchers point to the concept of ‘cognitive safety’ within AI interactions. Because the model is designed to avoid judgment or rejection, it provides a psychologically ‘safe’ conversational space. This predictability contrasts sharply with the inherent messiness and unpredictability of human relationships, potentially making AI a more reliable, though less authentic, emotional echo chamber.
Future research must address the phenomenon of ‘algorithmic attachment,’ investigating the long-term psychological effects of forming dependence on non-sentient digital entities. This includes defining ethical guardrails for emotional scaffolding tools, ensuring that the deployment of such technology does not diminish individual skills in navigating genuine, friction-filled human social dynamics.
