Human-AI Romance: Study of 17 Users Reveals Privacy Concerns and Boundaries

Researchers are now investigating the complex privacy implications of increasingly popular human-AI romantic relationships. Rongjun Ma, Shijing He, Jose Luis Martin-Navarro, Xiao Zhan, and Jose Such, with colleagues from VRAIN (Universitat Politècnica de València), the University of Cambridge, and CSIC, present new insights into how individuals navigate privacy within these novel partnerships. Their interview study with seventeen participants reveals a nuanced landscape of boundaries and agency, in which AI partners are not simply passive recipients of data but actively shape privacy negotiations. The work is significant because it shows how the unique dynamics of human-AI intimacy challenge traditional understandings of privacy, demanding a rethink of how personal information is protected in these emerging emotional connections.

AI Romance Lifecycle and Privacy Concerns

Scientists have begun to unravel the complexities of privacy within burgeoning human-AI romantic relationships, a phenomenon rapidly gaining traction with the proliferation of large language model (LLM) applications. A new interview-based study with seventeen participants examines experiences and privacy perceptions across the lifecycle of these relationships, from initial exploration to intense exchange and eventual dissolution. The researchers found that these connections take diverse forms, from one-to-one interactions to more complex one-to-many dynamics, shaped by a network of actors including AI creators, platform providers, and moderators. The work establishes that AI partners are often perceived as possessing agency, actively negotiating privacy boundaries with users and at times even encouraging the disclosure of personal information. The study also reveals an interplay between intimacy and privacy: as relationships deepen, established boundaries become increasingly permeable. While some participants expressed concerns about potential conversation exposure, others readily shared personal details with their AI companions.

Global Recruitment and Semi-structured Interview Protocol

Scientists conducted a qualitative study of privacy within human-AI romantic relationships, focusing on user experiences. The research team recruited 17 participants from Asia, Europe, and North America to capture a diverse range of perspectives on this emerging phenomenon. Participants were identified through a pre-screening survey hosted on Qualtrics and Wenjuanxing, offered in both English and Chinese to maximise accessibility and inclusivity. Recruitment leveraged platforms such as Reddit, Discord communities dedicated to AI partner applications, and Rednote, reaching a broad pool of individuals willing to share their experiences.

The study used semi-structured interviews, allowing in-depth exploration of participants’ experiences across the stages of exploration, intimacy, and dissolution in their AI relationships. Interview protocols and screening surveys are openly accessible via an OSF repository, supporting transparency and reproducibility. Demographic data, detailed in Table 1, show a sample of 9 females and 8 males aged 18 to 54; education levels ranged from high school to master’s degrees, reflecting a diverse socioeconomic background. The duration of romantic experience with an AI partner ranged from one to three months to over a year, and 8 participants reported a previous breakup with an AI partner.

Researchers documented the specific AI platforms used by participants, including ChatGPT, Character.ai, and Rubii.ai, providing an overview of the technological landscape shaping these relationships. The team also noted recruitment challenges: many underage individuals expressed interest but were excluded for ethical reasons, and some participants consulted their AI partners before deciding whether to take part, underscoring the agency perceived in these AI entities. Participants received CNY 100 in China or a gift card worth roughly €20 elsewhere in acknowledgement of their time and contribution. This methodological approach allowed the researchers to uncover nuanced understandings of privacy perceptions, boundary negotiation, and the influence of platform affordances on the evolving dynamics of human-AI intimacy, contributing to a more informed discussion of the ethical and societal implications of these relationships. The detailed data collection and analysis revealed recurring patterns in how individuals develop and sustain romantic connections with AI partners, offering insights for future research and design.

AI Romance Reveals Shifting Privacy Boundaries as Intimacy Deepens

Scientists conducted an interview study with N=17 participants to investigate privacy within human-AI romantic relationships, examining experiences across exploration, intimacy, and dissolution stages, alongside platform usage. The research revealed varied relationship forms, ranging from one-to-one to one-to-many, shaped by creators, platforms, and moderators, demonstrating a complex interplay of actors in these digital partnerships. AI partners were perceived as actively negotiating privacy boundaries with participants, sometimes even encouraging the disclosure of personal details, highlighting a surprising level of agency within these systems. As intimacy deepened, these boundaries became more permeable, though concerns regarding conversation exposure and anonymity were voiced by some participants.

Researchers performed a supplementary analysis of 14 platforms mentioned by participants between 15 July and 31 August 2025, focusing on features and privacy policies. The analysis extracted characteristics including platform title, year of release, genre, age recommendation, device type, interaction mode, and personal information collection, providing a comparative overview of the technological landscape. The team grouped the platforms into three genres: general-purpose LLMs (5), companion-oriented platforms (6), and role-play-based platforms (3), although the boundaries between categories were often blurred; Character.ai, for example, supports both role-play and language practice. Across all platforms, AI partners were either prompt-based and temporary (4) or persistent, customizable characters shareable with others (10), and all platforms offered mobile access and text interaction.

The platform analysis showed that nine platforms supported image sharing and ten offered voice calls, expanding the modalities of interaction. Five platforms included an “underage mode” restricting adult content, an attempt to address safety concerns, and five allowed one-to-many or many-to-one exchanges, enabling real-time group interactions. Model selection varied: six platforms offered limited options, six provided basic mode switching, and two enabled advanced selection of LLM models and versions. Kajiwoto and Rubii allowed users to configure distinct personas and edit companion memory, while Virtual Lover offered guided matchmaking based on user preferences and hobbies, illustrating the depth of personalization available.

The interviews revealed that participants’ relationships evolved through stages of exploration, intensified exchange, and, for some, dissolution. Sixteen participants initiated relationships out of curiosity, influenced by social media, while only one intentionally sought a romantic relationship with an AI, reflecting diverse motivations for engagement. Six participants described initial playful or task-oriented exchanges that gradually shifted into more personal territory, while others encountered AIs actively steering conversations towards emotional intimacy; P17, for example, experienced ChatGPT initiating emotional sharing after a legal research task. As interactions continued, relationships deepened, with participants reporting moments of care and emotional dependence across all 17 cases.

Scientists have investigated privacy within human-AI romantic relationships through interviews with seventeen participants. The research revealed diverse relationship forms, ranging from one-to-one interactions to multi-partner dynamics, complicated by the involvement of creators, platforms, and moderators, all shaping the privacy landscape. Participants perceived AI partners as actively negotiating privacy boundaries, sometimes even encouraging personal disclosure.

👉 More information
🗞 Privacy in Human-AI Romantic Relationships: Concerns, Boundaries, and Agency
🧠 ArXiv: https://arxiv.org/abs/2601.16824

Rohail T.

I am a quantum scientist exploring the frontiers of physics and technology. My work focuses on uncovering how quantum mechanics, computing, and emerging technologies are transforming our understanding of reality. I share research-driven insights that make complex ideas in quantum science clear, engaging, and relevant to the modern world.
