New Model Explains How Virtual Agents Can Subtly Influence Human Behaviour

Researchers are increasingly focused on understanding how generative social agents (GSAs) can effectively communicate with people, yet a comprehensive theoretical framework for analysing these interactions remains absent. To address this gap, Stephan Vonschallen (Institute of Business Information Technology, Zurich University of Applied Sciences; Institute for Information Systems, University of Applied Sciences and Arts Northwestern Switzerland; and Center for Cognitive Interaction Technology, Bielefeld University), Friederike Eyssel (Center for Cognitive Interaction Technology, Bielefeld University), and Theresa Schmiedel (Institute of Business Information Technology, Zurich University of Applied Sciences) propose the Knowledge-based Persuasion Model (KPM). The model posits that a GSA’s persuasive capacity stems from its knowledge of itself, the user, and the surrounding context, and that this knowledge ultimately shapes human attitudes and behaviours. The KPM offers a structured approach to GSA interaction research, promoting the development of responsible agents designed to motivate and support user wellbeing, with potential applications spanning healthcare and education.

GSAs, powered by artificial intelligence, communicate with users in increasingly natural and adaptive ways, functioning as chatbots, avatars, or social robots. While offering opportunities for positive impact, their persuasive capabilities raise concerns about potential manipulation and misinformation. The KPM addresses a critical gap by examining autonomously generated persuasive behaviours, moving beyond studies of pre-defined agent messages. The KPM proposes that a GSA’s persuasive power stems from its knowledge base, encompassing information about itself, the user, and the surrounding context, directly driving the agent’s behaviour and shaping human responses. This model synthesises research across psychology, human-agent interaction, and information systems, offering a structured approach to studying GSA interactions and fostering the development of agents that motivate rather than manipulate. This represents a shift from researching rule-based agents to understanding how persuasion emerges from an agent’s available knowledge, building upon established psychological theories of persuasion, such as the Elaboration Likelihood Model and the Heuristic-Systematic Model, but extending them to account for the unique characteristics of GSAs. Unlike previous research utilising methods like the Wizard-of-Oz approach, this work investigates how an agent’s internal knowledge informs its behaviour. Researchers define agent knowledge as structured information internalised by the GSA, used to generate responses, encompassing both explicit inputs and implicit patterns learned during training. This understanding is crucial for designing responsible GSAs that align with human values and ethical standards, encouraging the strategic modification of an agent’s knowledge base to promote persuasion that supports user goals. 
The implications are significant for application domains such as healthcare and education, where GSAs are increasingly deployed as teaching aids or motivational assistants, with the aim of ensuring these technologies enhance, rather than compromise, human wellbeing. Central to this work is the assessment of how well large language models (LLMs) can express and maintain consistent personality traits, a crucial component of the KPM’s self-knowledge aspect. Evaluations utilising the TRAIT personality test set revealed that LLMs demonstrate varying degrees of personality consistency, with scores ranging across multiple established psychological dimensions. Analyses focused on the ‘Big Five’ personality traits, including Neuroticism, to quantify the extent to which LLMs exhibit stable and discernible personality profiles. Further investigation into personality emulation involved assessing the ability of LLMs to align with specified character profiles, with results indicating models could successfully adopt distinct personas, though consistency varied considerably. Measurements of linguistic alignment, a metric of how closely an LLM’s language reflects its assigned personality, showed considerable fluctuation across interactions. Character-LLM, a trainable agent designed for role-playing, achieved notable performance in maintaining character consistency, demonstrating the potential for fine-tuning to enhance personality expression. A study published in Scientific Reports documented that evaluations of LLM-emulated personalities yielded scores comparable to those of human subjects, suggesting a capacity for nuanced linguistic expression, but also highlighted the challenges of achieving consistent personality portrayal over extended interactions. A detailed examination of agent knowledge underpins this work, defining it as the structured information a GSA internalises to generate behavioural responses.
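The consistency evaluations described above boil down to administering a trait questionnaire to the same LLM persona repeatedly and checking how much its Big Five scores drift. The following sketch illustrates that idea in Python; the scores and the 0.25 stability threshold are purely illustrative assumptions, not figures from the paper or the TRAIT benchmark.

```python
from statistics import mean, pstdev

# Hypothetical repeated administrations of a Big Five questionnaire to one
# LLM persona: each run yields a 1-5 score per trait. The numbers are
# illustrative, not results from the paper.
runs = [
    {"Openness": 4.2, "Conscientiousness": 3.8, "Extraversion": 2.9,
     "Agreeableness": 4.0, "Neuroticism": 2.1},
    {"Openness": 4.0, "Conscientiousness": 3.9, "Extraversion": 3.3,
     "Agreeableness": 3.8, "Neuroticism": 2.4},
    {"Openness": 4.4, "Conscientiousness": 3.6, "Extraversion": 2.6,
     "Agreeableness": 4.1, "Neuroticism": 2.0},
]

def trait_profile(runs):
    """Mean score and dispersion per trait; low dispersion = stable persona."""
    return {t: (round(mean(r[t] for r in runs), 2),
                round(pstdev(r[t] for r in runs), 2)) for t in runs[0]}

profile = trait_profile(runs)

# A simple stability check: flag traits whose scores fluctuate noticeably
# across runs (threshold chosen arbitrarily for illustration).
unstable = [t for t, (_, sd) in profile.items() if sd > 0.25]
```

With these illustrative numbers, only Extraversion exceeds the dispersion threshold, mirroring the paper's observation that consistency varies considerably by trait and interaction.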
This concept extends traditional knowledge-based systems, which typically rely on explicit symbolic knowledge like rules and facts, to encompass both situated knowledge, information derived from immediate prompts, and embedded knowledge acquired during training. Crucially, the research acknowledges the stochastic nature of GSA behaviour, meaning outputs are not entirely predictable, yet can be influenced through data curation or prompt engineering. This necessitated a shift in focus from studying responses to pre-defined agent behaviours to understanding how persuasive behaviours emerge from available knowledge. Within this framing, the Elaboration Likelihood Model distinguishes between a central route, involving thoughtful consideration of arguments, and a peripheral route, relying on superficial cues. Similarly, the HSM distinguishes between systematic processing, requiring analytical evaluation, and heuristic processing, utilising mental shortcuts. By integrating these models, the research aims to identify patterns linking agent knowledge to persuasive behaviours and, ultimately, desirable outcomes for human users. The persistent challenge of building genuinely persuasive artificial intelligence isn’t about clever algorithms, but understanding why humans are swayed. For years, research has focused on mimicking persuasive techniques, such as foot-in-the-door or reciprocity, assuming their deployment through an agent would yield results. This new framework shifts the focus inward, arguing that an agent’s understanding of itself, the user, and the context is the crucial foundation for effective, and ethical, influence. This emphasis on knowledge acknowledges the limitations of current systems, which often rely on broad generalisations and lack the nuanced understanding needed to tailor interactions effectively. Imagine a healthcare app that understands anxieties about medication side effects and addresses them with empathy, or an educational tutor that adapts its approach to a user’s learning style and emotional state.
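In practice, the KPM's situated knowledge is what a designer can inject per interaction, typically via prompt engineering, while embedded knowledge lives in the model weights and cannot be edited at runtime. The sketch below illustrates that distinction for the healthcare scenario above; the function name, field names, and prompt template are hypothetical, not an API defined by the paper.

```python
# A minimal sketch of the KPM's three knowledge sources (self, user, context)
# feeding a GSA's behaviour. Everything here is situated knowledge: it is
# supplied at prompt time. Embedded knowledge, by contrast, was learned
# during training and cannot be injected through this interface.

def build_system_prompt(self_knowledge, user_knowledge, context_knowledge):
    """Combine the three knowledge sources into one instruction string
    that conditions the agent's generated behaviour."""
    return (
        f"You are {self_knowledge['persona']}, and your role is to "
        f"{self_knowledge['role']}. "
        f"The user {user_knowledge['goal']} and is concerned about "
        f"{user_knowledge['concern']}. "
        f"Setting: {context_knowledge['setting']}. "
        "Persuade only in ways that support the user's own goals."
    )

prompt = build_system_prompt(
    {"persona": "a supportive health coach",
     "role": "encourage medication adherence"},
    {"goal": "wants to take medication regularly",
     "concern": "side effects"},
    {"setting": "a mobile health app check-in"},
)
```

The closing instruction in the template reflects the paper's emphasis on persuasion that motivates rather than manipulates; in a real system that constraint would need enforcement beyond a single prompt line.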
However, building this “knowledge” is immensely difficult, as acquiring and representing contextual awareness, user preferences, and even self-awareness in an AI remains a substantial hurdle. Furthermore, the model rightly stresses responsible design, but defining and enforcing “social norms and ethical standards” in AI is notoriously difficult. The next step isn’t simply building smarter agents, but developing robust methods for verifying their understanding and ensuring their persuasive efforts genuinely benefit users, rather than exploit vulnerabilities; the field needs to move beyond demonstrating that AI can persuade, and focus on determining when and how it should.

👉 More information
🗞 Understanding Persuasive Interactions between Generative Social Agents and Humans: The Knowledge-based Persuasion Model (KPM)
🧠 arXiv: https://arxiv.org/abs/2602.11483

Rohail T.

I am a quantum scientist exploring the frontiers of physics and technology. My work focuses on uncovering how quantum mechanics, computing, and emerging technologies are transforming our understanding of reality. I share research-driven insights that make complex ideas in quantum science clear, engaging, and relevant to the modern world.
