The spread of false information online poses a significant challenge to modern society, and researchers are now using artificial intelligence to better understand how it happens. Raj Gaurav Maurya from Technische Universität München, Vaibhav Shukla from Friedrich-Alexander-Universität Erlangen-Nürnberg, and Raj Abhijit Dandekar, Rajat Dandekar, and Sreedath Panat from Vizuara AI Labs have developed a novel simulation that models how misinformation travels through social networks. Their work uses large language models to create realistic digital personas, each with distinct biases and beliefs, and then tracks how news articles change as they are rewritten and shared among these agents. This innovative approach allows the team to quantify the degradation of factual accuracy and identify the factors that accelerate the spread of false narratives, revealing how identity and ideology play a crucial role in amplifying misinformation, particularly in sensitive areas like politics and marketing. The research provides a powerful new tool for studying and ultimately mitigating the harmful effects of online misinformation.
LLM Networks Simulate Misinformation’s Evolution and Fidelity
This study pioneers a framework for investigating how misinformation evolves within social networks, employing large language models (LLMs) as synthetic agents that mimic human cognitive biases and ideological alignments. The researchers engineered a system in which news articles propagate along network branches of up to 30 sequential rewrites, with each node represented by a persona-conditioned LLM. Each agent was conditioned to embody diverse user characteristics, including varying levels of expertise, trust, and ideological stance, allowing realistic simulation of information sharing. To quantify factual fidelity throughout this process, the team developed a question-answering-based auditor that generates targeted questions at each step to assess information retention and identify factual drift.
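The paper's prompts and code are not reproduced here, but the propagation loop is easy to picture. Below is a minimal Python sketch of one persona-conditioned rewriting branch, assuming an OpenAI-style chat API; the persona descriptions, prompt wording, and model name are illustrative placeholders rather than the authors' setup.

```python
# Minimal sketch of a persona-conditioned rewriting branch. The personas,
# prompt wording, and model name are illustrative assumptions, not the
# authors' published configuration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical personas; the paper uses 21 personas across 10 domains.
PERSONAS = [
    "a partisan political blogger who frames every story ideologically",
    "a social media influencer who optimizes posts for engagement",
    "a domain expert who values precision and cites evidence",
]

def rewrite(article: str, persona: str) -> str:
    """One propagation step: a persona-conditioned agent rewrites the article."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system",
             "content": f"You are {persona}. Rewrite the article below as "
                        "you would share it with your audience."},
            {"role": "user", "content": article},
        ],
    )
    return response.choices[0].message.content

def propagate(article: str, personas: list[str], steps: int = 30) -> list[str]:
    """Sequentially rewrite an article along one branch of the network."""
    chain = [article]
    for step in range(steps):
        persona = personas[step % len(personas)]  # cycling personas gives a heterogeneous branch
        chain.append(rewrite(chain[-1], persona))
    return chain
```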
The study introduces a formalized misinformation index and a misinformation propagation rate, enabling precise quantification of factual degradation across both homogeneous and heterogeneous network branches. Experiments involving 21 distinct personas across 10 domains reveal that identity- and ideology-based personas significantly accelerate misinformation, particularly in politics and marketing. Conversely, personas representing expert perspectives preserve factual stability throughout propagation. Controlled simulations show that even minor initial distortions rapidly escalate into propaganda-level misinformation when propagated through diverse networks of personas. This framework leverages the dual capabilities of LLMs, functioning both as proxies for human biases and as auditors capable of tracing information fidelity, offering a powerful and empirically grounded approach to studying and mitigating misinformation in digital ecosystems.
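As a rough illustration of how these two metrics could be computed from the auditor's question-answer pairs, here is a hedged sketch; the paper's exact formulas may differ, so treat the fraction-of-wrong-answers index and the per-step rate below as plausible stand-ins, not the authors' definitions.

```python
# Hedged sketch of the two metrics; the paper's exact formulas may differ.
def misinformation_index(expected: list[str], recovered: list[str]) -> float:
    """Fraction of the auditor's questions whose answers can no longer be
    recovered correctly from the rewritten text (0 = fully faithful)."""
    wrong = sum(e.strip().lower() != r.strip().lower()
                for e, r in zip(expected, recovered))
    return wrong / len(expected)

def propagation_rate(indices: list[float]) -> float:
    """Average per-step increase of the misinformation index along a branch."""
    steps = len(indices) - 1
    return (indices[-1] - indices[0]) / steps if steps else 0.0
```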
LLM Personas Amplify Misinformation Spread Online
This research demonstrates a novel framework for simulating and analyzing the spread of misinformation through social networks, utilizing large language models (LLMs) as both agents and auditors. Scientists constructed a system where LLMs, each embodying distinct personas with varying biases and ideologies, rewrite news articles as they propagate through a network. A separate “auditor” LLM then assesses the factual fidelity of these rewritten articles at each step, tracking how information degrades over time. Experiments reveal that personas representing individuals with strong political biases, or those acting as social media influencers, significantly accelerate the spread of misinformation, particularly in politics and marketing.
Conversely, personas designed to represent expert sources preserve factual stability more effectively. Researchers quantified this degradation with a “Misinformation Index,” which measures the difference between the auditor’s expected answers and those recoverable from the rewritten text. The team also calculated a “Misinformation Propagation Rate” to assess how quickly inaccuracies spread across the network. Controlled simulations with both homogeneous and heterogeneous branches reveal a clear pattern: early distortions rapidly escalate into propaganda-level misinformation when exposed to diverse persona interactions. The auditor’s assessment provides claim-level tracking of this drift, offering transparency and traceability of factual changes. This work establishes a powerful, empirically grounded framework for studying, simulating, and ultimately mitigating the spread of misinformation in digital ecosystems.
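A question-answering auditor of this kind can be sketched in a few lines. The following is an assumption-laden illustration, not the paper's implementation: an LLM generates factual questions from the seed article, answers them from both the original and a rewrite, and the resulting answer pairs feed a misinformation-index calculation like the one sketched earlier.

```python
# Illustrative QA-auditor loop; prompt wording and model are assumptions.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Single-turn LLM call used by the auditor."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def audit(original: str, rewritten: str, n_questions: int = 5) -> list[tuple[str, str, str]]:
    """Generate factual questions from the original article, then compare the
    answers recoverable from the original with those from the rewrite."""
    questions = ask(
        f"Write {n_questions} short factual questions answerable from this "
        f"article, one per line:\n\n{original}"
    ).splitlines()[:n_questions]
    records = []
    for q in questions:
        expected = ask(f"Answer briefly using only this text.\n\n{original}\n\nQ: {q}")
        recovered = ask(f"Answer briefly using only this text.\n\n{rewritten}\n\nQ: {q}")
        records.append((q, expected, recovered))
    return records
```

Checking answers question by question is what gives the auditor its claim-level traceability: each divergent answer points to a specific fact that drifted during rewriting.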
LLMs Accelerate and Audit Misinformation Spread
This research demonstrates how large language models can both accelerate and audit the spread of misinformation within social networks. Scientists developed a novel framework to rigorously quantify factual changes as news articles propagate through networks of LLMs conditioned with distinct personas. By simulating information flow across multiple branches, each containing LLM agents representing different user biases and ideologies, the team observed how factual accuracy degrades with each rewrite of a news article. Experiments involving 21 personas across 10 domains revealed that identity- and ideology-based agents significantly accelerate misinformation, particularly in sensitive areas like politics and marketing.
Conversely, personas designed to represent expert perspectives helped preserve factual stability. The researchers acknowledge that the simulation relies on a limited set of personas and domains, and that real-world misinformation dynamics are far more complex. Future work will focus on expanding the range of personas and domains, and exploring methods for mitigating misinformation spread. This framework offers an empirically grounded approach for studying, simulating, and ultimately addressing the challenges of misinformation in digital ecosystems, providing valuable insights into the mechanisms driving its diffusion and potential strategies for intervention.
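To make the whole pipeline concrete, the earlier sketches might be combined as follows to compare a homogeneous (expert-only) branch with a heterogeneous one; again, all names, step counts, and personas are illustrative assumptions rather than the authors' configuration.

```python
# Combining the sketches above: compare a homogeneous (expert-only) branch
# with a heterogeneous one. All parameters are illustrative.
article = "..."  # seed news article text

def run_branch(seed: str, personas: list[str], steps: int = 10) -> list[float]:
    """Misinformation-index trajectory along one branch."""
    chain = propagate(seed, personas, steps)
    indices = []
    for rewritten in chain[1:]:
        records = audit(seed, rewritten)
        expected = [e for _, e, _ in records]
        recovered = [r for _, _, r in records]
        indices.append(misinformation_index(expected, recovered))
    return indices

expert_only = run_branch(article, [PERSONAS[2]])  # homogeneous branch
mixed = run_branch(article, PERSONAS)             # heterogeneous branch
print(propagation_rate(expert_only), propagation_rate(mixed))
```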
👉 More information
🗞 Simulating Misinformation Propagation in Social Networks using Large Language Models
🧠 ArXiv: https://arxiv.org/abs/2511.10384
